The European Union's long-awaited and much-debated AI Act, formally adopted by the European Parliament on March 13, 2024, addresses the rapid advancement and integration of artificial intelligence technologies across sectors, including the entire energy value chain, where organizations are working to develop responsible AI solutions.
Acknowledging both AI's transformative potential and its risks and ethical implications, this comprehensive new framework from the European Commission seeks to foster innovation and competitiveness while also protecting individual rights and societal values.1
ETHICAL AI PRINCIPLES
The ethical principles that guide the development and use of AI are central to the new Act. These principles emphasize:
- Respect for human autonomy: AI systems should support human decision-making processes, not undermine them.
- Prevention of harm: AI applications must prioritize safety and ensure that they do not harm people physically or psychologically.
- Fairness: The Act calls for measures to prevent discrimination and ensure equity in AI outcomes.
- Transparency and accountability: AI systems should be transparent and explainable, allowing for accountability in their operation and outcomes.
- Privacy and data governance: Protecting personal data and privacy is underscored, aligning with the General Data Protection Regulation (GDPR).
These ethical principles are woven throughout the regulation, influencing its rules on transparency, data management, and accountability mechanisms.
RISK CATEGORIZATION
A distinctive feature of the AI Act is its risk-based approach, categorizing AI systems according to the level of threat they may pose to rights and safety:
- Unacceptable risk: AI practices that manipulate human behavior, exploit vulnerable individuals, or enable social scoring are banned.
- High risk: AI applications in critical sectors (e.g. energy, financial services, healthcare) must comply with strict requirements, including risk assessments, data quality controls, and transparency obligations.
- Limited risk: AI systems like chatbots should disclose their non-human nature to users.
- Minimal or no risk: Many AI applications fall into this category, where the Act imposes minimal obligations, recognizing their low threat level.
This framework allows for a nuanced regulatory approach, tailoring requirements to the potential harm an AI system might cause; the sketch below illustrates the tiered structure.
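To make the tiered structure concrete, the following minimal Python sketch models the four risk categories and the obligations each one triggers. The tier names follow the Act, but the `RiskTier` enum, the `OBLIGATIONS` mapping, and the obligation wording are illustrative assumptions; determining a system's actual tier requires legal analysis of the Act's annexes, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict compliance requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # few or no obligations

# Illustrative mapping only; real classification requires legal analysis
# of the prohibited-practices list and the high-risk annex.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
    RiskTier.HIGH: [
        "risk assessment and management",
        "data quality controls",
        "technical documentation",
        "human oversight",
        "transparency obligations",
    ],
    RiskTier.LIMITED: ["disclose AI use to users"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (illustrative) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for item in obligations_for(RiskTier.HIGH):
        print(f"high-risk obligation: {item}")
```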
Special mention is made of providers of General Purpose AI (GPAI) models, who are required to put in place documentation and policies addressing systemic risk, which must also be made available to the EU AI Office and national competent authorities.2
RISK MITIGATION
High-risk AI systems. Due to their potential impact on individuals’ rights and safety, high-risk AI systems require particularly stringent oversight. Organizations should focus on:
- Risk assessment and management: Conduct thorough risk assessments to identify and evaluate risks associated with AI systems and develop a risk management plan detailing mitigation strategies and contingency plans.
- Data governance: Implement robust data governance practices to ensure the quality, accuracy, and integrity of data used by AI systems; establish data collection, storage, processing, and sharing procedures in compliance with privacy regulations like GDPR.
- Transparency and documentation: Maintain comprehensive documentation for AI systems, including their design, development, deployment processes, and decision-making mechanisms; the documentation should be accessible to relevant stakeholders to ensure transparency.
- Ethical and legal compliance: Develop AI systems per established ethical guidelines and legal requirements; ensure non-discrimination, fairness, and the protection of fundamental rights.
- Human oversight: Ensure meaningful human oversight throughout the AI system's lifecycle; set up processes for human intervention in decision-making and mechanisms for users to challenge AI decisions (a minimal illustration follows this list).
- Security and reliability: Implement strong cybersecurity measures to protect AI systems from unauthorized access and attacks; regularly test and monitor AI systems for any vulnerabilities or failures.
- Auditability: Facilitate internal and external audits of AI systems to assess compliance with regulatory requirements and ethical standards; authorized auditors should have access to algorithms, data, and decision-making processes.
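As a concrete illustration of the human oversight and auditability points above, the hypothetical sketch below logs every model recommendation to an audit trail and routes low-confidence decisions to a human reviewer. The `Decision` structure, the `REVIEW_THRESHOLD` value, and the logger setup are assumptions made for illustration; the Act mandates oversight and auditability but prescribes no particular mechanism.

```python
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")  # hypothetical audit channel

@dataclass
class Decision:
    """A model recommendation plus the metadata an auditor would need."""
    subject_id: str
    recommendation: str
    confidence: float  # model's self-reported confidence, 0.0-1.0
    timestamp: str

REVIEW_THRESHOLD = 0.90  # illustrative value, not mandated by the Act

def decide_with_oversight(subject_id: str, recommendation: str,
                          confidence: float) -> Decision:
    """Log every decision; route low-confidence ones to a human reviewer."""
    decision = Decision(
        subject_id=subject_id,
        recommendation=recommendation,
        confidence=confidence,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # A durable record of each decision supports auditability.
    audit_log.info("decision=%s", decision)
    if confidence < REVIEW_THRESHOLD:
        # Placeholder for a real review queue (ticketing system, etc.).
        audit_log.info("routing %s to human review", subject_id)
    return decision

decide_with_oversight("grid-asset-042", "defer maintenance", 0.72)
```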
Limited risk AI systems. While posing less of a threat, limited risk AI systems still require certain safeguards, primarily focused on transparency and user information:
- Transparency to users: Disclose the use of AI, particularly in cases where it might not be apparent (e.g. chatbots); users should be informed that they are interacting with an AI system (a minimal sketch follows this list).
- User information and consent: Provide users with information about the AI system's capabilities, limitations, and the nature of its decision-making processes; where applicable, obtain user consent in accordance with privacy laws.
- Quality and safety standards: Even if the AI system poses limited risk, maintaining high quality and safety standards is essential; the system should be regularly reviewed and updated to ensure it functions as intended without posing unforeseen risks.
- Feedback mechanisms: Implement mechanisms for users to provide feedback on the AI system's performance and any issues encountered; use this feedback to make necessary adjustments and improvements.
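For the transparency obligation on limited risk systems, disclosure can be as simple as prefacing the first interaction with a notice. The hypothetical wrapper below shows one way to do this; the `DisclosingChatbot` class, the disclosure wording, and the stubbed model call are all illustrative assumptions.

```python
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, "
    "not a human agent."
)  # illustrative wording; the Act requires disclosure, not this exact text

class DisclosingChatbot:
    """Wraps a hypothetical model so the first reply discloses AI use."""

    def __init__(self) -> None:
        self._disclosed = False

    def respond(self, user_message: str) -> str:
        # Stub standing in for a real model call.
        answer = f"(model answer to: {user_message!r})"
        if not self._disclosed:
            self._disclosed = True
            return f"{AI_DISCLOSURE}\n{answer}"
        return answer

bot = DisclosingChatbot()
print(bot.respond("When is my next bill due?"))
```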
Fostering a culture of ethical AI use within the organization is crucial across both categories. This includes training employees on AI ethics and legal requirements and establishing cross-functional teams to oversee AI governance, mitigate risks, and leverage AI's potential responsibly and ethically.
Figure 1: The risk-based approach of the EU AI Act
THE AI ACT: AN INDUSTRY PERSPECTIVE
Adopting a risk-based approach across a broad spectrum of applications and industries, the Act will inevitably impact the energy industry, particularly with respect to the 'high-risk' provisions outlined above. Specific use cases include:
- Energy infrastructure maintenance and management systems: These systems ensure the stable and efficient operation of power grids and fuel pipelines. Application of AI in these areas is considered high-risk since these systems are crucial to a region's economy and security.
- Pipeline integrity monitoring: AI systems detecting equipment anomalies or external hazards to a pipeline are critical to the prevention, detection, and mitigation of catastrophic and costly commodity release events (a minimal detection sketch follows this list).
- Distributed Energy Resource Management Systems (DERMS): AI-driven algorithms that control and monitor resources participating in virtual power plants (VPPs), micro-grids, and smart grids should include sufficient transparency and oversight to ensure safety, fairness, and privacy.
- Energy Trading and Risk Management: As in financial services, AI algorithms that advise or make decisions that could impact markets and financial outcomes of individuals or businesses require appropriate governance.
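To ground the pipeline integrity example, the sketch below flags pressure readings that deviate sharply from a rolling baseline, using a simple z-score test as a stand-in for whatever detection model an operator actually deploys. The synthetic data, `WINDOW` size, and `Z_THRESHOLD` are assumptions; in a high-risk deployment, flagged readings would feed a human-reviewed alerting process rather than trigger autonomous action.

```python
from collections import deque
from statistics import mean, stdev

WINDOW = 20        # rolling window of recent readings (assumed)
Z_THRESHOLD = 3.0  # flag readings more than 3 sigma from the baseline

def detect_anomalies(readings: list[float]) -> list[int]:
    """Return indices of readings far outside the rolling baseline."""
    window: deque[float] = deque(maxlen=WINDOW)
    flagged = []
    for i, value in enumerate(readings):
        if len(window) == WINDOW:
            mu, sigma = mean(window), stdev(window)
            if sigma > 0 and abs(value - mu) / sigma > Z_THRESHOLD:
                # In a high-risk deployment this raises an alert for
                # human review; it does not act autonomously.
                flagged.append(i)
        window.append(value)
    return flagged

# Synthetic pressure trace with an injected spike at index 30.
pressures = [50.0 + 0.1 * (i % 5) for i in range(40)]
pressures[30] = 58.0
print(detect_anomalies(pressures))  # -> [30]
```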
Provisions on safety, transparency, data governance, human oversight, and risk management will be particularly relevant to these and other AI applications within the energy industry. Organizations in the sector must conduct thorough assessments to classify their AI systems according to the risk framework and comply with the corresponding regulatory requirements.
While the Act aims to be technology-neutral and flexible across different sectors, its impact on the energy industry highlights the importance of aligning AI applications with safety, security, and ethical standards to protect consumers and ensure energy reliability. As the Act progresses towards implementation, further guidance and standards specific to high-risk applications, including those in the energy industry, are expected to clarify compliance expectations.
IMPLEMENTATION AND NEXT STEPS
Incorporating the AI Act into national law across EU member states requires several steps:
Adoption and Entry into Force. Following the EU Parliament vote, the Council of the European Union is expected to adopt the AI Act by the end of June 2024, after which it will be published in the EU Official Journal. The Act will enter into force 20 days after publication, at which point the transposition phase begins.
Transposition. Member states then have a period, generally two years, to transpose the regulation into national law. At the start of this phase (mid-2024), prohibitions on AI systems posing unacceptable risks will come into effect; by the end of the period (mid-2026), member states must have transposed the EU AI Act into local regulation.
Figure 2: The EU AI Act – selected milestones
CLOSING THOUGHTS
While the EU AI Act has been hailed for its proactive stance on AI governance, concerns have been voiced about its potential impact on innovation. Critics argue that the Act's prescriptive regulations may stifle technological advancements and burden startups with compliance costs, hindering the bloc's competitiveness in the global AI race. There is apprehension that, by prioritizing regulation, the EU may fall behind in nurturing an environment conducive to cutting-edge AI research and development.
In summary, the EU AI Act represents a bold step towards ethical and regulated AI deployment. By establishing clear rules and principles, it aims to protect citizens and uphold democratic values. However, as noted, the balance between regulation and innovation remains a contentious issue. As the Act moves towards full implementation over the next few years, its impact on the AI landscape in Europe – and beyond – will be closely watched.
REFERENCES