The European Parliament has adopted a regulation on artificial intelligence (AI), known as the Artificial Intelligence Act (AI Act). The new rules establish a legal framework that supports the development of AI while ensuring respect for human rights and minimizing the risk of discrimination.
Artificial intelligence is shaping the direction of business transformation. Companies face the challenge of adapting to and leveraging the potential of this groundbreaking technology while keeping pace with digitalization and automation trends and ensuring security.
The legislative proposal for regulating AI was submitted by the European Commission in April 2021. In December 2023, the European Parliament, the Council, and the Commission reached a compromise on the main principles of the document. On 13 March 2024, the European Parliament adopted the AI Act. The new provisions introduce a category of prohibited AI practices, including the use of subliminal techniques and solutions that discriminate against specific groups. It will also be prohibited to use AI systems for social scoring, i.e. evaluating citizens on the basis of their behaviour or lifestyle.
The AI Act is the world’s first comprehensive legal act regulating the field of artificial intelligence. Its goal is to establish a robust legal framework that ensures the safe and ethical use of AI systems and protects Europeans from the negative impacts of AI development. Ultimately, it aims to build widespread trust in a technology that holds enormous potential for both individuals and organizations.
What should you know about the principles of the AI Act?
The AI Act introduces a number of important rules on artificial intelligence. Here are the key points you should know:
1. Timeline for implementation:
- Entry into force: The Regulation will enter into force 20 days after its publication in the Official Journal of the European Union.
- Prohibited practices: The ban on unacceptable-risk AI practices will take effect six months after entry into force.
- General-purpose systems: Rules governing general-purpose AI models will apply after 12 months.
- Most obligations: The bulk of the Regulation, including most requirements for high-risk AI systems, will apply 24 months after entry into force, in 2026.
- Full applicability: The longest transition period, 36 months, applies to certain high-risk systems, so the Regulation will be fully applicable in 2027.
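Because every later deadline is counted forward from the date of entry into force, the schedule above boils down to simple date arithmetic. The following Python sketch is purely illustrative: the publication date used here is an assumption (the real milestones depend on the actual date of publication in the Official Journal), and the month offsets mirror the transition periods listed above.

```python
from datetime import date, timedelta

# Assumed publication date, for illustration only; real milestones count
# from the actual date of publication in the Official Journal.
PUBLICATION = date(2024, 7, 12)

# Entry into force: 20 days after publication.
ENTRY_INTO_FORCE = PUBLICATION + timedelta(days=20)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day-of-month clamped for simplicity)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, min(d.day, 28))

# Transition periods described above, counted in months from entry into force.
MILESTONES = {
    "Ban on unacceptable-risk (prohibited) practices": 6,
    "Rules for general-purpose AI models": 12,
    "Most obligations, incl. most high-risk requirements": 24,
    "Extended transition for certain high-risk systems": 36,
}

for label, months in MILESTONES.items():
    print(f"{label}: from {add_months(ENTRY_INTO_FORCE, months).isoformat()}")
```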
2. Definition scope of AI:
- The definition of artificial intelligence in the EU legal act is broad, covering a wide range of technologies and systems, which means the new obligations will reach many areas of business operations.
3. Risk classification:
- Unacceptable risk: AI systems that are prohibited, such as those using subliminal techniques or discriminating against groups of people.
- High risk: AI systems that can be used but are subject to strict limitations. Restrictions apply to both users and providers of these systems.
- Limited and minimal risk: AI systems with less stringent regulatory requirements.
4. Responsibilities of providers and users:
- Providers: Entities that develop AI systems, including organizations developing them for their own use, must comply with strict norms and standards.
- Users: Organizations that use AI systems must also meet specific regulatory requirements.
5. Role of organizations:
- It is important to understand that an organization may act as both a user and a provider of AI systems, which imposes dual compliance obligations under the new rules (see the sketch below).
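To make the tiered approach easier to picture, the short sketch below models the risk categories and the broad consequence the list above attaches to each, together with the fact that one organization can hold both the provider and the user role. It is an illustrative data structure only, not a legal classification tool; the example organization and system names are hypothetical, and any real assessment must follow the criteria set out in the Regulation itself.

```python
from enum import Enum
from dataclasses import dataclass, field

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed, but under strict limitations
    LIMITED = "limited"            # less stringent requirements
    MINIMAL = "minimal"            # minimal requirements

# Broad regulatory consequence per tier, as summarised in the list above.
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "Prohibited, e.g. subliminal techniques or discrimination against groups.",
    RiskTier.HIGH: "Permitted subject to strict obligations on providers and users.",
    RiskTier.LIMITED: "Subject to less stringent regulatory requirements.",
    RiskTier.MINIMAL: "Subject to minimal regulatory requirements.",
}

@dataclass
class Organization:
    """An organization can be a provider, a user, or both for different AI systems."""
    name: str
    provides: list[str] = field(default_factory=list)  # AI systems it develops
    uses: list[str] = field(default_factory=list)      # AI systems it deploys

    @property
    def dual_role(self) -> bool:
        # Both roles at once means dual compliance obligations.
        return bool(self.provides) and bool(self.uses)

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: {CONSEQUENCES[tier]}")
    org = Organization("ExampleCo", provides=["in-house scoring model"], uses=["customer chatbot"])
    print("Dual compliance obligations:", org.dual_role)
```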
Key actions for organizations in the medium and long term:
1. Forecasting the impact of new regulations on organizational operations:
- Regulatory analysis: Assessing how new regulations will impact company operations.
- Transparency: Maintaining transparency in operations and building customer trust.
- Industry dialogue: Engaging in open dialogue with other industry stakeholders.
- Adaptation strategy: Adapting the company’s strategy to new legal requirements to better anticipate future changes.
2. Expanding knowledge on ethical use of artificial intelligence:
- AI training: Planning and conducting training sessions for employees on the ethical use of AI.
- Ethics and compliance team: Establishing a dedicated team responsible for ensuring compliance with ethical standards and legislative requirements.
3. Utilizing trusted AI systems from the early stages of project implementation:
- Trusted technologies: Implementing proven and trusted AI systems from the outset of a project.
- Regular audits: Conducting regular audits and updates of the technology used to ensure compliance with the latest standards.
- Ethical priority: Prioritizing compliance with ethical principles when developing and supporting innovative AI-based solutions.
Undertaking these actions will help organizations not only adapt to new regulations but also strengthen their market position through an ethical and transparent approach to AI technology.