The EU Parliament passes the AI Act: balancing innovation and protection with a risk-based approach

Today, March 13, 2024, the European Parliament approved the Artificial Intelligence Act (AI Act), the world’s first major set of regulatory rules to govern artificial intelligence.

The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

The use of AI is proliferating globally across all sectors. While most AI systems pose limited or no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.

For example, it is often impossible to determine why an AI system has made a particular decision or prediction. This makes it difficult to assess whether someone has been unfairly disadvantaged, for instance in a hiring decision or an application for a public benefit scheme. As a result, policymakers around the world are starting to propose legislation to manage these risks.

Expected to become the global gold standard for AI regulation, the EU AI Act sets out a risk-based approach, whereby the obligations imposed on a system are proportionate to the level of risk it poses. Under this approach, AI applications are regulated only as strictly as necessary to address the specific level of risk involved. The Regulatory Framework defines four levels of risk for AI systems: unacceptable risk, high risk, limited risk, and minimal or no risk.

Atomium-EISMD