Agreement on European AI regulations
The European Union has been somewhat late in imposing rules on tech companies: only in the coming months will Google and Facebook have to comply with strict rules on privacy and the removal of hate speech. But when it comes to regulating AI (artificial intelligence), the EU is a global leader. Partly spurred on by the rapid rise of ChatGPT, two European Parliament committees have agreed on a proposal to regulate AI.
The AI Act (Artificial Intelligence Act) divides AI systems into risk groups. At the top are systems that pose such a high risk that they are banned outright. These include systems that assess people's behavior and use real-time biometric identification, such as facial recognition. The European Parliament also wants to ban emotion recognition systems, including in the workplace and in education. Finally, the law prohibits systems that predict crime and fraud on the basis of profiling (as happened in the Dutch childcare benefits scandal).
The second risk group consists of AI systems labeled 'high risk'. These are applications that can affect not only your health but also your rights as a human being: systems the police can use to track down suspects, for example, or systems that scan applicants' CVs. Such 'high risk' systems must meet strict requirements: they must not discriminate, they must be transparent about how they work, and regulators must be able to look under the hood. A risk analysis becomes mandatory.
Seal of approval for AI
In addition, work is being done on a quality mark for AI. 'High risk' systems will only be allowed on the European market if they meet certain requirements, including energy efficiency. Because developments are moving so quickly, the law has been made flexible enough for rules to be adjusted or added later.
A special position is reserved for the so-called 'foundation models', such as ChatGPT and the image generator Midjourney. The rules for these systems will be almost as strict as those for 'high risk' applications. ChatGPT does not fall into the 'high risk' category by default because not every system that generates text or images automatically poses additional risk. What exactly the law will mean for ChatGPT is still unclear, because OpenAI, the company behind it, is not at all transparent about what it is doing. From next year there will therefore be a transparency obligation for tech companies.
The proposal that has now been put forward is the first step in a long process. A plenary vote in the European Parliament follows next month, after which negotiations begin with the Council of the European Union, in which all member states are represented. The law is expected to be finalized early next year, and implementation will then take another two years. It is nevertheless an important proposal: for the first time, legislation has been drafted that will regulate and limit AI, and that is exactly what many scientists and business leaders have been asking for.
Meanwhile, the Council of Europe, a separate body from the EU, is working on its own legislation, grounded in human rights, to better protect citizens against discrimination by AI systems. With these laws, Europe is at the forefront of regulating AI.