EU AI Act: 7 key principles
Recently, the Council of the EU adopted the EU AI Act, the world’s first comprehensive AI law.
The main goal of the new legislation is to ensure proper risk assessment and the adoption of safe, reliable AI systems in the European Union market, while respecting citizens’ fundamental rights and promoting innovation in the field. The law also categorizes AI systems according to the level of risk they pose:
- Limited-risk AI systems are subject to light transparency obligations.
- High-risk AI systems will be allowed on the EU market only after a high level of scrutiny and will be subject to strict obligations.
- AI systems that involve cognitive behavioral manipulation or social scoring are strictly prohibited due to the unacceptable risk they pose.
- AI systems used for predictive policing, and biometric categorization systems that classify people by sensitive characteristics such as gender, religion, or sexual orientation, are likewise prohibited.
To enforce the above, the following administrative bodies are established:
- An AI Office, primarily responsible for enforcing the common rules across the EU.
- A scientific panel of independent experts to support enforcement activities.
- An AI Board, made up of Member State representatives, responsible for advising the Commission and the Member States on the effective implementation of the rules.
- An advisory forum of stakeholders to provide technical expertise to the Commission and the AI Board.
The new legislation also provides penalties for violations. Companies engaging in prohibited AI practices can be fined up to €35 million or 7% of global annual turnover, whichever is higher. Other violations carry fines of up to €15 million or 3% of global annual turnover, and supplying misleading information carries fines of up to €7.5 million or 1% of global annual turnover, in each case whichever is higher.
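The "whichever is higher" rule above amounts to taking the maximum of a fixed cap and a share of global annual turnover. A minimal sketch, using the three tiers stated above and a purely hypothetical turnover figure:

```python
def max_fine(cap_eur: int, pct: float, turnover_eur: int) -> float:
    """Penalty ceiling: the higher of a fixed cap or a share of global annual turnover."""
    return max(cap_eur, pct * turnover_eur)

# The three penalty tiers described in the Act: (fixed cap in EUR, turnover share).
TIERS = {
    "prohibited practices": (35_000_000, 0.07),
    "other violations": (15_000_000, 0.03),
    "misleading information": (7_500_000, 0.01),
}

# Hypothetical company with €1 billion in global annual turnover.
turnover = 1_000_000_000
for violation, (cap, pct) in TIERS.items():
    print(f"{violation}: up to €{max_fine(cap, pct, turnover):,.0f}")
```

For a large company the turnover-based figure dominates (7% of €1 billion is €70 million, above the €35 million cap), whereas for a smaller company the fixed cap sets the ceiling.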
7 key principles
For the reliable and correct use of AI systems, 7 basic principles have been defined:
- Fairness and bias mitigation. AI systems should not discriminate against, or reinforce biases about, particular social groups.
- Transparency. The decisions and actions of AI systems should be fully explainable.
- Accountability. Mechanisms should be put in place to assign responsibility in the event of failure or incorrect decisions by AI systems.
- Privacy. AI systems must process and share citizens’ information and data only in ways that respect data protection regulations.
- Security. Systems and mechanisms must be secure and reliable, able to withstand errors and risks.
- Social and environmental well-being. The social and environmental impact of AI must be considered.
- Human intervention. AI systems must allow for human oversight and safeguard fundamental rights, while respecting human autonomy and dignity.