The European Council has recently approved the AI Act, marking a crucial turning point in the regulation of artificial intelligence (AI) within the European Union. This legislative act represents a significant step towards establishing a clear and homogeneous legal framework for AI, while ensuring the protection of the fundamental rights of European citizens and promoting technological innovation.
The AI Act aims to establish harmonized rules for the development, commercialization and use of AI. It classifies AI systems according to the risks they pose, dividing them into four main categories: unacceptable risk, high risk, limited risk and minimal risk. AI systems that pose unacceptable risks will be banned, while high-risk systems will have to comply with strict requirements for transparency, safety and human oversight.
With the approval of the European Council, the AI Act enters a crucial phase: its implementation. This will involve a series of key steps to ensure that the rules it establishes are effectively applied and enforced.
- Adoption of Implementing Rules: Member States will have to adopt and integrate into their national legislation the provisions of the AI Act. This process will require significant coordination to ensure the harmonization of national laws with the European regulatory framework.
- Creation of National Supervisory Authorities: Each Member State will have to establish or designate a competent authority to supervise AI systems. These authorities will be responsible for monitoring compliance with the rules, managing reports of infringements and applying the sanctions provided for.
- Training and Awareness: It will be essential to launch training and awareness programs for companies and professionals in the sector. These programs must provide the skills necessary to comply with the requirements of the AI Act and promote a culture of responsibility in the use of AI.
- Development of Technological Infrastructures: Member States and European institutions will have to invest in the development of technological infrastructures that support compliance with the AI Act. This includes platforms for the certification of AI systems, risk assessment tools and mechanisms for the transparency of algorithms.
- International Collaboration: The EU will need to continue to collaborate with other jurisdictions globally to promote shared international standards for AI. This is crucial to avoid regulatory fragmentation that could hinder innovation and international trade.
- Periodic Review: Finally, the AI Act provides for a periodic review of its provisions to adapt them to the rapid technological developments in the field of AI. This review will allow the rules to be updated in light of the new challenges and opportunities that emerge in the sector.
The final approval of the AI Act by the European Council is a decisive step towards the creation of a robust and forward-looking regulatory framework for AI in the EU. The next steps will be crucial to ensure that the rules established are effectively implemented and that AI can develop in a manner that is safe, transparent and respectful of fundamental rights. With a joint commitment from institutions, businesses and citizens, the European Union can become a global leader in AI regulation, promoting an ethical and sustainable technological future.
DISCLAIMER: This article provides general information only and does not constitute legal advice of any kind by Macchi di Cellere Gangemi, which assumes no responsibility for the content and accuracy of the newsletter. The author or your contact at the firm are at your disposal for any further clarification.