Introduction to the European AI Regulation and the Voluntary Code
The European Commission has published a set of best-practice guidelines, drawn up by experts at the European AI Office, to help large companies comply with the forthcoming European rules governing the risks of general-purpose AI systems such as ChatGPT or Gemini. Those rules are set to take effect on August 2.
Henna Virkkunen, the European Commission’s executive vice-president for Technology Sovereignty, Security, and Democracy, said: “This is a significant step toward ensuring that the most advanced AI models available in Europe are not only innovative but also safe and transparent.” She has invited providers to sign up to this non-binding commitment.
Voluntary Nature of the Code and Potential Adoption
The guidelines are voluntary for companies that choose to sign the Code of Practice, a document presented by Brussels on Thursday. The code still requires formal approval from both the European Commission and the 27 member states.
European sources decline to estimate how many companies will sign this non-binding pledge, as some major tech firms have already distanced themselves from it. Even so, they maintain that the code will give signatories legal certainty, and they warn that declining to sign will not exempt providers from complying with the new general-purpose AI rules.
Moreover, the Commission says that signatories will face a lighter administrative burden and enjoy greater legal certainty than providers that demonstrate compliance by other means.
Complementary Guidelines and Future Publications
The code will be supplemented with additional guidelines, to be published by Brussels before the general-purpose AI rules come into effect, clarifying who falls inside and outside the scope of the new rules.
With this initiative, the European Union seeks to help AI model providers resolve ambiguities that might hinder the application of the new EU regulation, so that a lack of legal certainty does not impede European innovation in the field.
Key Focus Areas: Transparency, Copyright, and Security
The code focuses primarily on transparency, copyright, and security. Providers that opt out of the initiative must demonstrate compliance with the regulation by other means.
According to Brussels, because general-purpose AI models underpin numerous AI systems across the EU, the European law will require providers to ensure sufficient transparency for those who integrate the models into their products, which can be achieved through a single, user-friendly model documentation form.
The communication further explains that certain general-purpose AI models may pose systemic risks, including threats to fundamental rights and to security, such as lowering the barriers to developing chemical or biological weapons, or the loss of control over the model.
The European AI law requires providers of such models to assess and mitigate these systemic risks, and the code’s security chapter accordingly sets out relevant state-of-the-art risk-management practices.
Key Questions and Answers
- What is the purpose of the voluntary code? The code aims to guide large companies in complying with the upcoming European rules governing general-purpose AI systems, ensuring transparency, security, and compliance.
- Who developed the code? The European AI Office, a part of the European Commission, created the guidelines in collaboration with experts.
- Is participation mandatory? No, the code is voluntary. However, signatories may benefit from a lighter administrative burden and greater legal certainty.
- What risks does the code address? The code tackles systemic risks, including threats to fundamental rights and security, such as potential lowering of barriers for chemical or biological weapons development.
- When will the regulations take effect? The new general-purpose AI rules are set to enter into force on August 2.