A groundbreaking step forward in the EU – with significant new risks for US providers and market leaders

After three days of intense negotiations, the EU has taken a historic step by agreeing on the “AI Act”, a comprehensive set of rules for artificial intelligence (AI). The act aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. It not only represents a milestone for the future of our societies and economies, but is also intended to stimulate investment and innovation in the field of AI in Europe. A closer look at the provisions, however, particularly the transparency rules, raises the question of whether the law will actually achieve this:

Transparency could become a major problem for generative foundation models

After intensive discussion, the regulation of general-purpose AI (GPAI) systems such as ChatGPT or Bard has been included as a significant part of the “AI Act”. Given the wide range of tasks these systems can perform and the rapid expansion of their capabilities, it was agreed that GPAI systems and the models they are based on must meet certain transparency requirements. These requirements were originally proposed by the European Parliament and now include several important aspects:

  • Technical documentation: GPAI systems must produce comprehensive technical documentation. This documentation should provide detailed information on the functioning, development and application of the AI systems.
  • Compliance with EU copyright law: The systems must demonstrate compliance with EU copyright law when using training data. This ensures that the development of AI technologies takes place in compliance with existing copyrights.
  • Detailed summaries of training content: GPAI systems must provide detailed summaries of the content used for training. This increases transparency and enables a better understanding of the basis on which the AI systems make their decisions.
  • Adversarial testing: This involves testing the models under difficult conditions to ensure their robustness and reliability.
  • Reporting of serious incidents: Providers must report serious incidents related to the use of the AI models to the Commission.
  • Cybersecurity: A high standard of cybersecurity is mandatory for these models.
  • Reporting on energy efficiency: The models must also report on their energy efficiency, which is important in the context of the sustainability and environmental friendliness of AI technology.

The introduction of transparency requirements in the AI Act poses a huge risk for leading AI systems such as ChatGPT and Bard. These systems, above all GPT-4 with its reported 1.76 trillion parameters, have been trained on a vast, almost unmanageable amount of data, without the involvement of the creators and without a single permission or licence to use this data for training.

The new legislation reverses the previous practice: instead of creators having to painstakingly prove copyright infringement, the onus is now on companies like OpenAI to disclose their training data. This is likely to bring to light a large number of previously unknown copyright infringements.

With the transparency obligation in place, OpenAI now faces a potential flood of lawsuits. This challenge could put the company in a precarious position, both legally and economically. Time will tell how OpenAI handles this unprecedented situation.

The other main elements of the provisional agreement:

  • Classification of AI systems: A horizontal layer of protection, including a high-risk classification, is intended to ensure that AI systems that are not likely to cause serious violations of fundamental rights or other significant risks are not covered.
  • Prohibited AI practices: Some AI applications are classified as unacceptably risky and therefore banned in the EU, including cognitive behavioural manipulation and emotion recognition in the workplace.
  • Exceptions for law enforcement authorities: Under certain conditions, law enforcement agencies may use real-time biometric recognition systems in publicly accessible spaces.
  • General-purpose AI systems and foundation models: New provisions have been added to address situations in which AI systems can be used for many different purposes. These are now covered by the AI Act, which was one of the most controversial points in the legislative process.
  • A new governance architecture: An AI Office within the Commission will be set up to oversee the most advanced AI models and enforce the common rules across all Member States.
  • Transparency and protection of fundamental rights: Before launching a high-risk AI system on the market, providers must carry out an assessment of its impact on fundamental rights. This assessment is modelled on the data protection impact assessment under Art. 35 GDPR.
  • Measures to promote innovation: The provisional compromise provides for a series of measures to create an innovation-friendly legal environment and to promote evidence-based regulatory learning.

Next steps

Following the provisional agreement, the regulation will be formulated in detail and finalised in the coming weeks. The complete text will then be adopted in a final vote, so that implementation can be expected in the course of this year.

Conclusion

The EU’s “AI Act” is a pioneering step towards the safe and responsible use of AI in compliance with fundamental rights. With this law, the EU positions itself as a frontrunner in the global debate on the regulation of artificial intelligence, pursuing a balanced approach between promoting innovation and protecting civil rights. The act could serve as a model for other countries and thus promote the European approach to technology regulation on the global stage.
