AI & Machine Learning
5 min
27 June 2023

Author

Lisanne Groot

Marketing consultant

Evaluation of Compliance with the EU AI Act by Foundation Model Providers

Foundation models like ChatGPT are transforming society with their remarkable capabilities, serious risks, rapid deployment, unprecedented adoption, and ongoing controversy. Meanwhile, the EU AI Act is advancing as the world's first comprehensive regulation of AI: the European Parliament has just adopted a draft of the law with 499 votes in favor, 28 against, and 93 abstentions. The law includes explicit obligations for foundation model providers such as OpenAI.

In this post, we assess the extent to which large foundation model providers currently meet these draft requirements and find that they largely do not. Providers rarely disclose adequate information about the data, compute, and deployment of their models, or about the key characteristics of the models themselves. In particular, they generally do not meet the draft requirements to describe the use of copyrighted training material, to specify the hardware used and the emissions produced during training, and to explain how they evaluate and test their models. We therefore recommend that policymakers prioritize transparency, informed by the requirements of the AI Act. Our assessment shows that it is currently feasible for foundation model providers to comply with the AI Act, and that disclosure about the development, use, and performance of foundation models would improve transparency across the entire ecosystem.

To fully comply with the requirements of the EU AI Act, foundation model providers must take the following steps:

1. Improved transparency:

Providers should disclose adequate information about the training data, hardware, emissions, and evaluation methods of their models. This will improve accountability and regulatory compliance.
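As a rough illustration of what such a disclosure could contain, the sketch below models the categories of information named above as a simple record. The field names and example values are purely illustrative assumptions, not an official schema from the AI Act.

```python
# Hypothetical sketch of a provider disclosure record; field names and
# values are illustrative assumptions, not an official AI Act schema.
from dataclasses import dataclass


@dataclass
class FoundationModelDisclosure:
    model_name: str
    training_data_sources: list[str]   # provenance of training data
    copyrighted_data_summary: str      # description of copyrighted material used
    hardware: str                      # e.g. accelerator type and count
    training_energy_kwh: float         # measured or estimated energy use
    emissions_kg_co2e: float           # emissions produced during training
    evaluation_methods: list[str]      # how the model is evaluated and tested


# Illustrative example with made-up values
card = FoundationModelDisclosure(
    model_name="example-model",
    training_data_sources=["web crawl", "licensed corpora"],
    copyrighted_data_summary="summary of copyrighted material used in training",
    hardware="1,000 A100 GPUs",
    training_energy_kwh=345_600.0,
    emissions_kg_co2e=138_240.0,
    evaluation_methods=["held-out benchmarks", "red-teaming"],
)
print(card.model_name)
```

A structured record like this makes it straightforward to check which of the draft requirements a given disclosure actually covers.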

2. Copyright-related issues:

Providers must provide clarity regarding the use of copyrighted training materials and take measures to reduce the risk of copyright infringement. Legislators and regulators should offer guidelines on how copyright relates to the training process and the output of generative models.

3. Energy consumption and emission reporting:

Reporting on energy consumption, emissions, and measures to reduce emissions must become standard practice for foundation model providers. Clear guidelines and measurement methods should be developed to accurately assess the energy requirements for training foundation models and to make the reporting of these costs more reliable.
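To make the measurement question concrete, the sketch below shows one common way to estimate training emissions from hardware and runtime: energy is approximated as GPU count × hours × average power draw, scaled by datacenter overhead (PUE) and grid carbon intensity. All default figures are illustrative assumptions, not measured values from any provider.

```python
# Hypothetical sketch: estimating training emissions from hardware and runtime.
# All default figures are illustrative assumptions, not measured values.

def estimate_training_emissions_kg(
    gpu_count: int,
    training_hours: float,
    gpu_power_kw: float = 0.4,                # assumed average draw per GPU (kW)
    pue: float = 1.2,                         # assumed Power Usage Effectiveness
    grid_intensity_kg_per_kwh: float = 0.4,   # assumed grid carbon intensity
) -> float:
    """Estimate CO2-equivalent emissions (kg) for a single training run."""
    energy_kwh = gpu_count * training_hours * gpu_power_kw * pue
    return energy_kwh * grid_intensity_kg_per_kwh


# Example: 1,000 GPUs running for 30 days
emissions = estimate_training_emissions_kg(gpu_count=1000, training_hours=30 * 24)
print(f"{emissions:,.0f} kg CO2e")  # prints: 138,240 kg CO2e
```

The point of the sketch is that every factor in this estimate (power draw, PUE, grid intensity) currently varies by provider and is rarely disclosed, which is exactly why standardized measurement methods are needed.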

4. Risk management and evaluation:

Foundation model providers must conduct a thorough assessment of the potential risks posed by their models, both in terms of malicious use and unintended harm. They should be transparent about the measures they take to mitigate these risks and evaluate the effectiveness of these measures. Guidelines for evaluation standards should be developed to assess the performance of foundation models in a consistent and reliable manner.

5. Release strategies:

Providers need to consider their release strategies and their impact on transparency and accountability. Both open and restricted releases have advantages and disadvantages, but foundation model providers should be aware of the consequences of their choices. It is essential that policymakers take the different release strategies into account when formulating regulations, to ensure sufficient accountability and transparency throughout the ecosystem.

The evaluation of compliance with the EU AI Act by foundation model providers shows that there is significant room for improvement in transparency and accountability. The implementation and enforcement of the AI Act would bring about a positive change in the foundation model ecosystem. It is crucial that foundation model providers act to establish industry standards that enhance transparency, and that policymakers take measures to ensure that sufficient transparency underpins this general-purpose technology. This evaluation is just the beginning of a broader initiative to assess and improve the transparency of foundation model providers, complementing our efforts in holistic evaluation, ecosystem documentation, standards development, and policy recommendations.

