General-Purpose AI Code Of Practice (Working Groups 2, 3)

This contribution reflects input developed during Draft 3 of the General-Purpose AI Code of Practice (March 2025), prior to the finalization of the Code.

Following the Commission’s call for input, we participated in Working Groups 2 (Risk Assessment for Systemic Risk) and 3 (Technical Risk Mitigation for Systemic Risk) to contribute to the development of the General-Purpose AI (GPAI) Code of Practice.

The General-Purpose AI Code of Practice is a voluntary, co-regulatory framework developed under Article 56 of the EU AI Act to support providers of general-purpose AI (GPAI) models in meeting their legal obligations. During its drafting phase, the Code was developed collaboratively by over 1,000 stakeholders across AI development, civil society, academia, industry, and international institutions, in a process coordinated by the European Commission.

The Code is structured around four specialized working groups. Working Group 1 addresses transparency requirements for all GPAI models (with limited exceptions for certain open-source systems), while Working Groups 2, 3, and 4 form the Safety and Security Section. These three groups focus specifically on GPAI models considered to pose systemic risk, covering risk assessment methodologies (WG2), technical risk mitigation measures (WG3), and governance mechanisms (WG4).

The final version of the Code of Practice was published in July 2025 and subsequently recognised by the EU AI Office and the European Artificial Intelligence Board (AI Board) as an adequate voluntary framework to support compliance with the AI Act. The Code is intended to evolve with future technological developments.

This contribution particularly focuses on systemic risk, accessibility, and the role of GPAI systems in public and critical infrastructure contexts.

Update: By late July 2025, Google, Microsoft, OpenAI, Mistral AI, and Anthropic had indicated their intention to sign the GPAI Code of Practice.

Chronology of the EU General-Purpose AI Code of Practice

  • Initial call and launch of drafting process: July–October 2024

  • Draft 1 release: November 14, 2024

  • Draft 2 release: December 19, 2024

  • Draft 3 release: March 11, 2025

  • Final Code of Practice published: July 10, 2025

Input and proposals submitted during Draft 3 of the GPAI Code of Practice

The following reflects proposed extensions and interpretations of specific provisions of the EU AI Act, submitted during the Draft 3 phase of the General-Purpose AI Code of Practice. These proposals were intended to inform the development of the Code, particularly in relation to systemic risk, accessibility, and deployment in public and critical systems, and do not represent final legal requirements.

  • Article 5(1)(b) – Exploitation of vulnerabilities
    Proposed extending testing requirements to include multi-modal and multi-method evaluation of interaction patterns that may unintentionally exploit cognitive, sensory, or emotional vulnerabilities (e.g. minors, elderly persons, persons with disabilities). This includes testing across deployment environments (cloud, edge, intermittent connectivity), particularly where assistive technologies and public safety systems may operate under degraded conditions.

  • Article 9(9) – Risk assessment for vulnerable groups
    Proposed expanding risk assessment methodologies to capture system-level impacts across care and critical infrastructure ecosystems, including users with impairments, caregivers, clinical staff, and assistive technology providers, as well as dependencies within urban infrastructure (e.g. transport, emergency access, civic systems).

  • Article 13(3) – Transparency on system performance
    Proposed introducing granular reporting on system performance across user groups and interfaces, including interactions with assistive technologies (e.g. screen readers, AAC systems), and performance under disrupted or constrained infrastructure conditions.

  • Article 14(5) – Human oversight (biometric systems)
    Proposed extending human oversight concepts toward cascading and multi-stakeholder verification models in high-impact GPAI-assisted contexts (e.g. healthcare, assistive technologies, public safety systems).

  • Article 16 – Provider obligations (including compliance with EU accessibility frameworks)
    Proposed extending accessibility considerations toward multi-user, multi-device environments (e.g. smart homes, care coordination systems), and strengthening interoperability with existing assistive technologies and public digital infrastructure.

  • Article 26(7) – Information obligations in the workplace
    Proposed development of accessible communication protocols (text, audio, visual, plain language) to ensure AI system use is understandable to workers with disabilities and their support structures.

  • Article 27 – Fundamental rights impact assessments
    Proposed expanding assessment methodologies to include risks of dependency, exclusion, manipulation, and autonomy loss in domains such as healthcare, education, and civic infrastructure (e.g. benefits systems, voting, permits).

  • Research access – policy proposal, not directly tied to a single AI Act article
    Proposed establishing secure research access frameworks enabling civil society and academic evaluation of GPAI systems in real-world contexts (healthcare, education, assistive technologies), including scenarios involving crisis response and public infrastructure.

  • Article 50(3) – Transparency for biometric and emotion recognition systems
    Proposed extending transparency and explainability requirements to GPAI-mediated services, particularly where users rely on assistive or intermediary technologies, and in contexts such as public monitoring or resource allocation systems.

  • Article 50 – Transparency of AI system interaction
    Proposed introducing multi-modal notification standards (auditory, visual, haptic, simplified formats) to ensure that all users, including those with sensory or cognitive impairments, are aware of AI system interaction.

  • Articles 53–55 – Systemic-risk GPAI obligations
    Proposed development of domain-specific evaluation environments (e.g. healthcare, education, urban infrastructure) and testing infrastructures such as simulation environments or digital twins to assess real-world impacts on accessibility, service quality, and resilience.

  • Article 56 – Codes of practice
    Proposed development of domain-specific training and evaluation protocols within GPAI codes of practice, particularly for high-impact public and assistive domains.

  • Article 58 – Regulatory sandboxes
    Proposed dedicated sandboxes for GPAI applications in healthcare, education, and assistive technologies, focusing on identifying systemic risks, exclusion patterns, and unintended consequences.

  • AI Office capacity – policy proposal
    Proposed strengthening institutional expertise in accessibility, aging, and public systems within governance structures supporting GPAI oversight.

  • Article 95(2)(e) – Codes of conduct and vulnerable groups
    Proposed development of evaluation frameworks addressing manipulation, accessibility, and systemic risks across diverse user groups, with continuous updating mechanisms aligned with evolving assistive technologies and public systems.

General-Purpose AI Code of Practice Commitments (Draft 3)

While the following input primarily aims to shape the final version of the Code of Practice, it also supports implementation of existing AI Act provisions. In several cases, the current third draft of the Code already includes relevant commitments. Our input builds upon or extends those draft provisions.

  • Systemic Risk Assessment (Commitment II.1)
    "Signatories commit to adopting and implementing a Safety and Security Framework ... that will detail the systemic risk assessment, systemic risk mitigation, and governance risk mitigation measures." (Draft 3, Commitment II.1). Input: Extending systemic risk assessments to include domain-specific models of care ecosystems, capturing the impacts of GPAI on vulnerable users and public infrastructure.

  • Documentation and Assistive Tech Performance (Commitment I.1)
    "Signatories commit to drawing up and keeping up-to-date model documentation ... providing relevant information to downstream providers." (Draft 3, Commitment I.1, Measure I.1.1). Input: Adding granular performance reporting on how GPAI systems interact with adaptive technologies and interfaces.

  • Copyright and Research Access (Commitment I.2)
    "Signatories commit to drawing up, keeping up-to-date, and implementing a copyright policy ... and adopting Measures I.2.2–I.2.6." (Draft 3, Commitment I.2). Input: Adding mechanisms for civil society and academic research access to GPAI systems.

  • Interoperability and Backward Compatibility (Commitment I.1 / I.3)
    "Signatories commit to drawing up and keeping up-to-date model documentation ... providing relevant information to downstream providers." (Draft 3, Commitment I.1). Input: Require documentation of GPAI compatibility with legacy assistive technologies and APIs for integration in public and low-resource settings.

  • Human Oversight and Escalation Protocols (Commitment II.2 / II.3)
    "Signatories commit to implementing appropriate human oversight measures to ensure that the general-purpose AI model with systemic risk does not pose systemic risks." (Draft 3, Commitment II.2). Input: Define tiered human oversight roles and escalation chains for GPAI in public-facing services (e.g., healthcare, safety).

  • Evaluation Metrics (Commitment III.1)
    "Signatories commit to evaluating and testing general-purpose AI models with systemic risk in accordance with standardised protocols and tools." (Draft 3, Commitment III.1). Input: Mandate disaggregated accessibility performance metrics and civic usability indicators (e.g., screen reader latency, task completion).

  • Assistive-Context-Specific Pretraining (Commitment I.1)
    "Signatories commit to drawing up and keeping up-to-date model documentation ... providing relevant information to downstream providers." (Draft 3, Commitment I.1). Input: Require GPAI pretraining on assistive and public service contexts (e.g., inclusive education, AT interaction patterns).

  • Regulatory Sandboxes and Testbeds (Article 58 AI Act). Input: GPAI-specific testbeds in domains like healthcare, education, and urban infrastructure.

  • Multi-modal Notifications and Transparency (Article 50 AI Act). Input: The Code should require auditory, visual, and haptic notifications for AI interaction to ensure accessibility.

Update: The final version of the EU AI Code of Practice for General-Purpose AI models has now been published, but a critical opportunity remains to further strengthen its provisions relating to critical infrastructure and public systems.

• • •
