General-Purpose AI Code of Practice (Working Groups 2 and 3)

Following the Commission’s call for input, we participated in Working Groups 2 (Risk Assessment for Systemic Risk) and 3 (Technical Risk Mitigation for Systemic Risk) to contribute to the development of the General-Purpose AI (GPAI) Code of Practice.

The General-Purpose AI Code of Practice is a voluntary, co-regulatory framework developed under Article 56 of the EU AI Act to support providers of general-purpose AI (GPAI) models in aligning with upcoming legal obligations. It is being collaboratively drafted by over 1,000 stakeholders—including AI developers, civil society, academia, industry representatives, and international observers—through a process coordinated by the European Commission. The Code is structured around four specialized working groups: Working Group 1 addresses transparency requirements for all GPAI models (with limited exceptions for certain open-source systems), while Working Groups 2, 3, and 4 form the Safety and Security Section. These groups focus specifically on GPAI models considered to pose systemic risk, covering risk assessment methodologies (WG2), technical risk mitigation measures (WG3), and governance mechanisms (WG4). Once finalized, the Code will be subject to approval by the EU AI Office and AI Board, and is intended to evolve with future technological developments.


Chronology of the EU General-Purpose AI Code of Practice

  • Initial call/Launch of drafting process: July-October 2024

  • Draft 1 release: November 14, 2024

  • Draft 2 release: December 19, 2024

  • Draft 3 release: March 11, 2025

  • Final release: expected (date to be confirmed)

Previous input on GPAI and public systems under the AI Act

  • Article 5(1)(b) Prohibition of exploiting vulnerabilities – require multi-modal and multi-method testing to identify interaction patterns within GPAI systems that might unintentionally exploit the cognitive, sensory, or emotional vulnerabilities of users (such as minors, elders, and persons with disabilities). Testing must span deployment environments (cloud-based, edge devices, intermittent connectivity) to account for assistive technologies in connectivity-limited settings and urban "dead zones" where public safety systems might experience degraded performance.

  • Article 9(9) Risk assessment for vulnerable groups – mandate system-wide impact assessments that capture how GPAI models affect entire critical infrastructure and care ecosystems — including users with impairments, informal caregivers, clinical staff, and assistive technology vendors. Extend assessment to urban infrastructure dependencies where GPAI systems may control public transportation, emergency services access, or civic participation technologies that disproportionately impact vulnerable populations.

  • Article 13(3) Transparency on group-specific performance – require granular performance reporting on how GPAI systems interact with adaptive hardware/software (e.g., screen readers, AAC devices), and ensure transparency in how different user interfaces may affect accessibility and reliability across the assistive technology spectrum. Include standardized metrics for performance during infrastructure disruptions, and other scenarios affecting public safety.

  • Article 14(5) Human oversight in biometric ID systems – implement cascading human oversight protocols in GPAI-powered assistive technologies, ensuring that outputs affecting critical care or autonomy (e.g., fall detection, medication reminders) are verified across stakeholder chains (e.g., clinicians, caregivers, end-users); a sketch of such an escalation chain follows this list. Extend oversight to GPAI systems deployed in urban surveillance, emergency management, and mobility systems where automated decisions impact public safety and freedom of movement.

  • Article 16(l) Accessibility compliance with EU Directives – create GPAI-specific accessibility frameworks that go beyond single-user paradigms and account for complex care environments involving multi-device, multi-user interactions, such as smart homes or shared care coordination platforms. Strengthen interoperability standards between GPAI systems and existing assistive technologies, ensuring backward compatibility with established accessibility solutions and seamless integration with urban digital infrastructure used by diverse populations.

  • Article 26(7) Information obligation for workplace AI – develop accessible communication protocols (text, audio, plain language, visual) to ensure AI system usage is understandable to employees with disabilities and their support personnel (e.g., interpreters, workplace aides).

  • Article 27 Fundamental rights impact assessments – adopt assessment methodologies that account for how GPAI might introduce or amplify risks related to manipulation, dependency, informational exclusion, or autonomy loss in high-stakes care or educational contexts. Include specific assessment of how GPAI-powered civic infrastructure (voting systems, permit applications, public benefits) might inadvertently create discriminatory outcomes or uneven access to essential urban services.

  • Article 67 Data access for research and development – establish secure, transparent research access frameworks enabling civil society and academic researchers to study GPAI systems in real-world healthcare, education, and assistive tech scenarios, with protections for vulnerable users. Include provisions for studying GPAI performance in urban crisis management, resource allocation during emergencies, and public infrastructure operation to identify potential biases or failure modes affecting community resilience.

  • Article 50(3) Transparency in biometric/emotion systems – require explicit disclosure and explainability protocols when GPAI systems mediate access to public or private services (e.g., mental health screening, eligibility assessment), particularly for users relying on assistive or intermediary technologies. Extend these protocols to urban monitoring systems that use sentiment analysis, crowd behavior prediction, or other inference technologies that shape public safety responses or resource allocation.

  • Article 52(1) Notification of AI system interaction – implement multi-modal notification standards (auditory, haptic, visual, simplified text) so that all users, including those with sensory or cognitive impairments, are made aware when they are interacting with or being influenced by a GPAI system; a second sketch after this list illustrates such redundant notification channels.

  • Article 54(1)(a) Evaluation of systemic-risk GPAI – establish domain-specific GPAI testbeds that simulate critical environments (hospitals, special education classrooms, aging-in-place settings) to test real-world effects on service quality, access, and user autonomy. Develop urban digital twins for testing GPAI performance in city-scale applications, including emergency management, utility operations, and public transit systems, with particular attention to accessibility during system strain or failure conditions.

  • Article 56(2)(c) Training guidelines for GPAI providers – develop specialized GPAI training protocols for high-impact domains, requiring simulation of stakeholder-rich environments (e.g., disability services, special education) and focused pretraining on accessibility and ethical considerations.

  • Article 58(1) Regulatory sandboxes for innovation – create dedicated regulatory sandboxes for GPAI applications in assistive technology, education, and healthcare, focused on identifying manipulation vectors, equity gaps, and unintended exclusionary consequences in real-world deployments.

  • Article 69(1) AI Office public accessibility – ensure the AI Office includes specialists in public technology, aging, and accessibility engineering, capable of interpreting how GPAI intersects with care systems and mediates access to rights, services, or autonomy.

  • Article 95(2)(e) Code of conduct for vulnerable groups – develop robust evaluation frameworks to assess GPAI systems against known manipulation and accessibility risks, measuring effectiveness across diverse user profiles and ensuring compatibility with the full assistive technology ecosystem. Implement continuous updating mechanisms to keep frameworks current with evolving assistive technologies and urban systems, including provisions for rapid assessment of novel GPAI deployments in critical public infrastructure.
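
To make the Article 14(5) proposal concrete, here is a minimal sketch, in Python, of a cascading oversight chain for a GPAI-driven assistive alert. The escalation order (end-user, then caregiver, then clinician) and all names (Alert, Verdict, escalate) are illustrative assumptions, not terms from the AI Act or the draft Code.

```python
"""Sketch of a cascading oversight chain for a GPAI assistive alert,
as proposed under Article 14(5). All names are hypothetical."""
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    CONFIRMED = "confirmed"
    DISMISSED = "dismissed"
    NO_RESPONSE = "no_response"  # tier did not respond in time


@dataclass
class Alert:
    kind: str            # e.g., "fall_detected"
    confidence: float    # model confidence in the detection
    trail: list = field(default_factory=list)  # audit trail of decisions


# Assumed escalation order: end-user first, then caregiver, then clinician.
TIERS = ("end_user", "caregiver", "clinician")


def escalate(alert: Alert, responses: dict) -> Verdict:
    """Walk the stakeholder chain until someone confirms or dismisses.

    `responses` maps tier name -> Verdict, standing in for real
    notification and acknowledgement channels.
    """
    for tier in TIERS:
        verdict = responses.get(tier, Verdict.NO_RESPONSE)
        alert.trail.append((tier, verdict.value))  # record for audit
        if verdict is not Verdict.NO_RESPONSE:
            return verdict
    # Nobody responded: fail safe by treating the alert as confirmed.
    return Verdict.CONFIRMED


if __name__ == "__main__":
    alert = Alert(kind="fall_detected", confidence=0.72)
    # End-user unresponsive, caregiver confirms.
    outcome = escalate(alert, {"caregiver": Verdict.CONFIRMED})
    print(outcome.value, alert.trail)
```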
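
The Article 52(1) proposal can likewise be read as a notification-dispatch problem: select redundant channels from a user's accessibility profile so that at least one notice reaches every user. The sketch below assumes hypothetical profile fields and stub renderers standing in for real speech, haptic, or UI back-ends.

```python
"""Sketch of multi-modal AI-interaction notices, as proposed under
Article 52(1). Modality names and profile fields are assumptions,
not a standardized schema."""
from dataclasses import dataclass

NOTICE = "You are interacting with an AI system."

# Stub renderers per modality; a real system would call TTS engines,
# haptic drivers, or UI toolkits here.
RENDERERS = {
    "visual": lambda text: f"[banner] {text}",
    "auditory": lambda text: f"[speech] {text}",
    "haptic": lambda text: "[vibration pattern: ai-notice]",
    "simplified_text": lambda text: "[plain] This is a computer, not a person.",
}


@dataclass
class UserProfile:
    blind: bool = False
    deaf: bool = False
    cognitive_support: bool = False


def modalities_for(profile: UserProfile) -> list:
    """Select redundant channels so at least one reaches the user."""
    chosen = []
    if not profile.blind:
        chosen.append("visual")
    if not profile.deaf:
        chosen.append("auditory")
    chosen.append("haptic")  # device-independent fallback
    if profile.cognitive_support:
        chosen.append("simplified_text")
    return chosen


def notify(profile: UserProfile) -> list:
    return [RENDERERS[m](NOTICE) for m in modalities_for(profile)]


if __name__ == "__main__":
    print(notify(UserProfile(blind=True, cognitive_support=True)))
```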

The GPAI Code Commitments (Draft 3)

While the following input primarily aims to shape the final version of the Code of Practice, it also supports implementation of existing AI Act provisions. In several cases, the current third draft of the Code already includes relevant commitments. Our input builds upon or extends those draft provisions.

  • Systemic Risk Assessment (Commitment II.1) "Signatories commit to adopting and implementing a Safety and Security Framework ... that will detail the systemic risk assessment, systemic risk mitigation, and governance risk mitigation measures." (Draft 3, Commitment II.1). Input: Extend systemic risk assessments to include domain-specific models of care ecosystems, capturing the impacts of GPAI on vulnerable users and public infrastructure.

  • Documentation and Assistive Tech Performance (Commitment I.1) "Signatories commit to drawing up and keeping up-to-date model documentation ... providing relevant information to downstream providers." (Draft 3, Commitment I.1, Measure I.1.1). Input: Add granular performance reporting on how GPAI systems interact with adaptive technologies and interfaces.

  • Copyright and Research Access (Commitment I.2) "Signatories commit to drawing up, keeping up-to-date, and implementing a copyright policy ... and adopting Measures I.2.2–I.2.6." (Draft 3, Commitment I.2). Input: Add mechanisms for civil society and academic research access to GPAI systems.

  • Interoperability and Backward Compatibility (Commitment I.1 / I.3) "Signatories commit to drawing up and keeping up-to-date model documentation ... providing relevant information to downstream providers." (Draft 3, Commitment I.1). Input: Require documentation of GPAI compatibility with legacy assistive technologies, and APIs for integration in public and low-resource settings.

  • Human Oversight and Escalation Protocols (Commitment II.2 / II.3) "Signatories commit to implementing appropriate human oversight measures to ensure that the general-purpose AI model with systemic risk does not pose systemic risks." (Draft 3, Commitment II.2). Input: Define tiered human oversight roles and escalation chains for GPAI in public-facing services (e.g., healthcare, safety).

  • Evaluation Metrics (Commitment III.1) "Signatories commit to evaluating and testing general-purpose AI models with systemic risk in accordance with standardised protocols and tools." (Draft 3, Commitment III.1). Input: Mandate disaggregated accessibility performance metrics and civic usability indicators (e.g., screen reader latency, task completion rates); see the sketch after this list.

  • Assistive-Context-Specific Pretraining (Commitment I.1) "Signatories commit to drawing up and keeping up-to-date model documentation ... providing relevant information to downstream providers." (Draft 3, Commitment I.1). Input: Require GPAI pretraining on assistive and public-service contexts (e.g., inclusive education, AT interaction patterns).

  • Regulatory Sandboxes and Testbeds (Article 58 AI Act). Input: Establish GPAI-specific testbeds in domains such as healthcare, education, and urban infrastructure.

  • Multi-modal Notifications and Transparency (Article 52 AI Act). Input: The Code should require auditory, visual, and haptic notifications for AI interaction to ensure accessibility.
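
As an illustration of the disaggregated metrics proposed under Commitment III.1, the following sketch computes per-assistive-technology task-completion rates and median latencies from hypothetical evaluation logs. The field names and AT profiles are assumptions, not a standardized reporting schema.

```python
"""Sketch of disaggregated accessibility metrics for Commitment III.1,
assuming per-session evaluation logs with invented field names."""
from collections import defaultdict
from statistics import median

# Hypothetical evaluation sessions: one row per user task attempt.
SESSIONS = [
    {"at_profile": "screen_reader", "completed": True, "latency_ms": 840},
    {"at_profile": "screen_reader", "completed": False, "latency_ms": 1720},
    {"at_profile": "aac_device", "completed": True, "latency_ms": 1310},
    {"at_profile": "none", "completed": True, "latency_ms": 240},
    {"at_profile": "none", "completed": True, "latency_ms": 310},
]


def disaggregate(sessions: list) -> dict:
    """Report completion rate and median latency per AT profile,
    so gaps relative to the no-AT baseline become visible."""
    groups = defaultdict(list)
    for s in sessions:
        groups[s["at_profile"]].append(s)
    report = {}
    for profile, rows in groups.items():
        report[profile] = {
            "n": len(rows),
            "completion_rate": sum(r["completed"] for r in rows) / len(rows),
            "median_latency_ms": median(r["latency_ms"] for r in rows),
        }
    return report


if __name__ == "__main__":
    for profile, stats in disaggregate(SESSIONS).items():
        print(profile, stats)
```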

While we await the final version of the EU General-Purpose AI Code of Practice (date to be confirmed), a critical opportunity remains to strengthen its provisions relating to public systems and critical infrastructure.

• • •

References

¹ European Commission. "General-Purpose AI Code of Practice." Shaping Europe's digital future. 2024.

² European Commission. "Third Draft of the General-Purpose AI Code of Practice published, written by independent experts." March 11, 2025.

³ European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)." Official Journal of the European Union. June 13, 2024.

⁴ European Commission. "AI Act: Participate in the drawing-up of the first General-Purpose AI Code of Practice." July 30, 2024.

⁵ European Parliament and Council. "Directive (EU) 2016/2102 on the accessibility of the websites and mobile applications of public sector bodies." October 2016.

⁶ European Parliament and Council. "Directive (EU) 2019/882 - European Accessibility Act on accessibility requirements for products and services." April 2019.