AI Pact and the Next Steps for Implementing the AI Act: Access and Public Systems

We will join the AI Office's upcoming session on the AI Pact, dedicated to cross-sector exchange on both sovereign and human-centered AI policy implementation. This builds on our earlier input to the General-Purpose AI Code of Practice.

Chronology of AI Act Implementation

The timeline for the AI Act’s implementation is critical in understanding the scope and pace of its deployment:

  • August 1, 2024: AI Act enters into force

  • February 2, 2025: Chapters I (General Provisions) & II (Prohibited AI Practices) will apply.

  • August 2, 2025: Chapter III Section 4 (Notifying Authorities and Notified Bodies), Chapter V (General-Purpose AI Models), Chapter VII (Governance), Chapter XII (Penalties), and Article 78 (Confidentiality) will apply, except for Article 101 (Fines for providers of general-purpose AI models).

  • August 2, 2026: The whole AI Act will apply, except for Article 6(1) & corresponding obligations (the classification rules for high-risk AI systems that are safety components of products covered by Union harmonisation legislation).

  • August 2, 2027: Article 6(1) & corresponding obligations will apply.

AI Pact

The EU AI Pact represents a voluntary commitment from over 100 organizations to implement the principles of the EU AI Act before its official legal enforcement. This coalition encompasses AI developers, public agencies, and digital infrastructure providers working proactively to establish responsible AI practices. While not replacing the binding obligations of the AI Act itself, the Pact serves to accelerate early adoption and foster trust across sectors. Signatories commit to developing comprehensive AI governance strategies, identifying high-risk AI systems within their operations, and promoting AI literacy and ethical practices among their workforce.

The Pact operates through two key pillars: encouraging organizations to transparently share their voluntary commitments toward meeting high-risk AI requirements, and systematically collecting and publishing these commitments to facilitate accountability. This framework enables engagement between the EU AI Office and diverse stakeholders—including industry, civil society organizations, and academic institutions.

Previous input and discourse

How the AI Act Addresses Public Systems and Vulnerable Contexts

The final text of the AI Act now includes multiple touchpoints directly referencing disability, dependency, age, and other social vulnerabilities. Key provisions include:

Articles

  • Article 5(1)(b) - Prohibits AI that exploits vulnerabilities:

    "the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation... in a manner that causes or is reasonably likely to cause... significant harm;"

  • Article 9(9) - Requires risk assessment for children and vulnerable groups:

    "...providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups."

  • Article 13(3) - Transparency requirements for specific groups:

    "...its performance regarding specific persons or groups of persons on which the system is intended to be used;"

  • Article 14(5) - Enhanced human oversight for remote biometric identification systems:

    "...no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons..."

  • Article 16(l) - Accessibility requirements for providers:

    "...ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives (EU) 2016/2102 and (EU) 2019/882."

  • Article 26(7) - Obligation to inform workers:

    "Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers' representatives and the affected workers..."

  • Article 27 - Fundamental rights impact assessment:

    "...deployers shall perform an assessment of the impact on fundamental rights that the use of such system may produce."

  • Article 50(3) - Transparency for emotion recognition and biometric categorization:

    "Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system..."

  • Article 95(2)(e) - Codes of conduct for vulnerable persons:

    "...assessing and preventing the negative impact of AI systems on vulnerable persons or groups of vulnerable persons, including as regards accessibility for persons with a disability, as well as on gender equality."

Recitals

  • Recital 29 - Vulnerability exploitation in manipulative AI practices:

    "AI systems may also otherwise exploit the vulnerabilities of a person... due to their age, disability... or a specific social or economic situation... such as persons living in extreme poverty, ethnic or religious minorities."

  • Recital 48 - Special consideration for children:

    "...children have specific rights... which require consideration of the children's vulnerabilities and provision of such protection and care as necessary for their well-being."

  • Recital 56 - Education and vulnerable groups:

    "...AI systems used in education or vocational training... may be particularly intrusive and may violate the right to education... and perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation."

  • Recital 57 - Employment-related vulnerabilities:

    "AI systems used in employment... should also be classified as high-risk, since those systems may have an appreciable impact on future career prospects, livelihoods of those persons and workers' rights."

  • Recital 58 - Vulnerabilities in public services:

    "...natural persons applying for or receiving essential public assistance benefits... are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities."

  • Recital 80 - UN Convention on the Rights of Persons with Disabilities:

    "...the Union and the Member States are legally obliged to protect persons with disabilities from discrimination... to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies... and to ensure respect for privacy for persons with disabilities."

  • Recital 132 - Accessibility considerations for vulnerable groups:

    "...natural persons should be notified that they are interacting with an AI system... When implementing that obligation, the characteristics of natural persons belonging to vulnerable groups due to their age or disability should be taken into account..."

Annex III

  • Annex III, point 3 - Education as high-risk domain:

    "AI systems intended to be used to determine access or admission... to assess the appropriate level of education... or for monitoring and detecting prohibited behaviour of students..."

  • Annex III, point 4 - Employment as high-risk:

    "AI systems intended to be used for the recruitment or selection of natural persons... to make decisions affecting terms of work-related relationships... to monitor and evaluate the performance and behaviour of persons..."

  • Annex III, point 5 - Access to essential services as high-risk:

    "AI systems intended to be used by public authorities... to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services..."

This breadth matters. It reflects the fact that harm is contextual — not simply technological. The same algorithm, when applied to a school admissions platform or an unemployment agency, may produce entirely different consequences for different people.

AI Literacy and Further Implementation

A noticeable focus of the AI Pact is literacy and knowledge sharing, reinforcing Article 4 of the EU AI Act, which is dedicated to AI literacy: "Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf."

Further progress should address monitoring how AI Pact signatories interpret their "voluntary" commitments, ensuring early implementation through regulatory sandboxes that support public-interest systems and infrastructure, and strengthening interoperability with complementary EU frameworks and broader digital legislation.

• • •

References

¹ European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)." Official Journal of the European Union. June 13, 2024.

² European Commission. "AI Pact." Shaping Europe's digital future. 2024.

³ European Parliament and Council. "Directive (EU) 2016/2102 on the accessibility of the websites and mobile applications of public sector bodies." October 2016.

⁴ European Parliament and Council. "Directive (EU) 2019/882 on the accessibility requirements for products and services (European Accessibility Act)." April 2019.

⁵ United Nations. "Convention on the Rights of Persons with Disabilities." Article 9 - Accessibility. 2006.

⁶ European Commission. "Over a hundred companies sign EU AI Pact pledges to drive trustworthy and safe AI development." Press Release IP/24/4864. September 25, 2024.