The Bletchley Declaration and AI Safety

We have contributed to the open call and public discourse on the Bletchley Declaration, which aims to broaden the multi-stakeholder view on AI safety and to surface concrete cases and scenarios, including public and critical infrastructure, healthcare, education, and assistive and accessibility technologies.

The Bletchley Declaration is a landmark international agreement signed on November 1, 2023, during the AI Safety Summit held at Bletchley Park, UK. Endorsed by 28 countries—including the United States, China, and members of the European Union—as well as the EU itself, the declaration establishes a shared commitment to the safe and responsible development of artificial intelligence (AI), particularly focusing on “frontier AI” systems, which are highly capable general-purpose models with potentially significant societal impacts.

The Declaration’s Principles and Objectives

  • Human-Centric and Responsible AI: The declaration emphasizes that AI should be designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy, and responsible.

  • Recognition of Risks: It acknowledges the potential for serious, even catastrophic, harm from the most significant capabilities of frontier AI models, whether through deliberate misuse or unintended consequences.

  • International Cooperation: The signatories agree that these risks are best addressed through international collaboration, aiming to build a shared scientific and evidence-based understanding of AI risks and to develop risk-based policies to ensure safety.

  • Balancing Innovation and Regulation: The declaration underscores the importance of a pro-innovation and proportionate governance and regulatory approach that maximizes the benefits of AI while addressing its associated risks.

Public Discourse

Through the lens of public systems and critical infrastructure, we’ve highlighted:

  • Scenarios and Relevance: Emphasize the importance of focusing on actual use cases and applications of AI, as illustrated by our contributions to the OECD repositories of AI safety tools and incidents. AI safety and governance must be grounded in lived experiences and tangible deployments.

  • Open Access and Transparency: Advocate for open access to repositories of public and assistive technologies, covering not just applications but also the underlying models and infrastructure. Our input to the OECD repository of assistive technologies reflects this commitment to accessibility and transparency.

  • Risk Assessments: Call for comprehensive and inclusive risk evaluations that explicitly consider minors, patients, caregivers, and other vulnerable populations—groups often overlooked in current regulatory frameworks.

  • Geographic and Socioeconomic Context: Stress the importance of considering socio-economic patterns and historical contexts across different regions when implementing and evaluating AI systems, ensuring globally relevant governance.

  • Software and Hardware Considerations in Frontier Models: Recommend a holistic approach to evaluating frontier AI models, such as large language models (LLMs), vision-language models (VLMs), and 3D foundation models, that also covers the supporting hardware ecosystems, such as the sensors and haptic interfaces used in public and assistive infrastructure.

  • Training Environments and Regulatory Sandboxes: Support the development of real-world, interdisciplinary training environments that integrate both software and hardware components. These should include testbeds and regulatory sandboxes to safely develop, test, and validate AI systems in controlled but realistic conditions.

  • Multi-Modal Accessibility Standards: Propose the implementation of standardized, multi-modal notification and communication systems (e.g., auditory, visual, haptic, and simplified textual cues) to ensure AI systems are accessible to users with sensory or cognitive impairments; a minimal code sketch follows this list.

  • Human Oversight and Accountability: Recommend cascading human oversight protocols for AI-powered assistive technologies, especially those that influence autonomy or critical decisions. This includes multi-level verification processes across stakeholders to ensure reliability and safety; a second sketch after this list illustrates the idea.

  • Critical Environment Testbeds: Encourage the creation of domain-specific testbeds that replicate essential service settings—such as hospitals, schools, and aging-in-place facilities—to assess real-world impacts on access, quality of service, and user autonomy.

  • Research Access for Civil Society: Advocate for secure and transparent research access frameworks that allow civil society organizations and independent researchers to study AI systems in real-world scenarios, with special protections for vulnerable users.
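
To make the multi-modal accessibility proposal above more concrete, here is a minimal sketch of a notification dispatcher that fans a single alert out to auditory, visual, haptic, and simplified-text channels. All names (Notification, Channel, the individual channel classes) are illustrative assumptions for this post, not part of any existing standard or library; real systems would back each channel with a text-to-speech engine, screen-reader hooks, or a device vibration API.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical sketch: names and structure are illustrative, not a standard.

@dataclass
class Notification:
    severity: str    # e.g., "info", "warning", "critical"
    message: str     # full text of the alert
    simplified: str  # plain-language variant for cognitive accessibility

class Channel(Protocol):
    def deliver(self, note: Notification) -> None: ...

class VisualChannel:
    def deliver(self, note: Notification) -> None:
        # Stand-in for a high-contrast on-screen banner with ARIA roles.
        print(f"[VISUAL/{note.severity}] {note.message}")

class AuditoryChannel:
    def deliver(self, note: Notification) -> None:
        # Stand-in for a text-to-speech call.
        print(f"[AUDIO/{note.severity}] (spoken) {note.message}")

class HapticChannel:
    def deliver(self, note: Notification) -> None:
        # Stand-in for a vibration-pattern API on a wearable or handset.
        print(f"[HAPTIC/{note.severity}] pulse pattern emitted")

class SimplifiedTextChannel:
    def deliver(self, note: Notification) -> None:
        # Short plain-language cue for users with cognitive impairments.
        print(f"[SIMPLE/{note.severity}] {note.simplified}")

def broadcast(note: Notification, channels: list[Channel]) -> None:
    """Fan one alert out to every registered modality; no channel is optional."""
    for channel in channels:
        channel.deliver(note)

if __name__ == "__main__":
    alert = Notification(
        severity="critical",
        message="Elevator out of service; accessible route via the north ramp.",
        simplified="Elevator broken. Use north ramp.",
    )
    broadcast(alert, [VisualChannel(), AuditoryChannel(),
                      HapticChannel(), SimplifiedTextChannel()])
```

The design point is that every modality receives the same alert by default, so accessibility is not an opt-in afterthought bolted onto a visual-first system.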
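
Similarly, the cascading human-oversight recommendation can be read as a simple escalation pipeline: an AI-proposed action passes through successive human reviewers, and higher-impact decisions require sign-off at more tiers. The following is a minimal sketch under assumed names (Decision, CASCADE, the example roles); a real deployment would add audit logging, timeouts, and role-based access control.

```python
from dataclasses import dataclass, field

# Illustrative sketch of cascading oversight; all names are assumptions.

@dataclass
class Decision:
    description: str
    impact: int  # 1 = routine, 2 = health-relevant, 3 = affects user autonomy
    approvals: list[str] = field(default_factory=list)

# Ordered oversight cascade: (role, minimum impact level that requires it).
CASCADE = [
    ("caregiver",       1),  # first-line check on routine assistive actions
    ("clinician",       2),  # second-line check on health-relevant actions
    ("ethics_reviewer", 3),  # final check on autonomy-critical actions
]

def required_reviewers(decision: Decision) -> list[str]:
    """Every tier whose threshold the decision's impact meets must sign off."""
    return [role for role, threshold in CASCADE if decision.impact >= threshold]

def approve(decision: Decision, role: str) -> None:
    decision.approvals.append(role)

def is_cleared(decision: Decision) -> bool:
    """A decision executes only once all required tiers have approved it."""
    return all(role in decision.approvals for role in required_reviewers(decision))

if __name__ == "__main__":
    d = Decision("Adjust wheelchair route through hospital corridor", impact=2)
    approve(d, "caregiver")
    print(is_cleared(d))  # False: clinician sign-off still pending
    approve(d, "clinician")
    print(is_cleared(d))  # True: all required tiers have approved
```

The cascade makes the "multi-level verification across stakeholders" requirement checkable: no single reviewer can clear an autonomy-critical action on their own.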

In signing the Bletchley Declaration open letter, we reaffirm our commitment to strengthening global dialogue on AI safety and accessibility through a multitude of complementary approaches. We advocate for harmonizing emerging governance frameworks, such as NIST’s AI Risk Management Framework, ISO/IEC standards, and the EU AI Act, while emphasizing that effective oversight must integrate multi-stakeholder perspectives and methodologies. By bridging technical standards with human-centered design and embedding accessibility across the AI lifecycle, we can ensure that frontier models not only mitigate risks but actively expand capabilities for those most in need of assistive technologies. This integrated approach is essential to ensuring that AI advances human dignity, autonomy, and access in every community and context.

• • •

References

¹ UK Government. "The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023." GOV.UK. November 1, 2023.

² UK Government. "Chair's Summary of the AI Safety Summit 2023, Bletchley Park." GOV.UK. November 2, 2023.

³ NIST. "Artificial Intelligence Risk Management Framework (AI RMF 1.0)." NIST AI.100-1. National Institute of Standards and Technology. January 26, 2023.

⁴ International Organization for Standardization. "ISO/IEC 23894:2023 - Information technology — Artificial intelligence — Guidance on risk management." 2023.

⁵ OECD.AI. "AI Incidents Monitor (AIM) - Global AI Incidents and Hazards Platform." 2023.

⁶ OECD. "Defining AI incidents and related terms." Organisation for Economic Co-operation and Development. 2024.