The Paris AI Safety and Action Declaration

Following the outcomes of the Seoul and Bletchley declarations, we contributed to the public discourse around the Paris AI Action Summit (including the open letter initiated by Renaissance Numérique) and the related Statement. During the Summit we presented parts of our MOOC dedicated to accessibility-centered AI, and saw two reports and public consultations that incorporated our input: the OECD-supported G7 Hiroshima AI Process Reporting Framework (HAIP)¹ and The Future Society report.

Related

Building on the work of the Bletchley and Seoul Declarations, the Paris Summit gathered over 1,000 participants from more than 100 countries, including government leaders, international organizations, civil society, the private sector, and academia. Unlike its predecessors, which heavily emphasized safety risks, the Paris Summit shifted its focus to accelerating economic opportunities and the practical implementation of AI, alongside addressing a broader range of risks, including energy and environmental impact and labor market implications. This shift was reflected in the summit's key outcome, the "Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet."

This Statement, signed by 58 countries (a notable fact given the US and UK did not sign), outlines general principles such as promoting AI accessibility, bridging digital divides, ensuring AI systems are ethical and trustworthy, fostering innovation while avoiding market concentration, positively shaping labor markets, and making AI sustainable. It specifically highlighted the need for AI to serve "the public interest of all, for all and by all."

From the Seoul to the Paris Declaration and Commitments

Although built on the outcomes of previous commitments, the summit and declaration made some noticeable pivots from “risk”-oriented initiatives and commitments toward more “opportunity”-focused projects:

  • International Network of AI Safety Institutes: Established at Seoul, this network (including UK, US, Japan, EU, and more) aims for collaboration on safety research and best practices. Post-Paris, its cohesion faces challenges due to diverging priorities between some nations (e.g., US/UK focusing on frontier safety vs. France prioritizing broader AI applications).

  • Frontier AI Safety Commitments: Companies like Amazon, Google, and Microsoft pledged at Seoul to define "intolerable risks" and publish safety frameworks by Paris. Some new commitments were made around the Paris Summit, but a notable criticism was the lack of concrete, unified progress on common risk thresholds and corporate accountability mechanisms.

  • International Scientific Report on Advanced AI Safety: Commissioned at Bletchley, its final version (by 96 experts) was published on January 29, 2025. Presented at Paris, some critics felt its scientific findings on AI risks were downplayed in favor of a pro-innovation narrative.

Initiatives Launched at Paris

  • Current AI Foundation: A new public interest AI partnership with an initial $400 million endowment and a €2.5 billion funding target over five years. It aims to develop open, ethically governed AI models for public good (e.g., health, education datasets).

  • InvestAI (EU Initiative): Launched by the European Commission, this €200 billion public-private initiative includes €20 billion for four "AI gigafactories" across Europe (operational by 2027-2028), boosting compute power for European AI development.

  • Observatories on AI's Impact on Work & Labor: A new network (across 11 nations) to understand AI's effects on job markets, workplaces, and education.

  • Coalition for Sustainable AI: A global partnership of 91 members (37 tech companies, 10 countries, int. orgs) led by France, UNEP, and ITU, focused on measuring and reducing AI's carbon footprint.

  • Global Observatory on Energy, AI, and Data Centres (IEA): Announced by the IEA, set to launch April 10, 2025, to collect data on AI's electricity needs and track energy sector applications.

  • Selection of 50 Innovative AI Projects: The Paris Peace Forum showcased 50 projects (selected from 770 applications across 111 countries) spanning diverse applications, from assistance for visually impaired people to the SDGs.

Our discussions specifically addressed taxonomies of public and assistive systems, encompassing both software and models, such as Vision-Language Models (VLMs), and the underlying hardware. This includes the components behind complex assistive and rehabilitation technologies: sensors (e.g., EMG for prosthetic control, lidar for navigation, IMUs for gait analysis), haptic elements (e.g., vibrotactile actuators for alerts, force feedback for rehabilitation), and actuators and effectors (e.g., electric motors for robotic exoskeletons, soft grippers for manipulation, shape memory alloys for adaptive devices).

• • •

References

¹ French Government. "Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet." AI Action Summit Paris. February 11, 2025.

² UK Government. "International Scientific Report on Advanced AI Safety: principles and procedures." GOV.UK. January 29, 2025.

³ Center for Strategic and International Studies. "France's AI Action Summit." February 20, 2025.

⁴ The Future Society. "Did the Paris AI Action Summit Deliver on the Priorities of Citizens and Experts?" February 28, 2025.

⁵ Australian Government Department of Industry, Science and Resources. "Australia signs Paris AI Action Summit statement." February 16, 2025.

⁶ Atlantic Council. "At the Paris AI Action Summit, the Global South rises." February 13, 2025.

⁷ Anthropic. "Statement from Dario Amodei on the Paris AI Action Summit." February 2025.