OECD: G7 Hiroshima AI Process Reporting Framework (HAIP)

We contributed input to the working group and to the public call for the OECD-supported G7 Hiroshima AI Process Reporting Framework (HAIP).¹ The framework was presented during the Paris AI Action Summit, where we demonstrated parts of our MOOC dedicated to accessibility-centered AI and witnessed two reports and public consultations that incorporated our input.

This Reporting Framework is a direct outcome of the G7 Hiroshima AI Process, initiated under the Japanese G7 Presidency in 2023 and further advanced under the Italian G7 Presidency in 2024. It builds on the Hiroshima AI Process International Code of Conduct for Organizations Developing Advanced AI Systems, a landmark initiative to foster transparency and accountability in the development of advanced AI systems. At the request of the G7, and in line with the Trento Declaration, the OECD was tasked with identifying mechanisms to monitor voluntary adoption of the Code.

The G7 Hiroshima AI Process

The Hiroshima AI Process responds to the need to establish common principles and practices in governing advanced AI systems—particularly foundation models and generative AI. The process builds upon the OECD AI Principles and seeks to codify risk-based, human-rights-aligned practices in AI development, use, and deployment.

The Hiroshima Principles span 11 key areas, from lifecycle risk mitigation, post-deployment monitoring, transparency, and security controls to the responsible use of AI for global challenges such as health, climate, and education. They are envisioned as a living, adaptable framework, developed through broad stakeholder input.

The HAIP Reporting Framework

The HAIP Reporting Framework is a voluntary reporting tool designed to help organizations document and disclose their risk mitigation measures, safety protocols, governance mechanisms, and transparency practices.

Key features of the HAIP Reporting Framework include:

  • Comprehensive coverage of AI governance domains, ensuring alignment with international best practices.

  • Transparency reporting on system capabilities, limitations, and misuse scenarios.

  • Documentation of governance policies, data stewardship, and post-market incident monitoring.

  • Mechanisms for content provenance and authentication, including watermarking where feasible.
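
To make the provenance idea concrete, the sketch below shows one way an organization might bind generated content to a signed manifest (a content hash plus generation metadata) so downstream consumers can verify origin and detect tampering. This is a minimal, hypothetical Python illustration; the field names and the HMAC signing scheme are our assumptions, not anything specified by the HAIP Reporting Framework.

    import hashlib
    import hmac
    import json
    from datetime import datetime, timezone

    # Hypothetical sketch of a signed provenance manifest for AI-generated
    # content. Field names and the HMAC scheme are illustrative assumptions,
    # not part of the HAIP Reporting Framework itself.

    SIGNING_KEY = b"replace-with-a-managed-signing-key"  # placeholder key

    def build_manifest(content: bytes, model_id: str) -> dict:
        """Bind content to its generator via a hash plus signed metadata."""
        manifest = {
            "content_sha256": hashlib.sha256(content).hexdigest(),
            "generator": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        # Sign so tampering with content or metadata becomes detectable.
        manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return manifest

    def verify_manifest(content: bytes, manifest: dict) -> bool:
        """Check that the content hash and signature still match."""
        claimed = dict(manifest)
        signature = claimed.pop("signature")
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(signature, expected)
                and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

    if __name__ == "__main__":
        output = b"Example AI-generated text."
        manifest = build_manifest(output, model_id="example-model-v1")
        assert verify_manifest(output, manifest)            # intact content verifies
        assert not verify_manifest(b"tampered", manifest)   # altered content fails

Real deployments would typically rely on asymmetric signatures and emerging provenance standards rather than a shared secret, but the verification flow is analogous.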

Pilot Phase Implementation Data

The pilot phase, conducted July 19 to September 6, 2024, generated substantive multi-stakeholder feedback from 20 organizations across 10 countries: Canada, Denmark, Germany, Israel, Japan, Netherlands, Spain, Switzerland, United Kingdom, and United States. Participating entities ranged from large technology companies to start-ups, encompassing AI system developers, global technology corporations, research institutes, academic institutions, and AI compliance consulting firms.

Pilot Findings Summary:

  • 14 organizations requested consolidation of repetitive questions to streamline reporting processes

  • 13 organizations asked for enhanced formatting options, including bullet points and hyperlinks

  • 12 organizations called for clearer survey instructions, with word limits and examples

  • 9 respondents requested explanations of ambiguous key terms including "advanced AI systems," "unreasonable risks," and "significant incidents"

  • 6 organizations suggested improved alignment with existing voluntary reporting mechanisms (Frontier AI Safety Commitments, White House AI Voluntary Commitments)

Governance Principles and Technical Requirements

The framework calls for comprehensive risk assessment across the AI lifecycle, asking organizations to implement diverse internal and independent external testing measures, including red-teaming. Its technical expectations include robust security controls encompassing physical security, cybersecurity, and insider-threat safeguards for model weights, algorithms, servers, and datasets.

Key Technical Commitments:

  • Content authentication and provenance mechanisms through watermarking and identification techniques

  • Traceability systems for datasets, processes, and development decisions (see the sketch after this list)

  • Transparency reporting for all significant advanced AI system releases

  • Responsible information sharing protocols across industry, government, civil society, and academia

  • International technical standards development collaboration with Standards Development Organizations (SDOs)
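
One concrete way to realize the traceability commitment above is an append-only lineage log in which every dataset, process, and decision record points to its parents, so that any artifact can be traced back to its origins. The following Python sketch is a hypothetical illustration; the record fields and hashing scheme are our assumptions rather than anything prescribed by the framework.

    import hashlib
    import json
    from datetime import datetime, timezone

    # Hypothetical sketch of a traceability log for datasets, processes, and
    # development decisions. Record fields are assumptions for illustration,
    # not requirements of the HAIP Reporting Framework.

    class LineageLog:
        def __init__(self):
            self.records = {}  # record_id -> record

        def add(self, kind: str, description: str, parents=None) -> str:
            """Append a record (dataset, process, or decision) linked to its parents."""
            record = {
                "kind": kind,  # e.g. "dataset", "process", "decision"
                "description": description,
                "parents": parents or [],
                "recorded_at": datetime.now(timezone.utc).isoformat(),
            }
            # Content-derived ID so identical records map to the same entry.
            record_id = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()[:12]
            self.records[record_id] = record
            return record_id

        def trace(self, record_id: str) -> list:
            """Walk back through the lineage of a record, oldest first."""
            chain, stack = [], [record_id]
            while stack:
                record = self.records[stack.pop()]
                chain.append(record)
                stack.extend(record["parents"])
            return list(reversed(chain))

    if __name__ == "__main__":
        log = LineageLog()
        raw = log.add("dataset", "Raw web crawl, license-filtered")
        clean = log.add("process", "Deduplication and PII removal", parents=[raw])
        decision = log.add("decision", "Approved for pretraining", parents=[clean])
        for step in log.trace(decision):
            print(step["kind"], "-", step["description"])

Because each identifier is derived from the record's content, retroactive edits produce new identifiers, which is what makes the log useful as an audit trail.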

Risk Mitigation and Monitoring Framework

Organizations are expected to establish risk-based governance policies encompassing privacy measures and post-deployment monitoring for vulnerability identification, incident analysis, and misuse detection. The framework calls on organizations not to develop or deploy AI systems that undermine democratic values, harm communities, facilitate terrorism, enable criminal activities, or pose substantial safety and security risks, in line with international human rights law and the UN Guiding Principles on Business and Human Rights.

International Coordination and Standards Development

The initiative advances international technical standards through OECD, Global Partnership on AI (GPAI), and multi-stakeholder collaboration. Jurisdictions maintain implementation flexibility while adhering to core monitoring principles. Business at OECD (BIAC) endorses the framework for regulatory interoperability, cross-border collaboration, and transparent AI technology access across jurisdictions.

Data Protection and Ethical AI Development

The framework calls for data quality management, including training data oversight to mitigate harmful biases, with transparency obligations for datasets under applicable legal frameworks. Development priorities target global challenges such as the climate crisis, health, and education, supporting the UN Sustainable Development Goals through responsible, human-centric AI stewardship and digital literacy advancement.

Implementation Timeline and Current Status

The operational reporting framework was launched on February 7, 2025, during the AI Action Summit in Paris, with the first reporting cycle submissions due by April 15, 2025, and publication scheduled for June 2025.

Leading AI developers including Amazon, Anthropic, Fujitsu, Google, KDDI Corporation, Microsoft, NEC Corporation, NTT, OpenAI, Preferred Networks Inc., Rakuten Group Inc., Salesforce, and SoftBank Corp. have committed to participate in the inaugural reporting cycle. Submissions are expected to be made publicly available by the OECD via a designated transparency platform, excluding any commercially sensitive information.

The framework represents an international consensus on advanced AI governance, establishing measurable accountability mechanisms while remaining innovation-friendly across diverse organizational contexts and jurisdictions. It exemplifies how international cooperation can create standardized global expectations for transparency and risk management in AI.

• • •

References

¹ Organisation for Economic Co-operation and Development. "G7 Hiroshima AI Process: International Code of Conduct for Organizations Developing Advanced AI Systems." OECD. 2023.

² Organisation for Economic Co-operation and Development. "HAIP: Hiroshima AI Process Reporting Framework." OECD. 2024.

³ G7 Italy. "Ministerial Declaration on Hiroshima AI Process and Trento Dialogue Outcomes." Italian G7 Presidency. 2024.

⁴ OECD.AI Policy Observatory. "OECD Framework for the Classification of AI Systems." OECD. 2022.

⁵ OECD.AI Policy Observatory. "OECD AI Principles: Recommendation of the Council on Artificial Intelligence." OECD. 2019.

⁶ Organisation for Economic Co-operation and Development. "Pilot Phase Summary of HAIP Reporting Framework." OECD. 2025.