OECD launches Pilot to Monitor Application of G7 Code of Conduct on Advanced AI Development

The Organisation for Economic Co-operation and Development (OECD) announced a pilot phase to monitor the application of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems. The initiative will test a reporting framework intended to gather information about how organisations developing advanced artificial intelligence (AI) systems align with the Actions of the Code of Conduct. It marks a significant milestone in the G7's ongoing commitment to promoting the safe, secure and trustworthy development, deployment and use of advanced AI systems.


The G7 Hiroshima AI Process, launched in May 2023, delivered a Comprehensive Policy Framework that included several elements: the OECD's report Towards a G7 Common Understanding of Generative AI, International Guiding Principles for All AI Actors and for Organisations Developing Advanced AI Systems, the International Code of Conduct for Organisations Developing Advanced AI Systems, and project-based co-operation on AI. Under Italy's current G7 Presidency, G7 members have focused on advancing these outcomes.

The pilot phase of the reporting framework, open until 6 September 2024, marks a critical first step towards establishing a robust monitoring mechanism for the Code of Conduct, as called for by G7 Leaders. The draft reporting framework was designed with input from leading AI developers across G7 countries and is supported by the G7 under the Italian Presidency. It comprises a set of questions based on the Code of Conduct's 11 Actions. Once finalised, the reporting framework will facilitate transparency and comparability of measures taken to mitigate the risks of advanced AI systems and will help identify and disseminate good practices.


Organisations developing advanced AI systems are welcome to participate in the pilot. Responses provided during this period will be used to refine and improve the reporting framework, with the aim of launching a final version later this year. A common framework could improve the comparability of information available to the public and simplify reporting for organisations operating in multiple jurisdictions.

The OECD has been at the forefront of AI policymaking since 2016. The OECD Recommendation on AI, adopted in 2019 as the first intergovernmental standard on AI and updated in 2024, serves as a global reference for AI policy. The OECD has a track record of global intergovernmental collaboration on an equal footing to tackle challenging public policy issues that transcend national borders.
