Beyond the Hype: Australia's 10 AI Guidelines

A Blueprint for Ethical AI Development

Australia's center-left government has pledged to make artificial intelligence safe and responsible, joining other nations in pursuing targeted regulation centered on human oversight and transparency. Industry and Science Minister Ed Husic last week released 10 new voluntary AI guidelines to guide the safe rollout of AI systems, following the rapid proliferation of AI tools in business and daily life.

"Australians know AI can do great things, but people want to know there are protections in place if things go off the rails," Husic said. The government has opened a month-long consultation to consider making these voluntary guidelines mandatory in high-risk settings.

The 10 AI Guardrails for Safe AI Use

To support these efforts, the National AI Centre (NAIC) developed and published the Voluntary AI Safety Standard, which sets out the following 10 guardrails to ensure AI is used responsibly in the public interest by protecting people, reducing risk, and fostering trust in AI systems:

1. Accountability: Organizations should establish governance by naming an AI owner and putting in place a plan for AI safety.

2. Risk Management: Organizations should establish ongoing processes for understanding and mitigating the risk of harm from AI.

3. Data Protection: Organizations should protect AI systems through data governance and cybersecurity measures.

4. Model Testing: AI systems need rigorous testing before deployment and ongoing monitoring for unintended consequences after deployment.

5. Human Oversight: Mechanisms for human intervention must exist across the full lifecycle of an AI system.

6. Transparency: Organizations should disclose when AI is used in decision-making or content creation.

7. Challenge Mechanism: People affected by AI-driven decisions should have clear procedures for challenging them.

8. Supply Chain Transparency: Organizations should share information about their AI systems with partners across the supply chain so those partners can take the necessary risk management measures.

9. Record Keeping: Organizations should maintain detailed records demonstrating how the AI safety standards have been adhered to.

10. Stakeholder Engagement: Organizations should engage stakeholders on an ongoing basis to inform decisions, ensure fairness and equity, and reduce bias.

These guardrails are intended to help Australian businesses develop AI systems responsibly. Adopting the standard would also position Australian organizations to align with future regulations and international standards.

International Context of AI Regulation

The guidelines arrive at a time when regulators worldwide, while noting the growing popularity of generative AI tools such as ChatGPT and Google's Gemini, are raising concerns that AI is helping to spread misinformation and fake news.

For instance, the European Union approved sweeping AI legislation in May that imposes far stricter transparency requirements on high-risk AI systems, setting a high bar for responsible AI development.

"We don't think that there is a right to self-regulation anymore. I think we've passed that threshold," Husic told ABC News. Australia currently operates off voluntary guidelines but is well-positioning itself for future mandatory regulations in high-risk AI settings.

A Path to Mandatory AI Regulations

Australia does not yet have any specific AI legislation, but the country did publish a set of eight voluntary principles for responsible AI use in 2019. The government reported last year, however, that these principles cannot meet the requirements of high-risk scenarios. Husic added that only about a third of the firms using artificial intelligence are deploying it responsibly, a clear signal that tighter rules are needed.

With AI potentially creating up to 200,000 jobs in Australia by 2030, the guidelines could help businesses develop AI responsibly and position Australia as a global leader in AI innovation. The government's Voluntary AI Safety Standard is one step toward that goal, helping businesses align with both local legal expectations and international best practices.
