The Australian Government’s Voluntary AI Safety Standard equips organisations with a framework to help harness artificial intelligence responsibly and manage potential risks to their businesses, employees, and communities.

By adopting this standard, organisations can use AI safely, ethically, and transparently—mitigating risks, protecting data, and building trust. This voluntary framework also prepares organisations for forthcoming mandatory standards, enabling a smoother transition to future regulations.

This standard promotes responsible AI use across organisations of all sizes—from small businesses using tools like GPT-based chat for content creation and customer support, to large enterprises deploying AI models for data analysis and predictive analytics in finance and beyond.

The standard sets out ten essential guardrails that address governance, transparency, training, strategic planning, and risk management. It also includes key controls like testing, monitoring, impact assessment, and human oversight.

Aligned with Australian law and the international standard ISO/IEC 42001:2023 (AI management systems), this voluntary standard is designed to be accessible to small and medium enterprises, supporting flexibility while advancing best practice in AI safety.

What should leaders do?

To proactively align your organisation’s AI practices with the Voluntary AI Safety Standard, leaders can:

  • Assess current AI use: Review where and how AI is used within your organisation, identifying high-risk areas and opportunities for improvement. A simple AI use register, sketched after this list, can support this step.
  • Implement the guardrails: Begin adopting the ten voluntary guardrails below, focusing on record-keeping, transparency, and testing to ensure ethical and safe AI practices.
  • Build internal capability: Invest in training and resources to strengthen your team’s understanding and application of responsible AI practices.
  • Engage stakeholders: Communicate with employees, customers, and partners about your AI initiatives and the steps you’re taking to ensure safety and accountability.
  • Stay informed on regulations: Monitor regulatory changes and adjust your AI practices to stay prepared for future requirements.
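
As a starting point for the first two actions, a lightweight AI use register can record where AI is used, who is accountable for it, and an initial risk rating. The sketch below is illustrative only: the fields, risk levels, and the AIUseRecord structure are assumptions chosen for the example, not something prescribed by the Voluntary AI Safety Standard.

    from dataclasses import dataclass, asdict
    from datetime import date
    import json

    # Illustrative only: field names and risk levels are assumptions,
    # not requirements of the Voluntary AI Safety Standard.
    @dataclass
    class AIUseRecord:
        system_name: str      # e.g. "Customer support chatbot"
        business_owner: str   # accountable person or team (Guardrail 1)
        purpose: str          # what the system is used for
        data_sources: str     # provenance of training/input data (Guardrail 3)
        risk_rating: str      # e.g. "low", "medium", "high" (Guardrail 2)
        human_oversight: str  # how people can intervene (Guardrail 5)
        last_reviewed: str    # ISO date of last review (Guardrail 9)

    def save_register(records: list[AIUseRecord], path: str = "ai_use_register.json") -> None:
        """Persist the register so it can support record-keeping and later review."""
        with open(path, "w", encoding="utf-8") as f:
            json.dump([asdict(r) for r in records], f, indent=2)

    if __name__ == "__main__":
        register = [
            AIUseRecord(
                system_name="Customer support chatbot",
                business_owner="Customer Experience team",
                purpose="Answer routine customer queries",
                data_sources="Public product documentation",
                risk_rating="medium",
                human_oversight="Agents review escalated conversations",
                last_reviewed=str(date.today()),
            )
        ]
        save_register(register)

Even a simple register like this gives leaders a single view of their AI footprint and a record that can later feed into the guardrails on risk management, transparency, and record-keeping.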

The Voluntary AI Safety Standard's 10 Guardrails

  • Guardrail 1: Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.

  • Guardrail 2: Establish and implement a risk management process to identify and mitigate risks.

  • Guardrail 3: Protect AI systems and implement data governance measures to manage data quality and provenance.

  • Guardrail 4: Test AI models and systems to evaluate model performance and monitor the system once deployed (illustrated in the sketch after this list).

  • Guardrail 5: Enable human control or intervention in an AI system to achieve meaningful human oversight.

  • Guardrail 6: Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.

  • Guardrail 7: Establish processes for people impacted by AI systems to challenge use or outcomes.

  • Guardrail 8: Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.

  • Guardrail 9: Keep and maintain records to allow third parties to assess compliance with guardrails.

  • Guardrail 10: Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

    Source: https://www.industry.gov.au/publications/voluntary-ai-safety-standard
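
Guardrail 4 calls for testing before deployment and ongoing monitoring afterwards. The sketch below shows, for a simple classifier, what a minimal version of that might look like: an acceptance check on a held-out test set, followed by periodic re-scoring of fresh labelled data. The use of scikit-learn, the accuracy metric, and the 0.80 threshold are assumptions made for this example, not requirements of the standard.

    import logging
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Illustrative only: the metric and threshold are assumptions for this
    # example, not requirements of the Voluntary AI Safety Standard.
    ACCEPTANCE_THRESHOLD = 0.80

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    log = logging.getLogger("ai-monitoring")

    # 1. Pre-deployment testing (Guardrail 4): evaluate on a held-out test set.
    X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
    test_accuracy = accuracy_score(y_test, model.predict(X_test))
    log.info("Pre-deployment test accuracy: %.3f", test_accuracy)

    if test_accuracy < ACCEPTANCE_THRESHOLD:
        raise RuntimeError("Model did not meet the acceptance threshold; do not deploy.")

    # 2. Post-deployment monitoring: periodically re-check performance on fresh,
    #    labelled data and keep the results as records (supports Guardrail 9).
    def monitor_batch(model, X_batch, y_batch) -> float:
        """Score a batch of recent, labelled production data and log the result."""
        accuracy = accuracy_score(y_batch, model.predict(X_batch))
        log.info("Monitoring batch accuracy: %.3f", accuracy)
        if accuracy < ACCEPTANCE_THRESHOLD:
            log.warning("Accuracy below threshold; trigger human review (Guardrail 5).")
        return accuracy

    monitor_batch(model, X_test, y_test)  # stand-in for a real production batch

Keeping the logged results from checks like these also supports Guardrail 9, since they form part of the records a third party could use to assess how the system is being tested and monitored.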

To download the standard and view practical guidance material for each of the guardrails, visit www.industry.gov.au.