Australia’s Artificial Intelligence Ethics Framework

Artificial Intelligence (AI) has become a part of our daily lives, from organising our schedules with voice assistants to recommending the movies we should watch. It can even help predict and prevent the spread of bushfires.

Given our increasing reliance on AI, it is important to ensure that AI is safe, secure and reliable. The Australian Government's AI Ethics Framework outlines eight principles to guide businesses and governments in responsibly designing, developing and implementing AI, helping to ensure that Australia becomes a global leader in responsible and inclusive AI.


Australia’s AI Ethics Principles

Australia’s eight AI Ethics Principles will help:

  • Achieve safer, more reliable and fairer outcomes for all Australians.
  • Reduce the risk of negative impacts on those affected by AI applications.
  • Ensure that businesses and governments practise the highest ethical standards when designing, developing and implementing AI.

These voluntary principles, recommended to be applied throughout the AI lifecycle, are outlined below.


Human, social and environmental wellbeing

AI systems should benefit individuals, society and the environment.

For example, AI systems designed for internal business purposes, such as increasing efficiency, can have broader impacts on individual, social and environmental wellbeing. Those impacts need to be accounted for.

AI systems that help address areas of global concern, such as the United Nations' Sustainable Development Goals, should be encouraged.


Human-centred values

AI systems should respect human rights, diversity, and the autonomy of individuals.

Human rights risks need to be carefully considered, as AI systems can both enable and hamper these fundamental rights.

AI systems should not undermine the democratic process, and should not undertake actions that threaten individual autonomy, such as deception, unfair manipulation, unjustified surveillance, or failing to maintain alignment between a disclosed purpose and true action.


Fairness

AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.

This is particularly important given concerns about the potential for AI to perpetuate societal injustices and have a disparate impact on vulnerable and underrepresented groups. AI systems must also comply with anti-discrimination laws.
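
To make the idea of "disparate impact" concrete, it can be expressed as the ratio of favourable AI decisions received by the least-favoured group to those received by the most-favoured group. The Python sketch below is purely illustrative: the function names, the two-group sample data and the idea of flagging low ratios are assumptions for illustration, not part of the Framework.

```python
from collections import Counter

def selection_rates(outcomes):
    """Favourable-outcome rate per group.

    `outcomes` is a list of (group, favourable) pairs, where
    `favourable` is True when the AI system's decision benefits
    the person (e.g. a loan approval).
    """
    totals, favourable = Counter(), Counter()
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            favourable[group] += 1
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    A ratio well below 1.0 suggests one group receives favourable
    decisions far less often, which may warrant further review.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Illustrative data only: group A is approved 50% of the time, group B 25%.
decisions = [("A", True), ("A", False), ("B", False), ("B", False),
             ("A", True), ("A", False), ("B", True), ("B", False)]
print(disparate_impact_ratio(decisions))  # 0.5
```

A low ratio does not itself establish unlawful discrimination; it is simply a signal that the system's outcomes deserve closer scrutiny.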


Privacy protection and security

AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.

This includes ensuring proper data governance and management for all data used and generated by the AI system throughout its lifecycle. Security vulnerabilities need to be identified and addressed to guard against potential malicious cyber attacks.


Reliability and safety

AI systems should reliably operate in accordance with their intended purpose.

AI systems should not pose unreasonable safety risks, and should adopt safety measures that are proportionate to the magnitude of potential risks. AI systems should be monitored and tested to ensure they continue to meet their intended purpose, and any identified problems should be addressed with ongoing risk management as appropriate.
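
As one concrete illustration of what ongoing monitoring and testing might involve, the sketch below compares a recent batch of predictions against known outcomes and escalates for human review when accuracy falls below a chosen tolerance. The threshold, the sample batch and the escalation step are illustrative assumptions only, not requirements of the Framework.

```python
def monitor_accuracy(predictions, actuals, threshold=0.9):
    """Flag the system for review if recent accuracy drops below threshold."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    accuracy = correct / len(predictions)
    if accuracy < threshold:
        print(f"Accuracy {accuracy:.2f} is below {threshold}: escalate for human review")
    else:
        print(f"Accuracy {accuracy:.2f}: within tolerance")
    return accuracy

# Illustrative check over a recent batch of five decisions.
monitor_accuracy([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])  # 0.80 -> escalate
```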


Transparency and explainability

There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.

Users should be able to understand what the AI system is doing and why, and be able to obtain reasonable disclosures regarding the AI system in a timely manner.


Contestability

When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.

It should be possible to remedy harms when things go wrong, especially for vulnerable persons or groups. There should be sufficient access to the information available to the algorithm, and the inferences drawn from it, for contestability to be effective.


Accountability

Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. The application of legal principles regarding accountability for AI systems is still developing.


When should these principles be applied?

The principles are entirely voluntary, and not every use of AI (e.g. accounting software that uses AI) requires comprehensive analysis against all of the principles.

It will be most important to follow the principles where the AI use involves or affects human beings, the environment or society.


Key takeaways

Given our increased reliance on AI in our everyday lives, it is important to ensure that AI is safe, secure and reliable. The AI Ethics Framework outlines the principles of: human, social and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; and accountability. By following these principles, businesses and governments can help to ensure that Australia becomes a global leader in responsible and inclusive AI.

Nyman Gibson Miralis provides expert advice and representation in complex cases involving the use of AI.

Contact us if you require assistance.