How to Ensure Your AI Efforts Are and Remain Responsible
Australian businesses are enthusiastic about the value that can be created by experimenting with and implementing AI. But they are counterbalancing this enthusiasm with caution: advancing down the AI path with an emphasis on using AI responsibly. A risk-managed approach to AI is a sensible way to avoid AI disasters, as CIO magazine has highlighted.
They are supported in this effort by well-thought-out resources that provide guidance on the responsible and safe use of the technology. In particular, the guidance developed by the Federal, State and Territory Governments is an important foundation for many organisations’ thinking and strategies.
Given the quality and effort that have gone into developing this guidance, it makes sense for all organisations to leverage this work rather than trying to reinvent the wheel on what constitutes responsible and safe AI use. What’s important is that any Australian organisation that is either experimenting with or working substantively with AI puts guardrails in place.
They should also be able to demonstrate the extent to which they have stayed within those guardrails and met the relevant standards. Why? Because anyone who comes into contact with the organisation or its AI systems should be aware of how the interaction, and any data associated with it, will be treated. This helps to build comfort and trust as AI tools are used more widely across a range of operational contexts.
The challenge of translating guidelines into guardrails
While Australian organisations have, by and large, bought into the need for safe and responsible use of AI, applying these guidelines in different operational contexts is still a work in progress. A recent government-commissioned survey found that while there is broad support for responsible AI practices, and most organisations believe they have correctly implemented the practices the government has defined, on average Australian organisations successfully adopt only one-third of them.
“Australian businesses consistently overestimate their capability to employ responsible AI practices,” the report found, with “78% of Australian businesses believing they were implementing AI safely and responsibly but in only 29% of cases was this correct.” It added: “There is a gap between perception and practice.”
This, of course, does not suggest that organisations are adopting AI irresponsibly, or that they are doing so with any malicious intent. Rather, it is still very early days for enterprise AI, and with additional practice, organisations will get better at meeting best-practice principles over time.
Additionally, it is worth noting that although responsible AI practices can be expressed in relatively simple terms, a lot of behind-the-scenes work is often required to address them. Maturity, like AI implementation itself, will improve over time but remains a work in progress.
Atturra as a responsible AI case study
As Atturra moves further into the AI space, exploring both internal and customer-facing applications for the technology, we have leveraged the official government guidance to create three core principles for AI solution development. These principles – Clarity, Accuracy and Disclosure – aim to ensure our clients benefit from AI solutions they can trust to deliver on their business objectives, and that we continue to operate responsibly within the broader expectations of our regulators, technology partners and the wider community.
When it comes to clarity, the intent is for AI solutions to be transparent and explainable to our clients. Likewise, we require such clarity from our partners across our AI supply chain to ensure that the way an AI algorithm works is understood, and that no one is being asked to put their trust in a ‘black box’ process.
The second principle, accuracy, considers how we can justify belief in the correctness and reliability of an AI solution. It includes both the rigorous verification and validation processes we perform internally on the algorithms and source data, and the facilitation of a client’s own analysis so they can satisfy their own burden of proof. Inappropriate biases within a solution must be identified, understood and mitigated in line with client and broader expectations, and meaningful human control and oversight is required for all solutions to ensure they continue to produce the desired outputs.
Finally, disclosure is about ensuring that the people impacted by our AI solutions have the information they require for their decision-making. Users of our AI solutions need to know they are engaging with AI, and must be able to contest the outcome and have it changed if they believe it to be inaccurate or unfair.
For clients, disclosure also includes an understanding of not just the strengths but also the weaknesses of the solution – in particular any limitations of the solution or key assumptions upon which it relies. We also share details of our internal governance and accountability processes to ensure clients, suppliers and regulators can assess our ability to manage risks and comply with AI guardrails.
This represents our effort to customise and contextualise a set of common responsible AI guidelines into guardrails for our specific AI activity – and we share it in the hope that it assists other Australian organisations to do the same.
About the author
Petar Bielovich is Director, Data & Analytics for Atturra. He leads a team delivering data, analytics and AI solutions, enabling digital transformation and generating more value from all forms of data. Petar has more than 25 years’ experience working with clients, including Australian Defence, Boral, Telstra and Nestle, and has worked for large professional services organisations and start-ups.