Think piece: What does it mean for AI to be ‘trustworthy’?

By: Dr Zena Assad



Photo: Yutong Liu & Kingston School of Art

Interest and investment in autonomous computing have skyrocketed in recent years. This has sparked well-intentioned discussions of how to ensure AI is ‘trustworthy’.

The sweeping narratives around emerging computing technologies have directed focus away from the purely technical, with socio-technical concepts like trust becoming prominent focal points.

Trustworthy AI has become a pillar of technological safety discussions, so much so that the concept is now referenced directly in many AI policy documents.

But how can a machine be ‘trustworthy’?

Trust is a human concept, fundamentally subjective and socially defined. This makes it difficult to describe. We all experience trust differently. This is partly why the definition of trust, when applied to AI, is such a point of contention.

Despite this lack of consensus, the UN AI Actions, the EU AI Act, the US Executive Order on AI and Australia’s National Framework for the Assurance of AI in Government all promote trustworthy AI. However, not one of these initiatives defines what it means by ‘trust’.

The increasing prominence of a term without a widely agreed definition is creating an ambiguous ecosystem in which many policy initiatives are emerging.

If there’s no consensus on what trustworthy AI is, and if we all define trust differently, how can a policy document help us achieve it?

To address this, I ran a workshop on ‘trustworthy AI’ in June, in collaboration with the Defence Science and Technology Group (DSTG) and InSpace. Its purpose was to explore what we mean by trust in the context of AI systems.

We started at the very beginning: What is AI?

AI, in its broadest sense, is a system of inputs and outputs. It analyses large amounts of data, finding patterns that are used to inform outputs. While the technology underpinning AI is not new, the recent applications of AI (such as ChatGPT) are new and have scaled at an exponential pace.

Then we asked ourselves: Why does AI need to be trustworthy?

While technology isn’t the answer to all our problems, it has improved many aspects of our lives over the course of history. A lack of trust in emerging AI technologies can inhibit and deter the benefits these technologies may afford society.

Put simply, if the public doesn’t trust AI, then some important applications of the technology will face implementation barriers.

For example, AI-enabled drones are being used strategically to access high-risk and inaccessible areas during fires. Yet despite the seemingly obvious benefits, their uptake has been slow because of a lack of community trust in these technologies.

While scaled technologies have their benefits, they also come with limitations. Examples include racial bias in image recognition systems, gender bias in recruitment systems and driverless cars striking pedestrians.

It’s important to maintain a healthy scepticism around any technology, and to avoid the blindness that comes with intellectual complacency. The aim is a ‘Goldilocks balance’: just the right amount of trust in AI.

Policies around trustworthy AI can help with achieving a calibrated level of trust, but only if we first understand what trust means.

If these initiatives are to succeed, they first need to define, through consensus, what trust and distrust are.

Trust in technology is built on confidence that the technology will be used safely and reliably for its intended purpose. The wide-reaching impacts of rapidly scaled AI systems have led to high levels of distrust among the public.

Policy initiatives attempting to address trustworthy AI must come to agreed-upon definitions of these concepts, or risk shooting without a target. If they don’t, at best they will be rendered ineffective by their ambiguity; at worst, they will languish, unactionable and soon forgotten.

Dr Zena Assad is a member of the Integrated AI Network. This article was originally published on the ANU Policy Brief page.
