Despite the hype and documented successful use cases, scepticism continues to threaten the potential of AI in Australia and worldwide.

Regulation that addresses Australians’ concerns without constraining AI’s rich potential can help close the trust gap and realise the technology’s benefits.

The AI trust gap in Australia is worryingly large, with new Workday research, Closing the AI Trust Gap, revealing that Australians’ scepticism of AI is higher than the global average. Sixty percent of Australians are worried about the trustworthiness of AI, the highest among all countries surveyed, and only half (51%) are confident their organisation can clearly explain how AI will improve work.

This distrust potentially translates to lower levels of adoption; just 40 percent of Australians welcome AI in their organisation, compared to a global average of 54 percent.

Furthermore, more than one-third (37%) of Australian respondents do not trust their organisation to identify and mitigate bias relating to sensitive characteristics when using AI, and 30 percent are not confident their company puts employee interests above its own when implementing AI.


Only 40 percent of Australians want AI in their workplace.


Closing the trust gap with effective regulation

The Australian workforce views effective regulation of AI and data in areas such as financial systems, healthcare and autonomous vehicles as a step towards increasing the trustworthiness of AI. In fact, Australian survey respondents are second only to those from Switzerland in their support for regulatory measures.

However, with 51 percent of Australians agreeing companies cannot be trusted to fully self-regulate the use of AI and three in four saying their organisation is not collaborating on AI regulation, there is a clear gap between expectation and current reality.

Fortunately, the Australian Government is moving to address the regulatory gap. In June 2023, the Department of Industry, Science and Resources (DISR) published its Safe and Responsible AI discussion paper and sought feedback on potential regulatory approaches and safeguards the Government should put in place to address risks arising from the use of AI.

At Workday, we engaged with this process and provided DISR with our thoughts on a path forward for AI governance and meaningful safeguards. DISR recently published an interim response to the consultation that echoed themes consistent with Workday’s AI policy advocacy. The response advocates a risk-based approach to AI regulation.

DISR makes clear that the Government will strengthen existing laws that deal with known AI harms, such as privacy and copyright concerns, and work on new initiatives to ensure the design, development and deployment of AI in legitimate but high-risk settings is safe and reliable.

These initiatives include using testing, transparency and accountability measures to prevent harm from occurring in high-risk settings; clarifying and strengthening laws to safeguard citizens; working internationally to support the safe development and deployment of AI; and maximising the benefits of AI. For low-risk settings, the Government aims to ensure the continued use of AI is largely unimpeded.

Workday is all-in on the possibilities for AI to unlock human potential, and we believe smart safeguards are key to bridging the AI trust gap and unleashing the full benefits of the technology. Recent AI policy developments in the European Union and the United States, where both have taken a risk-based approach to setting guardrails, have been encouraging.

We are pleased to see greater attention being paid to AI policy issues in the region, and to Australia taking the lead in making progress. We look forward to Australia similarly taking a nuanced, risk-based approach to regulating AI that aims to close the trust gap while realising the full potential of this innovative technology.


Establishing trust through transparency, consistency and meeting user expectations

While the policy landscape related to AI continues to evolve around the world, there is much that companies can do today. As Jim Stratton, our chief technology officer, says, policy (including regulation) is only one of the pillars, along with principles, practices and people, that underpin an effective responsible AI program at organisations.

Principles serve as the ethical compass that steers AI initiatives towards responsible outcomes; practices operationalise ethical considerations throughout the AI lifecycle; and the people who work with AI shape and guide its development in accordance with those considerations. (You can read more about how your organisation can drive the right outcomes across each of these pillars here.)

Nearly a decade ago, we at Workday recognised the importance of leadership in the responsible development and deployment of AI, and how this helps our customers, employees and society more broadly.

We believe AI should elevate, not displace, humans, and that trust in the technology must be earned through transparency.


Frameworks for ethical AI and risk-based regulation close the AI trust gap

While AI will ultimately bring immense benefits to Australian businesses, the latest Workday research shows bridging the trust gap remains a considerable task. Organisations can make progress by developing and adhering to their own frameworks for ethical AI.

However, while principles, practices and people establish an internal foundation for responsible AI, there is a level of trust that can only be achieved by underpinning these efforts with effective public policy.

Regulation that establishes the infrastructure to enforce ethical principles and best practices, and shape organisational cultures surrounding AI, will effectively close the AI trust gap.

We are at a crucial point in the development of AI, and the right approach to regulation from the Australian Government, combined with proactive measures from organisations, can drive us forward.


This article was written by Eunice Lim, Director, Corporate Affairs, APJ, Workday.