Australia Blog

Harnessing AI's Potential: The Path to Opportunity


Last week, I traveled to Australia to meet with policymakers, academics, and industry representatives to discuss how we can make the most of the AI opportunity.

And what an opportunity it is: after a decade of steady progress, artificial intelligence has burst into public consciousness, and today's early chatbots barely scratch the surface of what is to come.

We are seeing AI's potential to accelerate leaps in discovery. Google DeepMind’s AlphaFold program, for example, has predicted the shapes of nearly all proteins known to science, giving us the equivalent of nearly 400 million years of research progress in a matter of weeks.

We also see where those breakthroughs can lead: More than two million researchers are using AlphaFold to advance biology research, including researchers in Australia who are, with the help of AlphaFold, examining early-onset Parkinson’s and paving the way for new treatments.

As we transition into this next phase of AI, public-private collaboration is one way we can ensure we’re making the most of this opportunity: It’s why Google and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) have been working together to develop new tools for climate and data scientists, enabling them to do things like analyse the impact of climate change, pollution, and fishing on the Great Barrier Reef or measure how seagrass ecosystems absorb and sequester carbon.

But with AI promising significant potential in preventive medicine, precision agriculture, economic productivity, and more, getting the regulation piece right is also essential.

Through our Digital Future Initiative, we’re investing $1 billion in Australian AI research, partnerships, and infrastructure.

During a visit to the Australian National University, I spoke about how Australia can accelerate AI progress.

I told the audience we can start by getting the regulation piece right with a three-pronged approach that’s balanced, aligned, and targeted.

Balanced. We need to protect the public interest while promoting AI innovation and economic growth. We do that, not by reinventing the wheel, but by working to improve our current legal frameworks and identifying and filling gaps where existing laws don’t adequately cover AI applications. The goal should not be perfection, especially at this early stage, but improving our current systems and striving for fairness.

Taking a balanced approach to regulation means applying fair use, copyright exceptions, and rules governing publicly available data to unlock scientific advances and the ability to learn from prior knowledge while still ensuring website owners can use machine-readable tools to opt out of having content on their sites used for AI training.
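One concrete, machine-readable opt-out already in wide use is the robots.txt file, which sites publish at their root. As one illustration, Google provides the Google-Extended user-agent token, which site owners can disallow to opt their content out of use in training Google's AI models without affecting how the site appears in Search. A minimal sketch (the specific paths shown are placeholders):

```
# robots.txt at the site root
# Google-Extended is Google's opt-out token for AI training;
# disallowing it does not affect Search indexing or ranking.
User-agent: Google-Extended
Disallow: /

# Ordinary search crawling can continue as normal:
User-agent: Googlebot
Allow: /
```

Because robots.txt is a long-standing, widely parsed convention, directives like these give publishers a low-cost way to express training preferences while leaving publicly available data open for other uses.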

Aligned. We need national laws that align with the policy frameworks of other leading democracies promoting AI innovation. Consistent regulations will help us to avoid a patchwork of conflicting rules that hamper international AI collaboration and innovation.

The good news is countries don’t need to start from scratch: International organisations like the OECD have already set out AI frameworks. And governments are harmonising rules to support a cohesive approach to AI governance, from the G7’s Hiroshima Process to the AI Safety Summits in the UK and Korea to the UN High Level Advisory Body on AI.

Finally, international organisations like the International Organization for Standardization (ISO) are developing standards to ensure AI systems are safe, secure, and trustworthy.

We can incorporate these standards into national regulations and provide companies at the forefront of AI innovation with uniform benchmarks against which they can be assessed and compared, while serving as a “seal of assurance” recognised by users or purchasers of AI systems.

Targeted. We need rules of the road for AI that are proportionate to risk, recognising that high-risk activities are often also high-value activities and that there is a cost to unduly slowing implementation.

To support broadly beneficial AI advances, we should focus on regulating outputs, letting regulators intervene where risks and harms actually occur, rather than trying to regulate fast-evolving computer science and deep-learning techniques themselves.

Issues in banking will differ from issues in pharmaceuticals or transportation, which is why regulators in each sector should draw on their unique expertise, while ramping up their understanding of novel AI issues.

In short, every agency will need to become an AI agency. We certainly don’t need one AI agency to rule them all, any more than we need a Department of Engines or a single law governing all the uses of electricity. Instead, we should adopt a hub-and-spoke model with a center of technical expertise at an agency like America’s NIST that can advance government understanding of AI and support sectoral agencies.

We should also distinguish AI developers from AI deployers and AI users. With AI touching every industry and every facet of our daily lives, model developers won’t be able to anticipate and protect against all possible misuses of AI, but that doesn’t mean there shouldn’t be safeguards. Liability regimes should focus on reasonable development processes and clear communication of model limits, while holding deployers (who have more control over and greater knowledge of specific applications) and users accountable for misuse within their control.

Finally, to ensure wide support for broad AI adoption, we also need to get serious about laying the groundwork for AI-driven job transitions. AI will give a boost to businesses of all sizes and will allow workers to focus on the non-routine and more rewarding elements of their jobs. But in the short term, jobs will change, and it will take public-private partnerships and collaboration to prepare workers through AI upskilling courses and programs.

  • Discussing how Australia can make the most of the AI Opportunity during a talk at ANU

  • Meeting with policymakers at the Parliament of Australia with Google’s Lucinda Longcroft and Bec Turner

  • Meeting with industry representatives about how Australia can be a leader in AI

Final Thoughts

AI can improve people’s lives by powering progress at digital speed in everything from wildlife recovery to progress on the Sustainable Development Goals.

By working together, across geographies and sectors, countries like Australia can be a leader on AI, fostering ongoing innovation and shaping a future transformed by scientific breakthroughs.