Australia Blog


Celebrating extraordinary Australian AI stories


For over a decade, we’ve been working on artificial intelligence to make more of the world’s information accessible and useful. Today, AI is infused in almost all our products – helping you to snap the perfect selfie with Portrait Mode on Pixel 2, breaking down language barriers in Google Translate, and making it easier to respond to an email with Smart Reply in Gmail. We’ve even taken AI into the kitchen, teaming up with a bakery to whip up the world’s first smart cookie.

But the impact of AI goes beyond making products more useful (or tasty). We’re starting to see how AI can be a positive force for society––from high school students who built a machine learning tool to detect diseases on plants and skin, to conservationists preventing illegal logging in the Amazon rainforest, to researchers partnering with astronomers to hunt for new planets. The potential of AI to solve complex, real-world problems is huge. To help more people tackle challenges with AI, we’ve open-sourced machine learning tools like TensorFlow, we help others innovate with Cloud AI, and we collaborate with researchers around the globe.

And here in Australia, researchers, developers and businesses are using these AI tools to solve difficult problems in the fields of health, conservation, linguistics and more. Today, we celebrated some of these stories with an event in our Sydney office, to show how AI is driving impact in extraordinary, unexpected and tangible ways––here and now.

Here’s a snapshot of some Aussie AI-powered products and projects in a range of fields:

Saving dugongs with AI


Credit: Ahmed M. Shawky

Dugongs are the gentle giants of the sea, and despite their size, they are hard to keep track of. This has presented a challenge for conservation researchers working to save this endangered species. For decades, scientists had to spend days peering out of small planes to count populations––an approach that was expensive, time-consuming and often hazardous. Researchers then analysed the imagery manually, zooming in to count dugongs one by one.

Dr. Amanda Hodgson of Murdoch University and Dr. Frederic Maire of Queensland University of Technology knew there must be a better way. In 2010, they began testing drones, which capture aerial photographs of the ocean––and in 2014, they applied the magic of machine learning in their quest to make processing drone imagery faster and cheaper, and so make drone surveys a realistic option. Using TensorFlow, Google’s free open-source machine learning platform, the team built a detector that learns to find dugongs in these photos automatically.
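To make the idea concrete, here is a minimal sketch of one way such a detector can work: the large aerial photo is cut into small tiles, each tile is scored by a classifier, and tiles scoring above a threshold are flagged as possible sightings. The tile size, threshold and stub classifier below are illustrative assumptions only––the team’s actual TensorFlow model and parameters aren’t described here.

```python
import numpy as np

TILE = 64  # tile size in pixels (illustrative choice)

def tiles(image, size=TILE):
    """Split a large aerial image into non-overlapping square tiles."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield (y, x), image[y:y + size, x:x + size]

def detect(image, classify, threshold=0.5):
    """Return coordinates of tiles whose classifier score meets the threshold.

    `classify` stands in for a trained model (e.g. a TensorFlow CNN)
    that maps a tile to a probability of containing a dugong.
    """
    return [(y, x) for (y, x), tile in tiles(image)
            if classify(tile) >= threshold]

# Stub classifier: flags unusually bright tiles, standing in for a
# trained network (a real dugong detector would be far subtler).
def toy_classifier(tile):
    return float(tile.mean() > 200)

# A dark 128x128 "ocean" with one bright 64x64 patch in the top-left.
img = np.zeros((128, 128), dtype=np.uint8)
img[:64, :64] = 255
print(detect(img, toy_classifier))  # [(0, 0)]
```

The appeal of the tiling approach is that it turns one enormous survey photo into thousands of small, uniform classification problems––exactly the kind of repetitive work a trained model does faster than a human.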


So far, the team has processed more than 37,000 images––identifying 70% of the sea cows they’d found manually. This analysis took 18 hours, compared with the 377 hours required for manual review. Hodgson and Maire have now integrated the detection software with mapping software to plot all sightings, giving them richer data about the number and locations of dugongs.

Preserving precious languages 

While there are 6,000+ languages in the world, more than 50% of web content is in English. AI can help us make this content accessible, break down language barriers and even preserve endangered languages. Since 2012, Google's language technology teams have been using neural networks to make the world’s diverse language content universally accessible and useful.

Professor Janet Wiles and Ben Foley, researchers with the ARC Centre of Excellence for the Dynamics of Language (CoEDL), are working to transcribe and preserve endangered languages. There are over 300 Indigenous languages in Australia––which can be as distinct from one another as Japanese is from German. Indigenous languages are also inextricably connected to the land, imbued with history and sacred songlines, and passed down through oral tradition.

Alt text required

CoEDL conducting traditional methods of fieldwork

With many Indigenous languages endangered, research and transcription is both time-sensitive and labour-intensive. CoEDL has fieldworkers working with around 130 languages, recording mountains of data (almost 50,000 hours of language audio in archives), which could take 1.9 million hours to transcribe using traditional methods. Recognising the importance and sheer enormity of the work, Wiles and Foley realised AI could provide a new way to harness the contributions of community members and linguists, while protecting the integrity of this precious language data.


CoEDL and Google teams building language models at a recent workshop

In 2016, Wiles and Foley turned to Google’s open-source AI technology to build bespoke models for several Indigenous languages––allowing for faster transcription. While this project is still in its early stages, Google and CoEDL are delighted to announce a partnership to use TensorFlow and Kaldi to transcribe Indigenous languages. So far, we’ve co-hosted workshops with 35 linguists and built initial models for 12 Indigenous languages, including Bininj Kunwok, Kriol, Mangarayi, Nakkara, Pitjantjatjara, Warlpiri and Wubuy, as well as Indigenous languages in regions surrounding Australia, such as Abui (spoken in Indonesia) and Cook Islands Maori.

CoEDL aims to train more language workers to contribute to the models, and to build an even simpler interface. Longer term, the team dreams of integrating speech recognition and synthesis systems into their social robot Opie, designed with the Ngukurr Language Center to promote community engagement and the revitalisation of endangered languages.

Enhancing healthcare with AI

There’s a huge opportunity for AI to help solve difficult problems, and in healthcare we’re already seeing some encouraging applications that could benefit billions of people. Working closely with clinicians and medical providers, we’re developing tools that we hope will improve the availability and accuracy of medical services across a range of conditions, from diabetic eye disease to cardiovascular health and cancer.

In early 2017, we partnered with Dr Elliot Smith of Brisbane-based medical data specialist Maxwell Plus to combine deep learning with medical imaging to diagnose prostate cancer faster, more affordably and more accurately. Dr Smith, an expert in magnetic resonance imaging (MRI) systems with a PhD in Electrical Engineering, was troubled that highly trained diagnosticians are a scarce resource, unevenly distributed across Australia. Moreover, prostate cancer diagnostic methods can take up to seven days to return results. Smith felt compelled to find a scalable solution that would make this clinical brainpower available to all doctors and offer patients better care.

Dr Smith used Google Cloud AI to train a system to analyse hundreds of thousands of prostate cancer images. The system delivers results to clinicians in 10–15 minutes, rather than the 2–7 days taken by traditional diagnostic methods. Maxwell Plus has since expanded to cover breast and lung cancer diagnostics, and has a goal of processing 150,000 cases by the end of 2018.


Maxwell Plus’ interface running cancer diagnostics, powered by Google Cloud Platform

Cherishing memories, art and culture

Advances in computer vision and mobile photography let you search, stylise and share your photos––and learn about the world around you.

Three years ago, we introduced Google Photos as a home for all of your pictures and videos, organised and brought to life. One AI-powered feature rolling out soon is Colour Pop, which uses machine learning to detect the subject of your photo and leave it in colour––while the background is set to black and white.

We’ve also seen breakthroughs in computational photography when AI, software and hardware come together. HDR+ (which runs on all recent Pixel and Nexus phones) produces photos and videos with low noise and sharp details, even in dim lighting. To see this in action, here’s a video by Aussie surf photography leader Aquabumps, showcasing HDR+ quality in video.

The Pixel 2 also contains a specialised neural network that powers portrait mode, trained on almost a million images. Using machine learning, the device predicts what should stay sharp in the photo and creates a mask around it, producing a professional-looking shallow depth-of-field image. Here are a few portrait mode snaps taken in Australia by photographers @karin_samsovona and @samscrim.
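As a simplified illustration of that masking step (a sketch of the general technique, not Google’s actual pipeline): once a per-pixel mask marks the subject, the final frame can be composed by keeping masked pixels sharp and replacing everything else with a blurred copy of the image. The box blur and toy image below are illustrative stand-ins.

```python
import numpy as np

def box_blur(image, k=9):
    """Crude box blur: each pixel becomes the mean of a k-by-k window.

    Slow but simple; real pipelines use optimised, depth-aware blurs.
    """
    h, w = image.shape
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out

def portrait_composite(image, mask, k=9):
    """Keep masked (subject) pixels sharp; blur the background.

    `mask` is a float array in [0, 1], 1 where the subject is––on the
    phone this would come from the neural network's prediction.
    """
    blurred = box_blur(image.astype(float), k)
    return mask * image + (1.0 - mask) * blurred

# Toy grayscale "photo": a bright square subject on a dark background.
img = np.zeros((16, 16))
img[4:12, 4:12] = 100.0
mask = np.zeros((16, 16))
mask[4:12, 4:12] = 1.0
out = portrait_composite(img, mask)
```

Inside the mask the output matches the input exactly, while background pixels are smeared by the blur––which is what produces the shallow depth-of-field look.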


These Australian AI stories offer a glimpse of the potential of AI to improve people’s lives––giving you the tools to capture your most precious moments, supporting doctors to serve their patients, helping animal populations recover, and helping endangered languages live on. It is a privilege to partner with so many brilliant minds and creative thinkers to discover new applications of AI, and to uncover new ways to tackle some of our most pressing social issues.