
A new course to teach people about fairness in machine learning



In my undergraduate studies, I majored in philosophy with a focus on ethics, spending countless hours grappling with the notion of fairness: both how to define it and how to effect it in society. Little did I know then how critical these studies would be to my current work on the machine learning education team where I support efforts related to the responsible development and use of AI.


As ML practitioners build, evaluate, and deploy machine learning models, they should keep fairness considerations, such as how different demographic groups will be affected by a model’s predictions, at the forefront of their minds. They should also proactively develop strategies to identify and mitigate the effects of algorithmic bias.


To help practitioners achieve these goals, Google’s engineering education and ML fairness teams developed a 60-minute self-study training module on fairness, which is now available publicly as part of our popular Machine Learning Crash Course (MLCC).

The MLCC Fairness module explores how human biases affect data sets. For example, people asked to describe a photo of bananas may not remark on the fruit’s color (“yellow bananas”) unless they perceive that color as atypical.
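
One concrete way to surface this kind of skew is a quick audit of subgroup representation before training. The sketch below is not taken from the course itself; it assumes a pandas DataFrame `train_df` with an illustrative demographic column `group` and a binary `label` column, and it checks only one simple form of bias (representation and label skew) among the many types the module covers.

```python
import pandas as pd

def audit_representation(train_df: pd.DataFrame) -> pd.DataFrame:
    """Summarize each subgroup's share of the data and its positive-label rate."""
    summary = train_df.groupby("group").agg(
        n=("label", "size"),
        positive_rate=("label", "mean"),  # assumes binary 0/1 labels
    )
    # A subgroup with a tiny share of the data, or an unusually high or
    # low positive-label rate, is worth a closer look before training.
    summary["share_of_data"] = summary["n"] / summary["n"].sum()
    return summary
```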

Students who complete this training will learn:

  • Different types of human biases that can manifest in machine learning models via data
  • How to identify potential areas of human bias in data before training a model
  • Methods for evaluating a model’s predictions not just for overall performance, but also for bias (see the sketch after this list)
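
To make the last point concrete, here is a minimal sketch, again not part of the course material, of one common approach: slicing evaluation metrics by subgroup. It assumes binary labels and predictions stored in a pandas DataFrame with illustrative column names `group`, `label`, and `pred`.

```python
import pandas as pd
from sklearn.metrics import accuracy_score, confusion_matrix

def per_group_metrics(df: pd.DataFrame) -> pd.DataFrame:
    """Compute accuracy and false positive rate separately for each subgroup."""
    rows = []
    for group, sub in df.groupby("group"):
        # Fix the label order so ravel() always unpacks as tn, fp, fn, tp.
        tn, fp, fn, tp = confusion_matrix(
            sub["label"], sub["pred"], labels=[0, 1]
        ).ravel()
        rows.append({
            "group": group,
            "n": len(sub),
            "accuracy": accuracy_score(sub["label"], sub["pred"]),
            # Of the examples whose true label is negative, what fraction
            # did the model incorrectly flag as positive?
            "false_positive_rate": fp / (fp + tn) if (fp + tn) else float("nan"),
        })
    return pd.DataFrame(rows)
```

Large gaps between subgroups’ false positive rates can signal that a model performs systematically worse for some groups, even when its overall accuracy looks healthy.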

In conjunction with the release of this new Fairness module, we’ve added more than a dozen new fairness entries to our Machine Learning Glossary (tagged with a scale icon in the right margin). These entries provide clear, concise definitions of the key fairness concepts discussed in our curriculum, designed to serve as a go-to reference for both beginners and experienced practitioners. We also hope these glossary entries will help further socialize fairness concerns within the ML community.


We’re excited to share this module with you, and we hope it provides additional tools and frameworks to aid in building systems that are fair and inclusive for all. You can learn more about our work on fairness and other responsible AI practices on our website.
