Machine Learning Discrimination: Bias In, Bias Out
Featuring: Dr. Toon Calders
University of Antwerp
July 8, 2020, 11:00 AM to 12:30 PM (EST)
Artificial intelligence is increasingly responsible for decisions that have a huge impact on our lives. But predictions made with data mining and algorithms can affect population subgroups differently. Academic researchers and journalists have shown that decisions taken by predictive algorithms sometimes lead to biased outcomes, reproducing inequalities already present in society. Is it possible to make the data mining process fairness-aware? Are algorithms biased because people are? Or is bias built into how machine learning works at the most fundamental level?
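
As a concrete illustration of "bias in, bias out", the short Python sketch below trains a model on synthetic, historically skewed decision labels and then measures its selection rate for each group. Everything in it (the synthetic data, the variable names, and the use of scikit-learn) is an illustrative assumption, not material from the webinar.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected attribute: two groups, equally represented.
    group = rng.integers(0, 2, size=n)
    # A legitimate qualification score, identically distributed in both groups.
    score = rng.normal(0, 1, size=n)

    # Historical labels: past decisions favoured group 1, so the same score leads
    # to a positive outcome more often for group 1 than for group 0 (bias in).
    p_positive = 1 / (1 + np.exp(-(score + 0.8 * group - 0.4)))
    label = rng.random(n) < p_positive

    # Train on the biased history, with the group attribute visible to the model.
    X = np.column_stack([score, group])
    model = LogisticRegression().fit(X, label)
    pred = model.predict(X)

    # Bias out: the model's selection rate differs by group, even though the
    # qualification scores are identically distributed across groups.
    for g in (0, 1):
        print(f"group {g}: selection rate = {pred[group == g].mean():.2f}")

Even though the qualification scores are generated identically for both groups, the model picks up the historical favouritism in the labels and reproduces it in its own predictions.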

Bias In, Bias Out webinar video

Earn a Learner badge

Machine learning, a subset of artificial intelligence (AI), depends on the quality, objectivity, and size of the training data used to teach it. We Count encourages participants and learners to explore this concept, using an understanding of data gaps and biases to inform more equitable decisions and supports.
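
Because a model inherits whatever patterns are in its training data, one family of responses in the fairness-aware data mining literature works directly on that data before training. The sketch below illustrates one such pre-processing idea, reweighing, which weights each (group, label) combination so that group membership and the positive label look statistically independent in the reweighted training set; the synthetic data and names are again assumptions for illustration, not the webinar's own content.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 10_000
    group = rng.integers(0, 2, size=n)   # protected attribute
    score = rng.normal(0, 1, size=n)     # legitimate feature
    # Historically skewed labels, as in the earlier sketch.
    label = rng.random(n) < 1 / (1 + np.exp(-(score + 0.8 * group - 0.4)))

    # Reweighing: weight(g, y) = P(group = g) * P(label = y) / P(group = g, label = y),
    # so that group and label look independent in the weighted training data.
    weights = np.empty(n)
    for g in (0, 1):
        for y in (False, True):
            mask = (group == g) & (label == y)
            weights[mask] = ((group == g).mean() * (label == y).mean()) / mask.mean()

    X = np.column_stack([score, group])
    plain = LogisticRegression().fit(X, label)
    reweighed = LogisticRegression().fit(X, label, sample_weight=weights)

    # Compare selection rates by group with and without the pre-processing step.
    for name, model in (("unweighted", plain), ("reweighed", reweighed)):
        pred = model.predict(X)
        rates = [pred[group == g].mean() for g in (0, 1)]
        print(f"{name}: selection rate group 0 = {rates[0]:.2f}, group 1 = {rates[1]:.2f}")

On data like this, the reweighed model's selection rates for the two groups typically move much closer together than the unweighted model's, at some cost in how faithfully it reproduces the historical labels.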
You will learn:

  • How predictive algorithms and data mining can affect different populations in discriminatory ways
  • How the specific data resources used to train machine learning models can reinforce bias and produce biased outputs

Learn and earn badges from this event:
  1. Watch the accessible Bias In, Bias Out webinar
  2. Apply for your Learner badge
