How AI And Machine Learning Addressed Transparency And Bias In 2020

Artificial intelligence and machine learning were ubiquitous in 2020. These technologies propelled forward advanced quantum computing systems, leading-edge medical diagnostics, and, especially, consumer electronics.


And the pandemic only sped up enterprise adoption of AI. In September 2019, IDC forecast that spending on AI technologies would grow more than two and a half times, to $97.9 billion, by 2023. Since then, the effects of COVID-19 have only increased AI's potential value. In McKinsey's State of AI survey, published in November 2020, half of respondents said their organizations had already adopted AI in at least one function.

AI is a centerpiece of the "new normal" taking shape in all our lives, but what exactly did that new normal look like in 2020?

In 2020, concern grew over how AI systems work and over the bias embedded within them. The Black Lives Matter movement brought the issue of bias to the forefront. In Anaconda's 2020 State of Data Science report, respondents identified the social impact of implicit bias in data as an urgent issue for both AI and machine learning; in fact, 27% of respondents called it their top concern.

In 2018, a recruiting algorithm at Amazon was flagged for penalizing applications that contained the word "women's." In 2019, Apple's credit card algorithm proved so biased against women that co-founder Steve Wozniak's wife was offered a credit limit one-tenth of his, despite the couple sharing all assets and accounts.

Then, as protests erupted across the U.S. and across the world in response to the murder of George Floyd, facial recognition quickly became a hot button issue.

In April, Washington state passed legislation demanding upfront testing, transparency, and accountability for facial recognition. The law permits government agencies to deploy facial recognition software only if an API is made available for testing of "accuracy and unfair performance differences across distinct subpopulations."

Then, in June, IBM discontinued its facial recognition products in response to the technology's use by law enforcement. "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms," IBM CEO Arvind Krishna wrote in a letter to Congress.

While AI and machine learning saw other transformations, 2020 will be remembered as the year ethics moved to the forefront of technology, when tech giants were made to answer for their complicity in bias and, in some cases, rose to the occasion with solutions.