Ofqual, the authority that regulates A-level exams in England, recently found itself mired in scandal. Unable to hold live exams because of Covid-19, it designed and deployed an algorithm that based students’ grades partly on the historical performance of the schools they attended. The outcry was immediate, as students who were already disadvantaged found themselves further penalized by artificially deflated grades, their efforts disregarded and their futures thrown into disarray.
AI Fairness Isn’t Just an Ethical Issue
There is often an assumption that technology is neutral, but the reality is far from it. Machine learning algorithms are created by people, all of whom have biases. They are never fully “objective”; rather, they reflect the worldview of those who build them. And unless there is concerted intervention, algorithms will continue to reflect and reinforce the prejudices that hold society and business back.

We can preempt some of the damage by applying ethical AI design principles. We also need to ensure that our algorithms are explainable, auditable, and transparent. Just as we wouldn’t accept humans making major decisions that affect others without any oversight or accountability, we shouldn’t accept it from algorithms.

We need to stop treating the elimination of AI bias as merely a “nice thing to do” and start treating it as an economic and competitive imperative. Business leaders, take note: By making our AI systems fairer, we also make our organizations more profitable and productive.