Biases in AI


Artificial intelligence has emerged as a powerful tool in many aspects of our lives. There is hardly an area that an AI algorithm does not touch: understanding our sleep patterns, recognizing our voice, keeping tabs on our health, recommending which movie to watch, which song to listen to, which job to apply for, and even whom to date. AI algorithms are set to dominate many areas of human life in the decades to come. While there is no doubt that these algorithms have become an essential part of our lives, we need to guard against the biases that can get built into them.

An AI algorithm simply learns from data. For example, a credit card approval algorithm is fed the profile and performance data of past customers. It identifies the factors that most strongly predict default, such as age, income, debt-to-income ratio, and current utilization of credit lines, computes the probability of default for a new application based on its profile, and decides whether to approve or decline it. While applications screened by algorithms often show lower default rates than those screened by humans, there is a real risk that the algorithm develops a bias against a certain race or community. We might see the algorithm approving applications mostly from white applicants and rejecting those from Hispanic applicants. Even if white applicants historically showed better credit behaviour than Hispanic applicants, the algorithm may pick up race (or a proxy for it) as a significant predictor of default, and that pattern then reinforces itself, ultimately leading to Hispanic applications being declined wholesale.
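A minimal sketch of this mechanism, using entirely synthetic data: the group label stands in for a protected attribute, the historical default gap is invented, and the "model" is a deliberately naive one that scores applicants by their group's past default rate — exactly the shortcut a data-driven system can take when the attribute (or a proxy) is among its features.

```python
import random

random.seed(42)

# Hypothetical historical portfolio: each record is (group, defaulted).
# Group membership stands in for a protected attribute; all numbers
# below are invented for illustration.
history = []
for _ in range(10_000):
    group = "A" if random.random() < 0.5 else "B"
    base_rate = 0.05 if group == "A" else 0.12  # assumed historical gap
    history.append((group, random.random() < base_rate))

# Naive model: predict an applicant's default risk as the historical
# default rate of their group.
rates = {}
for g in ("A", "B"):
    outcomes = [defaulted for grp, defaulted in history if grp == g]
    rates[g] = sum(outcomes) / len(outcomes)

CUTOFF = 0.08  # approve only if predicted default rate is below this
decisions = {g: ("approve" if rates[g] < CUTOFF else "decline") for g in rates}
print(decisions)  # → {'A': 'approve', 'B': 'decline'}
```

Every group-B applicant is declined regardless of individual merit, and because declined applicants generate no new repayment data, the model never gets evidence to correct itself.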

Similarly, a Google PPC algorithm, while trying to maximize the chances of conversion, might show an advertisement for a high-paying corporate job only to men. This outcome is unjustified, yet the algorithm is merely trying to maximize clicks. It may be that historically more men clicked on such advertisements, or that a similar advertisement was once shown to both housewives and men and drew more clicks from men (assuming a housewife would not be interested in such a job). While the algorithm has simply learnt from the past and is trying to maximize conversions, the bias built into it reinforces itself, leading to such advertisements being shown only to men in the future.
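This feedback loop can be sketched with a toy greedy click-maximizer over two audience segments. The historical counts, segment names, and click-through rates are all invented; crucially, the true interest in the ad is assumed equal for both segments — only the historical data is skewed.

```python
import random

random.seed(7)

# Hypothetical history: men clicked more in the past (an assumed skew).
clicks = {"men": 30, "women": 5}       # past clicks (invented)
shows = {"men": 100, "women": 100}     # past impressions (invented)
TRUE_CTR = {"men": 0.3, "women": 0.3}  # real interest assumed equal

served = {"men": 0, "women": 0}
for _ in range(1000):
    # Greedy policy: always serve the segment with the best observed
    # click-through rate.
    seg = max(shows, key=lambda s: clicks[s] / shows[s])
    shows[seg] += 1
    served[seg] += 1
    if random.random() < TRUE_CTR[seg]:
        clicks[seg] += 1

print(served)  # → {'men': 1000, 'women': 0}
```

Because the under-served segment never gets impressions, its estimated click-through rate is frozen at the skewed historical value and the algorithm never discovers that women are equally interested. This is why production ad systems typically add an exploration component (e.g. epsilon-greedy or Thompson sampling) rather than acting purely greedily.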

These problems will become more significant as AI algorithms are employed in critical areas such as healthcare and driverless cars. As we move ahead, we must ensure that sufficient safeguards are built into AI systems.