The Darker Side of AI

Pranav Jeevan P

AI (Artificial Intelligence) models have become part of everyday use. Generative AI models, especially those built on the latest diffusion models capable of producing realistic images and videos, are becoming increasingly popular. Since the advent of ChatGPT from OpenAI, the use of LLMs (Large Language Models) has become standard practice in industry and everyday life. People perceive AI systems to be neutral and free from the subjective biases they associate with humans, and expect them to give objective results free from stereotypes. This has led to the rapid adoption of AI in medicine, recommendation systems, and even in criminal justice, surveillance, and policing.

Even though the use of AI can significantly improve our lives by reducing our burden in a variety of tasks, we should be cautious: like any other technology, AI comes with its pitfalls. It causes representational harm by shaping the narratives we construct about certain social groups, amplifying stereotypical views, degrading their social status, defining the status quo, and manufacturing unwarranted justifications for oppressive practices.

AI models are only as good as the data they are trained on, and they carry all the biases that come with those datasets. A recent example of this bias was seen in images generated by AI models such as Stability AI’s Stable Diffusion and OpenAI’s DALL-E. Images generated for prompts asking for high-paying jobs were dominated by subjects with fairer skin tones, while subjects with darker skin tones were more common in images generated by prompts for low-paying jobs. A clear gender bias was also observed: images for high-paying jobs like “politician,” “lawyer,” “judge,” and “CEO” were dominated by male subjects, while images for jobs like “nurse,” “teacher,” and “housekeeper” were dominated by female subjects. When the average color of all the pixels in the images generated for a given prompt was used to create an “average face,” it showed that the AI model painted a picture of the world in which certain jobs belong to certain groups and not others.
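The “average face” technique described above is simple to approximate: generate many images for the same prompt and average them pixel by pixel. The sketch below illustrates the idea under assumed folder names and file formats; it is not the code used in any particular analysis.

```python
# Minimal sketch of the "average face" idea: average the pixel values of many
# images generated for one prompt. Folder names are hypothetical, for
# illustration only.
from pathlib import Path

import numpy as np
from PIL import Image


def average_face(image_dir: str, size=(256, 256)) -> Image.Image:
    """Average all PNG images in a directory into a single composite image."""
    paths = sorted(Path(image_dir).glob("*.png"))
    if not paths:
        raise ValueError(f"no images found in {image_dir}")
    # Resize every image to a common size and stack them as float arrays.
    stack = np.stack([
        np.asarray(Image.open(p).convert("RGB").resize(size), dtype=np.float64)
        for p in paths
    ])
    mean_pixels = stack.mean(axis=0)  # per-pixel average across all images
    return Image.fromarray(mean_pixels.astype(np.uint8))


# Hypothetical usage: one composite per prompt.
# average_face("outputs/judge").save("average_judge.png")
# average_face("outputs/fast_food_worker").save("average_fast_food_worker.png")
```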

Since women are underrepresented in high-paying jobs, it is natural to expect that the data used to train these models carries those same biases. What is surprising is that the models exaggerate these biases in their generated results. For example, Stable Diffusion shows women in only 3% of the images generated for the prompt “judge,” while 34% of US judges are women. This shows that these models not only reflect the biases in the data, they exaggerate them and make them worse. This exaggeration was also visible when the model overrepresented people with darker skin tones in low-paying jobs. For example, the model generated images with darker-skinned subjects 70% of the time for the prompt “fast-food worker,” even though 70% of fast-food workers in the US are White. The amplification of both gender and racial stereotypes by Stable Diffusion further marginalizes black women, who sit at the intersection of both these biases. More than 80% of the images generated for the prompt “inmate” were of people with darker skin, even though non-white people make up less than half of the US prison population. The analysis of images generated with Stable Diffusion found that it takes racial and gender disparities to extremes, even worse than those found in the real world.
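One rough way to see this exaggeration is to compare a group’s share of the generated images with its real-world share. The sketch below is purely illustrative: the function is hypothetical and the numbers are the percentages quoted above, not the methodology of the study being described.

```python
# Rough illustration of bias amplification: compare a group's share in
# generated images with its real-world share. Figures are the ones quoted
# above; the function itself is a hypothetical illustration.

def representation_ratio(generated_share: float, real_world_share: float) -> float:
    """Ratio < 1 means under-representation, > 1 means over-representation."""
    return generated_share / real_world_share


# Women generated for the prompt "judge": 3% of images vs. 34% of US judges.
print(representation_ratio(0.03, 0.34))   # ~0.09, i.e. about 11x under-represented

# Darker-skinned subjects for "fast-food worker": 70% of images, while at most
# ~30% of US fast-food workers are non-White (since 70% are White).
print(representation_ratio(0.70, 0.30))   # ~2.3x over-represented
```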

Even though far-right extremists such as white supremacists have committed three times more terrorist attacks in the US than radical Islamic extremists, when prompted to generate images of a “terrorist,” the model consistently generated men with dark facial hair, often wearing head coverings, clearly pointing to stereotypes of Muslim men. Perpetuating such stereotypes and misrepresentations through imagery can pose significant educational and professional barriers for individuals from marginalized communities.

Also, since these models are mostly trained on datasets in which the majority of people are white, they fail to generate images of people belonging to other racial groups. The same behaviour was visible when face-recognition AI misclassified other racial groups (Google’s image classification system once labelled black people as gorillas). Using AI systems with such bias in, for example, self-driving cars will eventually result in accidents in which they are more likely to hit a black person than a white one.

Most AI models are black boxes that only provide a decision and fail to provide the reasoning for arriving at it. Using these models in the criminal justice system, where good, unbiased datasets are difficult to create, will lead to further harm to marginalized and oppressed communities. In the United States, black people are much more likely to be arrested on drug charges than white people. This does not mean that black people are more likely to commit crimes, but that they are more likely to be arrested. Any inferences about crime drawn from such data will necessarily repeat and reinforce the injustices that produced the data in the first place. Models trained on historical data can thus lead to unjust outcomes, especially in the criminal justice system, where, for example, a text-to-image AI can generate sketches of suspected offenders that lead to wrongful convictions, or racial biases in the training data can lead to harsher punishments for marginalized communities. Biased AI systems, like facial-recognition tools, are already being used by thousands of US police departments and have led to wrongful arrests.

Recommendation systems that rate the CVs of potential applicants, for example, can easily exclude candidates based on their gender, race, caste, or religion because of this inherent exaggeration of bias. Amazon’s internal AI system, built in 2014 to streamline recruitment, rated candidates in a gender-biased way. In 2018, the AI system of the credit institution Svea Ekonomi denied credit to individuals based on their gender, mother tongue, and rural residence. These AI systems contribute to allocative harms, where individuals are made worse off in terms of the resources, services, and opportunities available to them.

Even though the technology is publicized as objective and free of human intervention, these AI models require a large amount of human labour and many human choices, from creating the dataset to design, deployment, interpretation, and maintenance, and each of these choices can introduce biases into the system.

ChatGPT, which was celebrated as one of 2022’s most impressive technological innovations, relies on massive supply chains of human labor and scraped data, most of which is unattributed and used without consent. To make the model less toxic, OpenAI outsourced the work to workers in Kenya, Uganda, and India, who were made to view toxic textual data under exploitative working conditions for low pay. AI’s growth relies on such underpaid and hidden human labor from third-world countries, even as that work fuels these billion-dollar companies.

The use of such models in a hierarchical society like India, where menial jobs are earmarked for marginalized communities and where the criminal justice system hunts certain communities, will only exacerbate their marginalization. Experts predict that more than 90% of content on the internet could be generated by AI within a few years, further adding to the pandemic of fake news and hate speech. Unless we ensure that AI technologies are fair and representative, especially as they become more widely adopted, they will keep aiding the oppression of underprivileged communities.

“All data is historical data: the product of a time, place, political, economic, technical, and social climate. If you are not considering why your data exists, and other datasets do not, you are doing data science wrong”.

~ Melissa Terras (2019)

~~~

Pranav Jeevan P is currently a PhD candidate in Artificial Intelligence at IIT Bombay. He earlier studied quantum computing at IIT Madras and robotics at IIT Kanpur.
