
Is AI a Threat to Gender Equality?

AI touches all corners of our lives, from social media recommendations to self-driving cars to facial recognition, and its influence is only growing. But what happens when AI is biased? What happens when AI learns to discriminate against people based on race, religion, or gender? As it turns out, AI already does, and it is putting women in particular at risk by reinforcing gender stereotypes and threatening their wellbeing.


Gender Bias: Case Studies

Hiring Algorithms

Manually sifting through thousands of resumes and deciding who to grant an interview sounds both exhausting and inefficient. As a result, more and more companies, including Google, Microsoft, and Amazon, have turned to hiring and firing algorithms to automate the process, especially in light of the pandemic, which has accelerated digitization. Yet even though the use of AI in hiring and firing may seem reasonable and harmless, numerous studies have found that these algorithms discriminate against women.


The most infamous example of such gender bias comes from Amazon’s hiring algorithm. Developed in 2014, the company’s tool gave applicants a score from one to five stars; in theory, the highest-scoring applicants would be the best-suited candidates and could be hired straight from the top of the list. After a year of using the tool, however, Amazon found that the algorithm penalized resumes containing the word “women’s,” favored resumes with words that appear more often on men’s resumes, such as “executed,” and overall significantly favored male candidates over female ones.


Why was Amazon’s algorithm so biased? To teach the algorithm to screen candidates on its own, without any human supervision, the developers trained it on resumes submitted to Amazon over the previous ten years. Most of those resumes came from men, a reflection of the tech industry’s male-dominated makeup. Because it was trained on biased data, the algorithm itself learned to be biased. Amazon soon stopped using the tool, though many companies still rely on hiring algorithms of their own. (2)
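
The mechanism is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not Amazon’s actual system: a classifier is trained on invented historical hiring decisions that skew against resumes mentioning “women’s,” and it learns to penalize that feature on its own.

```python
# Minimal illustration of how a model trained on skewed historical hiring
# data reproduces that skew. Data and features are invented for demonstration;
# this is not Amazon's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: proxy for gender (1 = resume mentions "women's", e.g. a women's club or team).
# Feature 1: qualification score, identically distributed for everyone.
mentions_womens = rng.integers(0, 2, n)
qualification = rng.normal(0, 1, n)

# Historical labels: past recruiters hired qualified applicants, but hired
# those whose resumes mention "women's" far less often.
hired = ((qualification > 0) & ((mentions_womens == 0) | (rng.random(n) < 0.3))).astype(int)

X = np.column_stack([mentions_womens, qualification])
model = LogisticRegression().fit(X, hired)

# The learned weight on the "women's" feature is strongly negative: the model
# has absorbed the historical bias rather than pure qualification.
print(model.coef_)
```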


Facial Recognition

Studies such as MIT and Microsoft’s Gender Shades project have found that computer vision systems, such as facial recognition, misidentify women and minority groups at significantly higher rates than they do men: the Gender Shades project found that darker-skinned females were the most misclassified group, with an error rate of about 34.7%, while the highest error rate for lighter-skinned males was only 0.8%. These error rates not only indicate large biases but also raise concerns about accountability, privacy, and security, especially given facial recognition’s growing role in law enforcement. (3)
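
The core of that methodology is straightforward to reproduce: instead of reporting a single overall accuracy number, evaluate the same predictions separately for each demographic subgroup. A rough sketch with made-up labels and predictions:

```python
# Disaggregated evaluation: compute the error rate per demographic subgroup
# rather than one overall number. Labels, predictions, and group names are
# invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":     ["darker_female", "darker_male", "lighter_female", "lighter_male"] * 3,
    "true":      ["F", "M", "F", "M"] * 3,
    "predicted": ["M", "M", "F", "M", "F", "M", "F", "M", "M", "F", "F", "M"],
})

# The overall error rate hides the disparity...
overall_error = (df["true"] != df["predicted"]).mean()

# ...while grouping by subgroup exposes it.
per_group_error = (
    df.assign(error=df["true"] != df["predicted"])
      .groupby("group")["error"]
      .mean()
)

print(f"overall error: {overall_error:.2f}")
print(per_group_error)
```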


Much of computer vision’s gender and racial bias comes from a lack of representation in data. Minority groups have far more restricted access to technology, which is becoming increasingly crucial to how data is gathered. According to a 2017 study by the International Telecommunication Union, women worldwide were 12% less likely to use the Internet than men, and as much as 32% less likely in developing countries. Combined with white privilege, this means datasets end up consisting mainly of white men, producing models that struggle to recognize women and minority groups. Companies that use facial recognition, such as Google, have since decided to drop gender labeling and other forms of labeling from their systems altogether, removing the symptom without addressing the underlying problem.
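
A practical first step for any team building such systems is simply to audit how a dataset is distributed across demographic groups before training on it. The sketch below uses hypothetical column names and data; a real dataset would need its own demographic annotations.

```python
# Quick representation audit: check how training examples are distributed
# across demographic groups before training. Columns and data are hypothetical.
import pandas as pd

faces = pd.DataFrame({
    "gender":    ["male", "male", "male", "male", "female", "male", "female", "male"],
    "skin_tone": ["lighter"] * 6 + ["darker"] * 2,
})

# Share of each (gender, skin tone) combination in the dataset.
representation = (
    faces.groupby(["gender", "skin_tone"])
         .size()
         .div(len(faces))
         .rename("share")
)
print(representation)

# A heavily skewed table like this one is a warning sign that the resulting
# model will perform worse on the under-represented groups.
```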


Natural Language Processing

Another area of machine learning with concerning levels of gender bias is natural language processing (NLP), which allows systems like Google Translate, Alexa, and Siri to understand human speech and text. Gender biases in such systems have been found to reinforce gender stereotypes and, inevitably, reflect the biases present in modern society. For instance, in the past, Google Translate would take a sentence with a gender-neutral pronoun and, when translating it into a language without gender-neutral pronouns, return a sentence with a gender-specific pronoun, often chosen along stereotypical lines: the gender-neutral Turkish pronoun “o,” for example, tended to become “he” in “he is a doctor” but “she” in “she is a nurse.” Google has since addressed this issue. (1)
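
These stereotypical associations are easy to observe in the word embeddings that underlie many NLP systems, because the embeddings are learned from large text corpora that carry society’s biases. A small sketch using pretrained GloVe vectors through gensim; which words come back depends on the corpus and model, so the output should be read as illustrative rather than definitive:

```python
# Probing gender associations in pretrained word embeddings. The exact
# neighbors depend on the corpus and model; this illustrates the probing
# technique, not a definitive measurement.
import gensim.downloader as api

# Downloads a small pretrained GloVe model on first use (roughly 130 MB).
vectors = api.load("glove-wiki-gigaword-100")

# The classic analogy probe: "man is to doctor as woman is to ...?"
print(vectors.most_similar(positive=["woman", "doctor"], negative=["man"], topn=5))

# Occupations often sit closer to one gendered pronoun than the other.
for job in ["engineer", "nurse", "programmer", "homemaker"]:
    gap = vectors.similarity(job, "he") - vectors.similarity(job, "she")
    print(f"{job}: similarity(he) - similarity(she) = {gap:+.3f}")
```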


GPT-3, regarded as one of the most advanced language models to date, has also been found to perpetuate gender bias. A 2020 research paper posted on Cornell University’s arXiv found that of 388 selected occupations, GPT-3 associated 83% with a male job-holder, and jobs that typically require high levels of education, such as medical occupations, were also heavily associated with men. NLP was also at the root of the bias in Amazon’s hiring algorithm. As NLP advances to the point where machines can write entire news articles on their own, it is imperative that we be able to identify and mitigate all forms of algorithmic bias.
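
The occupation test is straightforward to approximate with an openly available language model. The sketch below uses GPT-2 through the Hugging Face transformers library as a stand-in for GPT-3 (which cannot be downloaded and run locally), with a much smaller occupation list and a simplified prompt compared to the paper’s setup:

```python
# Approximate the occupation-association probe with an open model (GPT-2)
# standing in for GPT-3. The occupation list and prompt template are
# simplified illustrations, not the paper's exact setup.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def gender_gap(occupation: str) -> float:
    """Return log P(" He") - log P(" She") as the next token after a prompt."""
    prompt = f"The {occupation} was very good at the job."
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # next-token distribution
    log_probs = torch.log_softmax(logits, dim=-1)
    he = tokenizer.encode(" He")[0]
    she = tokenizer.encode(" She")[0]
    return (log_probs[he] - log_probs[she]).item()

for job in ["doctor", "nurse", "engineer", "receptionist"]:
    gap = gender_gap(job)
    print(f"{job}: gap = {gap:+.2f} (leans {'male' if gap > 0 else 'female'})")
```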


Why is AI Biased?

The most prominent source of bias in all of the above examples is the data itself: AI systems are biased because the world and the people who build them are biased. Even after carefully sifting through and preparing data points, researchers can still end up with biased datasets that are accurate yet simply reflect an unjust society.


Aside from society itself, there are numerous pathways through which both intentional and unintentional bias can enter a system. Part of the gender bias can be attributed to unconscious bias and the lack of diversity in the male-dominated tech industry: a report from The Alan Turing Institute this year found that only 22% of professionals in AI and data science are women, and numerous other studies point to the benefits of diversity in the workplace. Data collection methods such as surveys and interviews can also introduce bias if they are misleading or exclude certain groups, an oversight made all the easier by unequal access to the Internet and technology. Without representation, excluded groups can face life-threatening consequences, something the medical field knows well given its history of excluding women from studies. Bias can also enter an algorithm when its purpose is defined, when data is selected, and when data is labeled, all of which depend on human discretion.


How Can We Combat AI Bias?

Combating AI bias is a daunting task. To start, we must acknowledge the harmful implications of algorithmic bias and advocate for healthier data practices. Data Feminism by Catherine D’Ignazio and Lauren Klein, for example, outlines data practices that confront power structures, challenge the gender binary, and underscore the importance of empathy and feminism in the pursuit of justice. We must also each do our part to educate ourselves and our peers about the consequences of gender bias both inside and outside of the AI field, and we must listen to and amplify the voices of women and marginalized groups. Machine learning developers in particular can build diverse teams, carefully assess datasets for over- or under-representation, and support women and marginalized groups in their work. In an increasingly AI-driven world, it is crucial that we make today’s AI systems fair and just so that future generations can live in a fair and just world.

