In her book “Weapons of Math Destruction,” Cathy O’Neil (2016) describes how algorithms and data science rule our world. When we browse online for new clothes, pick a movie to stream on Netflix, or apply for a car loan, an algorithm likely plays a crucial role in the outcome. O’Neil (2016) claims that while trying to make our lives more efficient, these algorithms have caused destruction and threatened democracy along the way. She asserts that algorithms are not always unbiased and objective: they often absorb the biases of the humans who designed them, whether or not those designers are aware of it. O’Neil states that “while promising efficiency and fairness, they [WMDs] distort higher education, drive up debt, spur mass incarceration, pummel the poor at nearly every juncture, and undermine democracy.”
In today’s world, the complex mathematical formulas, or algorithms, used to make decisions such as who gets hired, how police resources are deployed, what insurance rates people pay, and who gets a mortgage are responsible for causing inequality to some extent. O’Neil (2016) explains that for those living in poverty, it is becoming more and more dangerous to live in the world of algorithms, or, as she calls them, weapons of math destruction (WMDs). She further explains that poor people are more likely to have bad credit and to live in high-crime neighborhoods. Once they are in the system of WMDs, these people are often targeted with advertisements such as those for subprime loans with unfavorable conditions and high interest rates, as well as advertisements for for-profit colleges (O’Neil, 2016). They are also more likely to be targeted for arrest and convicted to longer sentences, making them unfavorable candidates for future jobs and causing their credit scores to plunge. All in all, these algorithms and the system fueled by data science work against them, contributing to increased inequality in society.
Dr. Latanya Sweeney, director of the Data Privacy Lab at Harvard University, conducted a study of 120,000 internet search ads delivered by Google AdSense and found cases of racial bias and discrimination in ad delivery. In this study, Sweeney (2013) performed searches for different names on Google and analyzed the advertisements that appeared alongside the results. She found that when a search was performed on racially associated names, more advertisements related to arrest records appeared for black-identifying names than for white-identifying names. She states, “A greater percentage of ads having ‘arrest’ in ad text appeared for black identifying first names than for white identifying first names in searches on Reuters.com, on Google.com, and in subsets of the sample” (Sweeney, 2013). She further describes how Google AdSense receives several ad templates from an advertiser for the same search term, and over time the AdSense algorithm learns which template should appear more often by assigning each one a weight based on how frequently viewers click it. The template that is clicked more often than the others gains more weight and is displayed more frequently. Unfortunately, viewers’ bias is thereby absorbed by the AdSense algorithm over time, and the ads shown alongside search results often display inequality and human bias. The consequences of this discrimination can be very damaging. For instance, if an employer running a background check on a prospective hire looks up the applicant’s name on Google, such arrest ads may hamper the applicant’s chances of getting the job and affect their future career prospects.
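The feedback loop Sweeney describes can be illustrated with a toy simulation. This is only a minimal sketch of click-weighted ad selection under assumed parameters, not Google’s actual algorithm: two hypothetical ad templates start with equal weight, viewers are assumed to click the “arrest” template slightly more often, and each click reinforces the weight of the template that was shown.

```python
import random

def simulate_ad_weights(click_bias=0.7, rounds=10_000, seed=42):
    """Toy model of click-weighted ad delivery (an assumption for
    illustration, not Google's real system). Two ad templates start
    with equal weight; each time a shown template is clicked, its
    weight grows, so it is shown more often in later rounds.

    click_bias: assumed probability a viewer clicks the "arrest"
    template when shown (vs. 1 - click_bias for the neutral one).
    """
    rng = random.Random(seed)
    weights = {"arrest": 1.0, "neutral": 1.0}
    click_prob = {"arrest": click_bias, "neutral": 1.0 - click_bias}
    for _ in range(rounds):
        # Show a template with probability proportional to its weight.
        shown = rng.choices(list(weights), weights=list(weights.values()))[0]
        # A click reinforces the shown template's weight.
        if rng.random() < click_prob[shown]:
            weights[shown] += 1.0
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

shares = simulate_ad_weights()
# Even though both templates start equal, the template viewers click
# more ends up dominating delivery: the algorithm absorbs their bias.
```

Nothing in the code encodes race; the skew emerges purely from the reinforcement loop, which is the point of Sweeney’s explanation.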
[Figure omitted. Source: Sweeney (2013)]
This study makes me wonder whether algorithms will ever be objective enough not to discriminate against diverse groups of people. Will they be able to avoid absorbing the biases and discriminatory behavior of human beings? The prejudice may not necessarily be embedded in the algorithm itself, but it is present in the massive amounts of big data from which the algorithm adapts and learns. With ever-developing technology, can we really expect algorithms to rise above the societal norms and biases reflected in their data?
O’Neil, C. (2016). Weapons of math destruction: how big data increases inequality and threatens democracy. New York: Crown.
Sweeney, L. (2013). Discrimination in online ad delivery. Queue, 11(3), 10. doi:10.1145/2460276.2460278