The caption “Black Women Against Tech” appears on a TikTok video of an automatic sink that cannot detect the hand of a Black woman yet readily detects a paper towel and dispenses water for it. At first encounter, the phrase may seem like making a problem out of nothing. Racism in technology, however, is a very real issue with dangerous implications for a criminal justice system that is already biased against racial minorities. In the comments under the video, people describe being tired of explaining why sink sensors fail on their darker-skinned hands. While there is no literal sigh, their exhaustion is evident.
Racial bias in technology appears in numerous forms, each with its own problems. Not only can artificial intelligence (AI) systems be trained to be racist, but data can encode systemic racism, and error rates disproportionately affect people of color, sometimes leading to innocent people being jailed for crimes they did not commit. Racism is embedded in AI and facial recognition software, and its impact can be fatal.
According to a NIST study, American face recognition algorithms show high rates of false positive matches for Asian and African-American faces. This is in contrast with face recognition algorithms developed in Asian countries, which produced fewer false positives. American algorithms are consistently inaccurate when matching African-American, Asian, and especially Native American faces. False positive rates are worst of all for African-American women, putting them at the highest risk of being falsely accused of a crime. In fact, a study conducted by MIT researcher Joy Buolamwini found that gender classification systems sold by major companies such as Microsoft had error rates as much as 34.4% higher for darker-skinned females than for lighter-skinned males. Microsoft later reduced its error rate to below 2%, but we must ask why the error rate was so high to begin with. Who is not taken into consideration when developing algorithms? That these systems improved only after external audits suggests that developing inclusive algorithms for the benefit of the whole population is an afterthought, and that technology must be held accountable through such audits.
Figure from the Gender Shades Project.
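To see why an aggregate accuracy number can hide this kind of disparity, here is a minimal sketch of the kind of disaggregated audit the Gender Shades project popularized. The records and group labels below are hypothetical, not data from the study; the point is only that error rates have to be computed per subgroup, because a single overall figure can mask a large gap between the best- and worst-served groups.

```python
from collections import defaultdict

# Hypothetical prediction records for illustration only:
# (skin_type, gender, prediction_correct)
records = [
    ("darker", "female", False),
    ("darker", "female", True),
    ("darker", "male", True),
    ("lighter", "female", True),
    ("lighter", "male", True),
    ("lighter", "male", True),
]

def error_rates_by_group(records):
    """Return {(skin_type, gender): error_rate} for each subgroup."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for skin_type, gender, correct in records:
        group = (skin_type, gender)
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

rates = error_rates_by_group(records)
for (skin_type, gender), rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{skin_type:>7} {gender:>6}: {rate:.1%} error")

# Audits report the spread across groups, not just the mean, because
# the overall error rate alone would hide exactly this disparity.
gap = max(rates.values()) - min(rates.values())
print(f"gap between worst- and best-served groups: {gap:.1%}")
```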
A growing body of publications has forced companies to address bias in facial recognition. Yet even when accurate, these systems can still infringe upon people’s civil liberties. An investigation found that Amazon had been pitching its facial surveillance platform to ICE to aid its violent crackdown on migrant communities. The dangerous implications of such surveillance include the ability to identify the ethnicity of faces, developed as part of a partnership with the New York Police Department. Surveillance is actively aiding discrimination against minorities and migrants, subtly demonstrating the racist undertones in technology that affect us without our knowledge. What could ICE and the police be doing with this surveillance information? In programs where corruption already runs rampant, whatever it is cannot be good. There have been calls to regulate these surveillance systems, and Google has suspended the sale of its facial recognition technology until it can prevent abuse and the weaponization of AI. Some cities, such as Boston and San Francisco, have banned police use of face recognition. Why? Surveillance, whether or not it is used with racist intentions, can widen the pre-existing inequalities that plague our society and deprive us of progress.
Marginalized populations, such as undocumented immigrants targeted by ICE or Muslim citizens surveilled by the NYPD, are harmed when facial recognition software helps solidify existing racist patterns of law enforcement. Another example of a discriminatory practice is the NYPD’s database of “gang affiliates,” which is 99% Black and Latinx and filled with people who gave no pre-existing cause to be suspected of gang affiliation.
The examples are endless, but where are the solutions? As our society becomes increasingly dependent on new forms of technology, these issues of racism must be confronted before such systems are deployed on a large scale.
