"Cyber racism" is a term used to capture the phenomenon of racism online. The term encompasses racist rhetoric that is distributed through computer-mediated means and includes some or all of the following characteristics: ideas of racial uniqueness, racist attitudes towards specific social categories, racist stereotypes, hate speech, nationalism and common destiny, racial supremacy, superiority and separation, conceptions of racial otherness, and an anti-establishment world-view.[1][2][3][4][5] Racism online can have the same effects as offensive remarks made offline.[6]
The term "cyber racism" was coined by Les Back in 2002.[7] Cyber racism has been interpreted as more than a phenomenon of racist acts displayed online. According to the Australian Human Rights Commission, cyber racism involves online activity that can include "jokes or comments that cause offence or hurt; name-calling or verbal abuse; harassment or intimidation, or public commentary that inflames hostility towards certain groups".[8]
Though there have been studies of, and strategies for, thwarting and confronting cyber racism at the individual level, few studies have examined how cyber racism's roots in institutional racism can be combated.[9] An increase in literature on cyber racism's relationship with institutional racism would open new avenues for research on combatting cyber racism at a systemic level.[10] For example, cyber racism's connections to institutional racism have been noted in the work of Jessie Daniels, a professor of sociology at Hunter College.[11]
Although some tech companies have taken steps to combat cyber racism on their sites, most are hesitant to act for fear of limiting free speech.[12] A Declaration of the Independence of Cyberspace, a document that declares the internet a place free from control by "governments of the industrial world",[13] continues to influence and reflect the views of Silicon Valley.
Online stereotypes can reinforce racist prejudice and lead to cyber racism. For example, scientists and activists have warned that use of the "Nigerian prince" stereotype to refer to advance-fee scammers is racist, arguing that "reducing Nigeria to a nation of scammers and fraudulent princes, as some people still do online, is a stereotype that needs to be called out".[14]
Racist views are common and often more extreme on the Internet due to the level of anonymity it offers.[15][16] In a 2009 book about "common misconceptions about white supremacy online, [its] threats to today's youth; and possible solutions on navigating through the Internet, a large space where so much information is easily accessible (including hate-speech and other offensive content)", City University of New York associate professor Jessie Daniels claimed that the number of white supremacist sites online was then rising, especially in the United States after the 2008 presidential election.[17]
The popularity of sites used by alt-right communities has allowed cyber racism to garner attention from mainstream media. For instance, the alt-right claimed the "Pepe the Frog" meme as a hate symbol after mixing "Pepe in with Nazi propaganda" on 4chan.[12][18] The association gained major attention on Twitter after a journalist tweeted about it. Alt-right users considered this a "victory" because it prompted public discussion of their ideology.
In her article "Rise of the Alt-Right",[12] Daniels explains how algorithms "speed up the spread of White supremacist ideology" by producing search results that reinforce cyber racism.[12] Daniels posits that algorithms direct alt-right users to sites that echo their views, allowing users to connect and build communities on platforms that place little to no restriction on speech, such as Reddit and 4chan. Daniels points to the internet searches of Dylann Roof, a white supremacist, as an example of how algorithms perpetuate cyber racism: she claims that his search for "black on white crime" directed him to racist sites that reinforced and strengthened his racist views.[12] Moreover, Latanya Sweeney, a Harvard professor, has found that algorithmically generated online advertisements are more likely to display ads suggestive of arrest records alongside African American-sounding names than alongside Caucasian-sounding names.
Daniels writes in her 2009 book Cyber Racism that "white supremacy has entered the digital era", further challenging the idea that technology is "inherently democratizing".[10] Yet, according to Ruha Benjamin, research on cyber racism has concentrated on "how the Internet perpetuates or mediates racial prejudice at the individual level rather than analyze how racism shapes infrastructure and design."[10] Benjamin goes on to stress the importance of investigating "how algorithms perpetuate or disrupt racism…in any study of discriminatory design."[10]
In Australia, cyber racism is unlawful under s 18C of the Racial Discrimination Act 1975 (Cth). As it involves a misuse of telecommunications equipment, it may also be criminal under s 474.17 of the Criminal Code Act 1995 (Cth).[19] State laws in each Australian state make racial vilification unlawful, and in most states serious racial vilification is a criminal offence. These laws also generally apply to cyber racism; for example, s 7 "Racial vilification unlawful" and s 24 "Offence of serious racial vilification" of the Racial and Religious Tolerance Act 2001 (Vic) both explicitly state that the conduct referred to may include use of the Internet.[20]
In May 2000, the League Against Racism and Anti-Semitism (Ligue Internationale Contre le Racisme et l'Antisémitisme, LICRA) and the Union of French Jewish Students (UEJF) brought an action against Yahoo! Inc., which hosted an auction website selling items of Nazi paraphernalia; Yahoo! France provided the link through which the content was accessed.[21]