Everyone's A Little Bit Racist, Especially On The Internet
Internet culture tends to reward prejudice and suppress equity. And AI, unfortunately, is only amplifying this.
Content Warning: This piece discusses suicide, online harassment, racism, and anti-Black violence. Please proceed with care.
If you have been following me, you know I enjoy the internet, and the culture that comes along with it.
My computer and I have been best friends since I was small, and through the internet I discovered my first loves: writing, coding, and fandom.
Especially fandom: my older cousin would rip anime episodes from shows like Witch Hunter Robin, Serial Experiments Lain, and Ergo Proxy in between SEGA games.
He would go to conventions every year and tell me stories of long days and even longer nights of parties, seminars and all around fun.
The thing I loved most, however, when I was finally able to go to conventions myself, was the cosplayers.
It was finally cool to dress up as your favorite characters on a day other than Halloween, and with the seriousness of a true actor or actress.
But...only if you were white.
Now, my cousin told me cosplay was for everyone.
But when I was maybe twenty-one, standing in line at my first convention, heart pounding inside a custom outfit I had fashioned myself, three white girls stood in front of me.
One of them turned around, looked me up and down, and said, “Cute. But you know anime doesn’t have Black people, right?”
I was flabbergasted. And confused. Anime is for everyone. Right?
That didn’t matter to them though. What mattered was that I’d broken an unspoken rule. Fandom was supposed to be a sanctuary, a place where outcasts could belong. But even among the misfits, there were hierarchies. And I was at the bottom.
Years later, a 19-year-old who went by Ash (they/them) died by suicide, because the bottom was not enough for those racists; nothing short of a grave would satisfy them.
Tragedies like this aren’t isolated. They are patterns. Like algorithms. And that pattern is now being automated, amplified, and distributed at a global scale by AI.
Online hate campaigns rarely start with slurs. They start with “jokes.”
I learned this the hard way when I saw Black people post their cosplays and nerdy interests online.
The comments weren’t always overtly racist.
They were “ironic.”
Pepe the Frog with a noose.
A “joke” about how Black men should just cosplay as Piccolo and Black women as Canary from Hunter X Hunter. Mind you, Canary (as cool as the character is) was written to be a servant.
If I got upset, I wasn’t “getting the joke.” If I stayed silent, the jokes got worse.
This wasn’t an accident. It was a strategy.
White men openly admitted to using “non-ironic Nazism masquerading as ironic Nazism” to spread white supremacist ideology.
The protective layer of irony meant you couldn’t challenge them without being accused of taking things too seriously. It was gatekeeping disguised as humor.
And the consequences of constant exposure to this? They’re not virtual. They’re written in the bodies and minds of people who have to witness it.
I think of the SJWs (yep, remember that term?) on Tumblr, warning young people about how easy it was to be sucked into white supremacist ideology on social media. The code words and phrases meant to beat the “algorithm” were not apparent to a teenager trying to read Hamilton fanfiction online, but the fight against bigotry was serious to us.
These battles were waged mostly by young people, too, and as furiously as the shipping wars over our favorite TV couples.
But the same infrastructure that let non-white users build digital safe harbors and enclaves on Tumblr also enabled Stormfront and 4Chan.
Far from curing bigotry, the internet gave it a global distribution network.
Racist rhetoric wasn’t confined to fringe forums anymore. It was in the mainstream public sphere.
But here’s the shift that matters: racism on the internet went from being posted to being processed.
In the early days, spreading hate required active human intent. Someone had to type the message, upload the file, press send.
But with algorithmic curation, systems began to actively amplify and distribute this content based on engagement metrics. The algorithm didn’t care if the engagement was outrage or agreement.
Attention was attention. And hate, it turns out, is very engaging.
This set the stage for something worse: the era of AI acceleration, where making hate more efficient became a feature, not a bug.
Consider the Kenyan content moderators hired to filter toxic content from AI training data. These workers, paid less than $2 an hour, were forced to read graphic descriptions of hate speech and anti-Black violence so that AI systems could learn what not to say.
Many developed severe PTSD.
Their trauma became the invisible foundation upon which “safe” AI was built.
Let that sink in. The psychological safety of AI users in the Global North is being purchased with the trauma and exploitation of Black and Brown labor in the Global South.
AI is not creating new forms of bigotry.
It’s taking the vast, historical archive of human prejudice available online and making it more efficient, scalable, and difficult to detect.
Generative AI models are trained on the poisoned well of the internet. They ingest massive, un-curated datasets containing the entire history of online racism, hate speech, and stereotypes.
And LLMs can be jailbroken to bypass safety filters, enabling bad actors to generate thousands of unique, coherent racist comments.
They can automate harassment campaigns.
They can manufacture a false consensus of bigotry.
This is not incidental to the AI supply chain. It is built into its foundation.
And yet, despite everything, a global movement for algorithmic justice is growing.
Change is possible. But it requires us to stop treating bias as a bug and start recognizing it as a feature we actively chose not to prevent.
Acknowledging the problem is not enough. Action is required from every part of our digital society.
For AI companies: End exploitative labor practices. Provide ethical working conditions, fair pay, and mental health support for content moderators. Your product’s safety cannot be built on someone else’s trauma. Invest in global safety beyond an English-centric approach. Embrace radical transparency with independent audits.
For online communities: Learn from how “ironic” racism co-opted nerd spaces. Robust moderation that protects marginalized members isn’t censorship. It’s a prerequisite for a healthy community. Practice bystander intervention. Refuse to let “jokes” serve as a cloak for bigotry.
For educators and technologists (and this is my commitment): I will teach the ethical and social implications of AI with the same rigor as the technical skills. I will teach people to dissect “common sense” narratives that dismiss systemic racism as individual prejudice.
Because racism is not a glitch. It’s the operating system.
Because that girl at the convention?
She wasn’t inventing a new form of cruelty. She was enforcing an old one. The same hierarchies that told her a Black girl couldn’t be a magical warrior princess are now encoded in algorithms that decide who looks like a CEO and who looks like an inmate.
AI is not creating new bigotry. It’s scaling, obscuring, and validating centuries of historical racism with unprecedented efficiency. It takes the prejudices embedded in our language, social structures, and media, and presents them back to us as objective, mathematical truth.
The challenge we face is not merely technical. It’s political and social.
If we do not actively and deliberately code for equity, if we do not center the needs and safety of the most marginalized, we will default to the historical inequity embedded in our data and our society.
The goal cannot be to patch the bugs.
The goal must be to dismantle the old system and rebuild a new one that centers Black liberation and human dignity over algorithmic efficiency.
I still cosplay, by the way. And I’m still here.