"Humans in the Loop, Africans in Limbo": The Exploitation of Black Bodies in techno-capitalism
Every time you scroll past a violent or offensive post on social media, a moderator has likely shielded you from its full impact.
But who are these digital gatekeepers?
Not the basement-dwelling caricatures you see on South Park or The Simpsons, that's for sure.
They are mostly educated family men and women trying to make ends meet.
They are also mostly Black Africans, and they are severely underutilized and even abused by the companies that employ them.
This exploitation isn’t just economic—it’s a modern iteration of historical patterns of commodifying Black bodies in service of a tech-driven world serving a global audience.
Many companies see these moderator jobs as a way to create better opportunities for Black people in technology.
Rather than pooling resources into preachy programs, grants, and initiatives that wrongly assume there are few Black engineers to be found, why not cultivate the talent that already exists?
But they are sorely missing the mark.
In the shadows of global techno-capitalism, content moderation jobs are increasingly outsourced to African countries, where Black workers face low pay, psychological trauma, and minimal labor protections.
These so-called altruistic companies don't understand the history and impact of the Black diaspora in technology and are exploiting Black labor yet again, this time on a new stage: artificial intelligence.
Big tech companies go to countries like Kenya, Uganda, and Rwanda to advertise jobs in a way that seems positive; they are often called "a ticket into the future".
For young Kenyan adults who are working to train AI chatbots every day, it is hell.
It is reported that these young adult workers make as little as $1.32 per hour.
The unemployment rate among young Kenyans is as high as 67 percent, and those desperate for jobs often do what they must to make ends meet--even if they are more than qualified.
Most of these jobs entail eight-hour-a-day shifts "labeling" images and videos to "teach" AI.
This work is very tedious and often grueling, yet extremely important.
To understand "labeling," one first has to understand what artificial intelligence is and how AI chatbot models work.
Artificial intelligence refers to a computer's ability to perform tasks that humans can: perceiving its environment, learning from it, and using that information to achieve a goal.
The AIs that we call ChatGPT, Gemini, and the like are large language models (or LLMs). Chatbots themselves have existed since the 1960s. The first of its kind was called ELIZA, an attempt by MIT researcher Joseph Weizenbaum to figure out how to program a computer to output information the way that a human would.
The problem? He scripted ELIZA to respond the way that Dr. Weizenbaum would, not the way a regular person, a diverse group of regular people, or even a diverse group of doctors would.
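To see how one person's assumptions get baked in, it helps to see how an ELIZA-style program works under the hood. The sketch below is a minimal illustration, not Weizenbaum's original script: the rules and responses are hypothetical, and the machine's entire "understanding" is just whatever patterns its author thought to write down.

```python
# A minimal ELIZA-style chatbot: hypothetical rules, not Weizenbaum's script.
# All of its "intelligence" is a hand-written table of patterns and replies,
# which is exactly how one author's worldview ends up as the machine's.
import re

RULES = [
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
    (r"\byes\b", "You seem certain."),
]

def respond(message: str) -> str:
    """Return the reply for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when no hand-written rule applies

print(respond("I feel tired"))  # Why do you feel tired?
print(respond("hello"))         # Please go on.
```

Whoever writes the rule table decides what the program can and cannot recognize, which is the essay's point: a single author's habits of speech become the system's limits.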
Humans will always be essential to the training data of AI models, at least in the generative phase.
For decades, limits on computational power, algorithmic sophistication, data storage, and hardware made advancing these models difficult but didn't stop the research.
These workers are not only labeling simple scenes, like sheep running across a field, but also training models to check whether cars are parked in the right places, to categorize the race of people walking down crowded streets, and to flag abnormalities in brain scans.
You can already see why some complain about the accuracy of these models: laypeople not trained in medicine shouldn't be interpreting X-rays.
Can you blame the humans doing the work behind the scenes? Of course not.
One of these men, featured in a recent 60 Minutes report, is a mathematics graduate with two kids. He was never trained to judge whether an abnormal scan showed a tumor or just a shadow.
Why are companies outsourcing these jobs anyway? To save money.
One of the main companies that used to outsource this work to Africa was Sama. Founded in 2008 by the late Leila Janah, Sama was supposed to be a solution for Africa, a way to encourage other companies to create tech jobs on the continent.
Janah graduated from Harvard in 2005 with a degree in African Development Studies and conducted fieldwork in Mozambique, Senegal, and Rwanda.
Her dream was to help the continent revitalize itself through the power of technology.
In a document about her company released by Harvard Business School, Janah traced her inspiration to her work in Ghana as an English teacher at 17.
She would often receive letters from her students asking for monetary assistance, in the form of school supplies and nicer things like Nintendo Game Boy systems.
She lamented the Western aid culture that conditioned people in the Global South to expect handouts rather than pursue work that often didn't even exist.
Her efforts after these formative years led her to create the non-profit Samasource.
Some of her other partners in the venture included:
Claire Hunsaker, who gave a keynote speech on the possibilities of Android mobile devices in Kenya driving new innovations.
Jill Isenstadt, who worked with The World Bank after leaving Samasource and co-wrote a document about the benefits of online outsourcing.
Rosalyn Mahanshin, who has worked with LinkedIn and Meta, has headed several other startups since her time at Samasource, and has served as a mental health advocate through her podcast Marginal.
Jen Cantwell, who served as Managing Director in East Africa for Sama.
Patricia Li, who served as Director of Delivery.
All of the major players of this company were non-Black, though most of their operations were based in Africa.
By 2019, the non-profit had transitioned to a hybrid model in which the non-profit arm became a stakeholder in a larger for-profit company.
But cracks were already beginning to show.
That same year, Daniel Motaung, a moderator who had tried to organize his colleagues, was fired; he would later sue Sama for the mistreatment of workers and failure to provide proper compensation.
One year later, Leila Janah died of cancer. Many of the people who had started Sama with her had already moved on.
These successors made a huge change at the advent of the generative AI chatbot boom of 2022-23.
Sama made a big push in the AI market and received contracts to outsource their services in content moderation.
But things quickly got out of hand.
The employees were promised wages that wouldn't even clear minimum wage in the United States but were lifelines to people who would otherwise have been unemployed.
These workers have to sift through content that is harmful and hurtful to the mind so that graphic and traumatic material doesn't come out of these machines as output. That included watching videos depicting pornography, gore, and suicide.
Facing grueling days and evenings without enough pay to support their families, seven employees filed a lawsuit against the company.
Historically, low-paying, high-risk, and emotionally taxing jobs have been disproportionately assigned to Black and Brown workers worldwide.
The outsourcing of content moderation to African countries reflects a modern extension of this colonial labor dynamic.
If these jobs were instead performed by white workers in wealthier countries, there would likely be stronger labor protections, higher pay, better mental health support, and increased public outrage over exploitative conditions.
For example, compare the plight of Kenyan AI content moderators to people in the United States who do similar work through platforms like DataAnnotation.
DataAnnotation pays $20 to $40 per hour on average, with some labeling jobs paying less and others paying more than $50.
On the Reddit pages describing this work, it sounds almost too good to be true, yet by most accounts it isn't. One Redditor described a job that amounted to labeling parts of a fun movie for minimal sexual scenes.
Not only is the pay good, but you can cash out weekly through a secure payment method, with funds deposited into your account immediately.
Companies like Sama and Scale AI's Remotasks, another remote outsourcing group, didn't even pay some workers after their contracts were up.
Remotasks doesn't operate in Kenya anymore for this very reason, but there is a branch that is now in neighboring Uganda.
Companies often justify outsourcing based on “cost-effectiveness,” but this rationale is intertwined with global racial inequalities.
African nations are targeted for such work precisely because Black workers there are economically marginalized and thus more vulnerable to exploitation.
The willingness of tech giants to exploit this economic disparity while failing to provide adequate wages, mental health support, and humane working conditions can be seen as racially discriminatory, even if not explicitly labeled as such.
African countries have deep reserves of talent and expertise, but these are not being used to enrich the continent as quickly as they could be.
There are many reasons, the chief being a lack of investment in tech infrastructure and education on the African continent.
The biggest investments now are by foreign governments and corporations, who often have ulterior and nefarious motives.
The marginalization and undervaluation of certain groups create a cycle of dependency and inequity.
African people provide services that make modern AI models more functional, but this isn’t acknowledged.
How can we change this?
Tech companies must ensure that content moderators receive living wages that reflect the psychological toll and critical nature of their work.
This includes fair compensation aligned with global standards rather than exploiting wage disparities in lower-income countries.
Living wages help break the cycle of poverty and ensure economic stability for workers and their families.
The labor of content moderators should not remain invisible.
Tech companies should publicly acknowledge the crucial role these workers play in keeping platforms safe.
This recognition can be integrated into transparency reports, marketing campaigns, and public-facing materials.
Valuing their contributions combats erasure and reshapes public narratives around digital labor.
Governments, NGOs, and international organizations must enforce global labor standards and hold tech companies accountable for violations.
This includes legally binding contracts, labor audits, and corporate social responsibility (CSR) initiatives.
Strengthening international labor protections can prevent companies from exploiting regulatory loopholes in different countries.
By implementing these solutions, the tech industry can move toward ethical labor practices that honor the dignity and humanity of Black workers, ensuring that technological progress doesn’t come at the expense of those who sustain it.