Image generated with AI
Artificial Intelligence | Editorial

Behind the AI magic: the hidden exploitation of millions of workers 💔🤖


You chat with ChatGPT, generate images with Gemini, automate tasks with Claude. Generative AI has become part of your daily routine. But have you ever thought about the hands that made these tools possible? Behind the illusion of autonomous technology lies a far more human reality—and one that’s infinitely less glamorous.

In a weathered high-rise in Nairobi, Kenya, Michael Geoffrey is crying. This 30-year-old has spent years training algorithms for tech giants. His job: viewing, labeling, and classifying thousands of images and texts per day. Scenes of extreme violence. Child sexual abuse material. Beheadings. Torture. "I'm an annotator, I work on their data, they make millions," he tells Swiss broadcaster RTS. "Why can't they ensure I live decently? Do they have to sacrifice me for their growth?"

This testimony isn't isolated. It reveals the existence of an invisible army of workers fueling the generative AI revolution. "We're like ghosts, nobody knows we exist even though we contribute to society's technological advancement," laments Oskarina Fuentes, a 35-year-old Venezuelan based in Medellín, Colombia.

The automation illusion: when AI hides the human 🤖

When OpenAI unveiled ChatGPT in November 2022, the world was dazzled. The conversational tool responded with impressive fluidity, avoided toxic remarks, and refused to generate violent or discriminatory content. The magic of artificial intelligence, we thought. The reality is quite different.

A TIME magazine investigation published in January 2023 revealed that OpenAI had employed Kenyan workers, paid less than $2 per hour, to "clean" ChatGPT's training data. Their mission: wade through tens of thousands of text fragments from the darkest corners of the internet—graphic violence, torture, child sexual abuse—and label them to teach the AI to recognize and avoid them.

"Without human intervention, it would be a disaster," explains Robert West, an EPFL professor and AI specialist. "There's a lot of content on the internet that we don't want ChatGPT to repeat: extremism, conspiracy theories, etc. So you have to tame the model." And to do that, you need humans. Lots of humans.

The director of EPFL’s Data Science Lab goes further: « If I had to estimate, I’d say that annotators and prompt writers invest more time than engineers and programmers » in these generative AI tools. In other words, behind every smooth interaction with ChatGPT, there’s more invisible human labor than computer code.

A booming market, plummeting conditions 📈

The global data annotation market was worth $3.77 billion in 2024. It’s expected to reach $17.1 billion by 2030, according to Grand View Research. This explosive growth relies on massive outsourcing to low-income countries: Kenya, Uganda, India, the Philippines, Colombia, Venezuela.

The numbers are staggering. In Kenya, Remotasks, a subsidiary of Scale AI (which works with OpenAI, Meta, Microsoft, and the Pentagon, among others), pays its annotators roughly $0.01 for a task that can take several hours. Oskarina Fuentes, in Colombia, works for five annotation platforms that pay her between 5 and 25 cents per task.

Ephantus Kanyugi, vice president of the Data Labelers Association in Kenya, doesn't mince words: it's "modern slavery." The 30-year-old has been classifying and labeling images to train algorithms since 2018.

« You have to spend the day looking at corpses, zooming in on wounds to outline them to help the AI identify these images, without any psychological support, » he recounts.

Working conditions are catastrophic. No stable contracts, no social protection, no paid leave. Workdays of 15 to 20 hours just to survive. Unpaid wages with no recourse. In March 2024, Remotasks suddenly cut off access to its platform for many Kenyan annotators without paying them what they were owed. Scale AI admits to a "reduction in activity" but insists all completed tasks were paid. The workers tell a different story.

The hidden cost of your favorite AI: trauma and exploitation 💔

« Watching pornography for eight hours, it’s really not a joke. After 4 or 5 days, my body completely shut down, » an anonymous annotator tells RTS. Content moderators and data annotators are exposed daily to the worst humanity produces.

Angela Chukunzira, a sociologist at the Mozilla Foundation, warns about mental health risks: « When workers are permanently exposed to very harmful content, they become desensitized. Some of them lose their humanity. And their sense of reality is also very altered. »

For tech companies, the calculation is simple: outsource this drudgery to countries where labor costs are low, and go through subcontractors to absolve themselves of any direct responsibility. OpenAI worked with Sama in Kenya. Meta, TikTok, Google, Tesla—all massively outsource these essential but thankless tasks.

Mercy Mutemi, a Kenyan lawyer, is pursuing multiple cases against Meta. Her accusations are extremely serious: « In Kenya, the AI sector—in terms of data annotation, content moderation, and algorithm training—relies on an exploitative model that involves two things: human trafficking or forced labor. »

These issues aren’t new. They’re part of a broader history of content moderation on the internet. Sarah T. Roberts, a professor and researcher, documented this phenomenon in her seminal book Behind the Screen: Content Moderation in the Shadows of Social Media. She analyzes how this human activity, often outsourced, has become an invisible pillar of digital platform operations, employing over 100,000 workers tasked with viewing, sorting, and deleting toxic content.

A glimmer of hope: the revolt of the invisible 🔥

Faced with this exploitation, workers are beginning to organize. In late 2023, Ephantus Kanyugi and nine other Kenyan annotators formed a collective. In January 2025, they founded the Data Labelers Association (DLA). Within a few months, the association already has 800 members.

« Most keep their membership secret because they fear repercussions from the platforms, » Kanyugi confides. The association is currently working on a code of conduct for employers, in collaboration with the Kenyan Ministry of Labor, the Ministry of ICT, and human rights organizations.

The code proposes concrete measures: fair compensation, formal employment contracts, rights to sick leave and maternity leave, freedom of association, mandatory breaks, and psychological support for those exposed to harmful content. CloudFactory, one platform, has already agreed to offer better conditions: longer contracts, better pay, reimbursed travel expenses.

In the United States, too, resistance is mounting. In September 2024, nearly 250 people working for GlobalLogic, a Google subcontractor training Gemini, were laid off after denouncing pay disparities and demanding better conditions. « They want docile data annotators, » fumes Andrew Lauzon, 31, a member of the Alphabet Workers Union.

Tech giants face their responsibilities ⚖️

Contacted by various media outlets, tech giants respond differently. Microsoft, Meta, and the Pentagon haven't responded to inquiries. OpenAI claims it no longer works with Scale AI and now has strict rules for its subcontractors. Anthropic, which develops Claude, says it requires partners like Surge AI to follow worker well-being rules and to pay rates of at least $16 per hour.

But are these statements enough? The reality on the ground tells a different story. Subcontractors multiply, and chains of responsibility are diluted. And meanwhile, millions of workers continue to feed generative AI in the shadows.

Professor Antonio Casilli, a sociologist specializing in digital labor, highlights a fundamental contradiction: « Data annotators are often between 18 and 30 years old and poorly paid despite high levels of education. They come mostly from low-income countries, although this activity is also growing in the United States or Europe, where pay is much higher. »

A striking example: in Switzerland, workers are recruited to "teach" AIs to speak Swiss French via the Outlier platform, with advertised rates of $30 to $50 per hour. In Kenya, for the same type of annotation work, Remotasks pays $0.01 for tasks lasting several hours. Same technology, same contribution, but pay that differs by a factor of up to 3,000.

Who really pays the price for AI? 💭

Every time you use ChatGPT, Gemini, Claude, or any other generative AI tool, you benefit from the invisible work of thousands of people you’ll never see. Workers who sacrifice their mental health, their well-being, sometimes their humanity, for a few cents. Ghosts who make your technological revolution possible.

The dominant discourse on AI highlights innovation, sophisticated algorithms, computing power. It systematically forgets to mention this reality: generative AI isn’t magic. It relies on massive exploitation of human labor, outsourced to the most vulnerable countries, kept invisible through opaque subcontracting chains.

The question is no longer whether AI will replace humans, but rather recognizing that it simply cannot function without them. And deciding collectively whether we accept this technological revolution being built on the backs of exploited workers, or whether we demand that tech giants assume their responsibilities.

Joan Kinyua, president of the Data Labelers Association, puts it best: « Behind every AI advancement are human hands and hearts. It’s time they receive the respect and protection they deserve. »

💬 Were you aware of these invisible AI workers? Do you think tech giants should be legally required to guarantee decent working conditions throughout their production chain? Would you be willing to pay more for your ChatGPT or Claude subscriptions if it meant fair wages for these workers? Share your thoughts in the comments.


