The Debate

Why Tech Firms Can’t Afford to Ignore Southeast Asia’s Cyber Scam Slavery Crisis

There are many business and reputational risks of ignoring forced criminality in tech supply chains.


Late last month, the U.S. Department of State released its annual Trafficking in Persons Report, warning of one of the most urgent crises of our time. Transnational organized crime syndicates, led primarily by Chinese mafia networks, are luring an estimated 300,000 victims with fake job ads on social media, trafficking them into heavily guarded compounds, and coercing them to run online scams ranging from crypto-investment fraud and AI-generated voice schemes to romance scams on dating apps.

This form of modern slavery hides behind screens, rather than brothels or factory walls, drawing global technology companies into its orbit. Recent data from the civil society platform Freedom Collaborative shows that traffickers are now moving victims from East Africa into Southeast Asia’s cyber scam compounds, where they are forced under threat of violence to perpetrate fraud operations worth billions. Last month, U.S. Treasury sanctions underscored the scale of the crisis: in 2024 alone, Treasury said, Americans lost more than $10 billion to cyber scams powered by human trafficking networks, a perfect storm of crime, technology, and exploitation. But this is not a remote human rights issue for NGOs to fix. It is a profound systemic threat that tech firms, AI companies, and social media platforms must take seriously as part of their reputational, legal, and financial risk matrix.

Increasingly, these operations are harnessing AI platforms such as ChatGPT, DeepSeek, and Gemini to generate convincing scam messages, automate victim interactions, and scale their schemes at unprecedented speed. One recent investigation showed how trafficked workers were forced to use AI tools to defraud unsuspecting users, a chilling intersection of modern slavery and advanced technology. If a company’s facial-recognition software, chat models, dating algorithms, or ad networks become even indirectly tied to these abuses, they risk being cast in the same light as the criminals themselves.

Consumers and regulators no longer distinguish between “neutral infrastructure provider” and complicit actor; they see systems of harm. The only way to avoid that reputational spiral is to know and control exposure proactively.

Put simply, the more large-scale scams succeed, the more tech platforms’ accounts, payment rails, and partners suffer. Every coerced scammer generates fraudulent activity that ripples across credit systems, payment gateways, banks, and insurance providers. Tech firms may end up absorbing chargebacks, being delisted by banks, or seeing partner institutions crack down on them.

Worse, the line between scam and “legitimate business” can blur. Some traffickers cultivate fronts, shell companies, or seemingly legitimate services to funnel illicit profits through corporate software or cloud services. Since earlier this year, the U.S. government has been sanctioning corrupt ethnic militias in Myanmar that offer protection to the Chinese criminal syndicates tied to digital scams and forced labor.

Governments are waking up. The U.S. Treasury has now sanctioned networks, operators, and even infrastructure providers supporting these scams. Some targets include persons and companies in Myanmar’s “scam city” Shwe Kokko, tied to forced labor, gambling, and online fraud.

Soon, regulators may demand that tech firms have visibility into whether users are coerced or trafficked. Failure to act could invite new laws, making platforms responsible for screening, reporting, or even blocking high-risk accounts, such as in the recent PornHub ruling.

What does action actually look like? It starts with companies taking a hard, unflinching look at their own systems. Tech firms must understand where their platforms and tools intersect with this crisis, whether in terms of dating apps, payment systems, ad networks, or AI models that could be co-opted by traffickers. From there, the responsibility grows: investing in smarter detection tools that can recognize coercion through patterns like sudden location shifts, irregular withdrawals, or accounts operating under obvious duress.
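To make the idea of coercion-pattern detection concrete, here is a purely illustrative sketch in Python. Every field name, signal, and threshold below is a hypothetical assumption for the sake of example, not any platform's actual screening logic; real systems would combine many more signals with human review.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    # Hypothetical telemetry fields a platform might track per account.
    account_id: str
    countries_past_week: list   # ISO country codes seen in recent logins
    withdrawals_past_day: int   # count of cash-out events
    messages_per_hour: float    # outbound message rate

def coercion_risk_flags(activity: AccountActivity,
                        max_countries: int = 2,
                        max_withdrawals: int = 5,
                        max_msg_rate: float = 60.0) -> list:
    """Return human-readable risk flags; thresholds are illustrative only."""
    flags = []
    # Sudden location shifts: logins from many countries in a short window.
    if len(set(activity.countries_past_week)) > max_countries:
        flags.append("sudden-location-shift")
    # Irregular withdrawals: an unusual burst of cash-out events.
    if activity.withdrawals_past_day > max_withdrawals:
        flags.append("irregular-withdrawals")
    # Possible duress: abnormally high, sustained outbound messaging.
    if activity.messages_per_hour > max_msg_rate:
        flags.append("abnormal-message-volume")
    return flags
```

A flagged account would then be routed to the kind of transparent, victim-aware review process described below, rather than being cut off automatically.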

But companies must work hand-in-hand with civil society groups, law enforcement, and financial institutions. These are the people on the front lines who know what trafficking signals actually look like. When platforms uncover risks, they must handle them with transparency and care, reporting openly on red flags while also building pathways for victims to get help rather than simply cutting them off.

And finally, the scrutiny must reach down the supply chain. Infrastructure providers, data centers, and vendors should be required to check for signs of misuse, with companies following the money to ensure they aren’t indirectly funding or enabling these networks.

The transnational threat of online scamming is evolving at breakneck speed. As traffickers change their routes overnight, exploiting weak links across borders, regulators are moving just as swiftly to trace the money, sanction enablers, and tighten oversight. Tech and AI companies cannot afford to dismiss this as someone else’s problem. Caring about this crisis is a matter of corporate responsibility. It is also about safeguarding reputations, averting legal and financial fallout, and ensuring that tech platforms don’t become unwitting accomplices to the scourge of cyber-slavery.