Our 2025 Trust & Safety Snapshot breaks down the biggest shifts in child safety, AI risks, regulatory changes, and election integrity. Get the expert insights driving decisions at leading tech companies.
In March 2024 we rolled out our first Trust & Safety Market Research Report. Since then, teams at top platforms have moved beyond pilot projects - building safety into core product flows and opening up community-driven toolkits like ROOST to power broader protections.
Teams now face a triple threat: rising political polarization fuels disinformation that can spill into real-world conflict, generative AI accelerates both attack and defense tactics, and bad actors adapt to every new safeguard. At the same time, as companies train ever more powerful models, they are leaning on local and cultural insight to keep AI-driven tools on track.
Against this backdrop, demand for Trust & Safety solutions is poised to nearly double from 2023 to 2028 as in-house teams buckle under staffing cuts. That gap is opening doors for software and specialist services across four core areas - child safety, shifting regulations, AI safeguards and election integrity - that will drive the conversation at TrustCon 2025.
"As the Internet becomes ever more immersive for users through influencer content, generative AI, the metaverse, and beyond, we need to think more old school about child safety, perhaps counter-intuitively. What are the common-sense, practical boundaries that have long been in place in the brick and mortar world in the US to keep kids on playgrounds and playing fields, while out of strip clubs, drug markets, and other danger zones? How can we translate these trusted safeguards, which set kids up to explore in reasonably protected spaces, into the digital world kids are growing up in today and the near future?"
"AI governance ought not to be complicated. It's straightforward when you have moral clarity. With billions of users across cultures, how do we ensure shared values are baked into products and services? How do we innovate and ensure legal compliance amid political chaos and legal uncertainty? We can't set business objectives for every passing lawsuit or executive order. Administrations come and go, but human rights are cross-cultural, and they are here to stay. A simple human rights impact assessment can ensure business longevity."
"A growing consensus of research findings have now concluded that large language models aren’t as capable as originally predicted and that we aren’t as close to a world-ending artificial superintelligence as doomsayers continue to insinuate. AI video generation techniques, which have seen substantial recent improvements, are probably something to watch with regard to believable disinformation and false narratives. But ultimately, it’s the simpler machine learning mechanisms, the ones that drive things like social media recommendations, that are still causing the most harm to our society."
"Social media has had an unprecedented effect on how we engage with and make sense of the information that underpins a free and open democracy, in both positive and negative ways. The growth and availability of increasingly sophisticated AI technologies will just make the work of trust and safety professionals and transparency and accountability advocates ever more important. Like with all technologies, there will be both positive and negative impacts. More sophisticated AI systems could contribute to better and more accurate content moderation, fact-checking, and content recommendations, depending on what the companies decide to prioritize. The advancement and public availability of generative artificial intelligence tools, such as synthetic image and audio generators, could help political campaigns communicate more effectively. However, these same tools will not only provide bad actors with easier and cheaper methods to sow division and spread disinformation, but they will also make an already complicated information landscape potentially even more volatile and ripe for manipulation. Understanding how these companies' products and business decisions affect our health, safety, and democracy is a key ingredient for crafting appropriate legislation, education, and our own consumer behavior to shape a more free and safe internet."
Trust and safety work won't go away, but the way we talk about it will: "We're entering a quieter phase publicly, even as the work behind the scenes deepens. Political scrutiny, especially in the US, is pushing companies to retreat from public conversations about moderation and harm reduction. Some policymakers now frame inaction as virtue, claiming to protect free speech while avoiding tough questions about digital harm. For practitioners, this means: your work still matters. But you may need to navigate tighter communications policies, shifting internal priorities, and more pressure to 'stay in your lane.' Don't mistake the silence for a signal to stop; it's a signal to get smarter about how we frame, measure, and defend what we do."
This snapshot builds on our comprehensive 2024 Trust & Safety Market Research Report, which predicted the T&S software market would double by 2028. That analysis covered market sizing, vendor landscapes, and strategic frameworks still driving investment decisions today.
Download 2024 Report