A Jury Just Dropped a $375M Bomb on Meta, and It’s All About Child Safety
A Jury Just Ordered Meta to Pay $375 Million for Failing to Protect Children. Here Is What You Need to Know.
In a landmark verdict that could reshape how Big Tech companies are held accountable, a New Mexico jury ruled that Meta knowingly chose profit over the safety of children on Facebook and Instagram — and ordered the company to pay hundreds of millions of dollars in penalties.
The Verdict That Shook Silicon Valley
On Tuesday, March 24, 2026, a jury in New Mexico delivered a verdict that sent shockwaves through the technology industry. After six weeks of testimony, hundreds of documents, and deeply emotional accounts from witnesses, the jury found that Meta — the company that owns Facebook and Instagram — had failed to protect children on its platforms from sexual predators. It had also misled users about how safe those platforms really were.
The punishment handed down was significant: $375 million in civil penalties, which Meta must pay to the state of New Mexico for violating a state law that bars companies from engaging in unfair trade practices.
New Mexico had originally asked for $2 billion. Meta’s own lawyers called that figure “shocking.” The jury landed somewhere in between — but the $375 million verdict is still one of the largest penalties ever handed down against a social media company in a case involving child safety.
Meta said it respectfully disagreed with the verdict and announced it would appeal. But for now, the ruling stands as a powerful statement: that the courts — and the public — are no longer willing to simply take the word of tech giants when they say their platforms are safe.
What Was This Case Actually About?
The case was brought by New Mexico’s Attorney General, Raúl Torrez. At its heart, it was about a simple but deeply troubling allegation: that Meta knew its platforms were being used by adults to target and exploit children — and instead of fixing the problem, the company hid it, downplayed it, and continued putting growth and profit first.
New Mexico prosecutors made three central arguments in court. First, that Meta had long been aware of serious safety problems affecting children on Facebook and Instagram. Second, that the company had publicly claimed to enforce a minimum age limit of 13 for its users — but in practice did very little to actually keep younger children off the platforms. Third, and perhaps most alarming, that Meta’s own algorithms actively made it easier for predators to find and connect with child victims.
State attorney Linda Singer laid out the core of the argument in her closing statement. She told the jury that the problems they had heard about during the trial were not accidents or oversights. They were not mistakes made by well-meaning engineers who simply missed something. They were, she said, the result of deliberate choices made at the highest levels of a powerful company.
“The safety issues that you’ve heard about in this case weren’t mistakes,” Singer told the jury. “They were a product of a corporate philosophy that chose growth and engagement over children’s safety. And young people in this state and around the country have borne the cost.”
Those are serious words. And the jury, after hearing weeks of evidence and testimony, agreed with them.
The Sting Operation That Exposed the Problem
One of the most dramatic pieces of evidence presented in the trial came from a sting operation conducted by New Mexico state officials. Investigators set up fake test accounts on Meta’s platforms that appeared to belong to young users, specifically to see whether the platforms would expose those accounts to adult sexual content and to contact from predators.
What they found was deeply troubling. According to the lawsuit, these test accounts were quickly bombarded with explicit adult content, including graphic images and videos. The accounts also received outreach from adults who appeared to be sexual predators — including at least one person who allegedly offered a six-figure payment in exchange for the account holder appearing in a pornographic video.
The sting operation was not just a research exercise. It led to real consequences: local police made at least three arrests as a result of what investigators uncovered.
The fact that officials were able to set up fake accounts and within a short period attract this kind of harmful contact raised serious questions about how effective Meta’s safety systems actually are. If trained investigators can expose the problem so easily, what does that mean for the real children who are on these platforms every day?
The Whistleblower: A Father’s Heartbreaking Testimony
Among all the witnesses who testified during the six-week trial, perhaps none left a more lasting impression than Arturo Béjar.
Béjar is not an outside critic of Meta. He is a former insider — someone who worked at the company as a safety researcher and spent years trying to make its platforms safer from within. When he left the company, he became a whistleblower, speaking publicly about what he says he witnessed behind closed doors.
On the witness stand in New Mexico, Béjar’s testimony became deeply personal. He described the moment he discovered that his own daughter — who was 14 years old at the time — had received unsolicited explicit messages from strangers shortly after creating her first Instagram account. The messages included graphic images that no child should ever receive.
For Béjar, this was not just a professional issue. Here was a man who had worked inside one of the most powerful companies in the world, trying to make its platforms safer — and his own teenage daughter had still been exposed to exactly the kind of harm he had been warning about.
But Béjar also made a more technical point — one that strikes at the heart of the case against Meta. He testified that the company’s recommendation algorithms were not just failing to prevent predators from finding children. They were actively helping predators find them.
“The product is very good at connecting people with interests,” Béjar testified, “and if your interest is little girls, it will be really good at connecting you with little girls.”
That single sentence, delivered in a courtroom, captured what many child safety experts have been saying for years: that the same powerful technology designed to connect people with things they like can become a tool for exploitation when the “interest” involved is the targeting of children.
Half a Million Cases a Day: The Internal Email That Said It All
One of the most damaging pieces of evidence presented during the trial was not something discovered by outside investigators. It came from inside Meta itself.
Court documents unsealed as part of the New Mexico case included an internal company email in which a researcher warned Meta’s own executives that there could be as many as 500,000 cases of online sexual exploitation of children happening every single day on Facebook and Instagram.
Let that number sit for a moment. Five hundred thousand. Per day. On platforms that the company publicly markets as safe places for people to connect.
The existence of this email is significant for one key reason: it shows that the problem was not unknown to the company. Someone inside Meta saw the data, understood what it meant, and sent a warning to senior leadership. What happened after that warning was sent — whether it was acted on, filed away, or ignored — is a question that has enormous legal and moral weight.
Prosecutors argued that evidence like this proved their core claim: Meta was not simply failing to stop harm. The company was aware of the harm, and it kept operating in ways that allowed that harm to continue.
What Meta Said in Its Own Defense
Meta did not go quietly. The company fought hard throughout the trial, and its defense team made a series of arguments it hoped would convince the jury to reject New Mexico’s case, or at least to significantly reduce the penalty.
Meta’s attorney, Kevin Huff, emphasised the scale of the company’s safety efforts. He told the jury that Meta employs 40,000 people whose job is to make its platforms as safe as possible. He pointed to the company’s investments in automated tools designed to detect and remove harmful content. And he argued that Meta is continually working to improve, and that the challenge is enormous because the bad actors it fights are constantly adapting and finding new ways to evade detection.
“Meta has built innovative, automated tools to protect people,” Huff told the jury. He described the $2 billion penalty New Mexico originally sought as “a shocking number” and argued that it was wildly out of proportion to any actual harm.
After the verdict was announced, Meta released a statement saying it “respectfully disagreed” with the jury’s finding and planned to appeal. A company spokesperson said: “We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content.”
The company maintains that it has taken many steps to improve child safety and that the challenges involved in policing a platform used by billions of people around the world are genuinely complex. These are not entirely unreasonable points. But the jury, after hearing both sides in detail, was not persuaded that Meta had done enough — or that it had been honest with users about the risks.
The Age Limit That Was Never Really Enforced
One of the central accusations in the New Mexico case concerned Meta’s claimed minimum age limit of 13 years old for users of Facebook and Instagram.
Both platforms publicly state that users must be at least 13 to create an account. This age requirement is widely known and has been in place for many years. But prosecutors argued that in reality, Meta did very little to enforce it — that children younger than 13 were routinely able to sign up and use the platforms without any meaningful checks in place.
This matters for a very specific reason. Children below the age of 13 are protected by a federal law called the Children’s Online Privacy Protection Act, commonly known as COPPA. Under COPPA, companies are required to obtain verified parental consent before collecting data from children under 13. By allowing underage children to sign up without proper checks, Meta may have been violating federal law while also exposing very young children to the kinds of harm documented throughout the trial.
Critics of the social media industry have long argued that the 13-year minimum age requirement is essentially a piece of legal fiction — a rule that exists on paper but is not enforced in practice because enforcing it would reduce the number of users on the platform, which would reduce advertising revenue. Meta has denied this characterisation, but the jury’s verdict suggests at least some of those arguments landed with the people who heard the evidence.
Growth Over Safety: The Business Logic Behind the Harm
To understand why a company as sophisticated and well-resourced as Meta would allow these problems to persist, it helps to understand how its business model works.
Meta does not charge users to create accounts on Facebook or Instagram. The service is free. Instead, the company makes its money from advertising. Advertisers pay to show their products and messages to Meta’s users. The more users the platforms have, the more time those users spend, and the more data Meta collects about their interests and behaviour, the more valuable and more precisely targeted that advertising becomes.
This creates a powerful incentive to maximise user numbers and engagement time. Every new user — including every young teenager who creates an account — adds to the pool of people who can be shown advertisements. Every extra minute a user spends scrolling through their feed is another opportunity for Meta to show them an ad.
Prosecutors in New Mexico argued that this business logic — prioritise growth, maximise engagement, keep people on the platform as long as possible — directly conflicted with the goal of keeping children safe. Safety measures often reduce engagement. Restricting what content users can see, limiting who can contact them, and adding friction to the sign-up process all potentially reduce the time people spend on the app. And less time on the app means less advertising revenue.
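To make that tension concrete, here is a deliberately simplified sketch in Python. Every number in it is hypothetical, and it models only the basic arithmetic of an advertising-funded platform: revenue scales with users and time spent, so any safety measure that trims engagement shows up directly as modelled lost revenue.

```python
# Toy model of an advertising-funded platform's incentives.
# Every figure here is hypothetical, chosen only to illustrate
# why engagement-reducing safety measures carry a revenue cost.

def daily_ad_revenue(users: int, minutes_per_user: float, revenue_per_minute: float) -> float:
    """Revenue scales with total attention: users x time x ad rate."""
    return users * minutes_per_user * revenue_per_minute

USERS = 100_000_000   # hypothetical daily active users
MINUTES = 30          # hypothetical average minutes per user per day
RATE = 0.002          # hypothetical ad revenue per user-minute, in dollars

baseline = daily_ad_revenue(USERS, MINUTES, RATE)

# Suppose stricter contact limits and sign-up friction trim
# average engagement by 10%.
with_safety = daily_ad_revenue(USERS, MINUTES * 0.9, RATE)

print(f"Baseline revenue:      ${baseline:,.0f} per day")
print(f"With safety friction:  ${with_safety:,.0f} per day")
print(f"Modelled daily cost:   ${baseline - with_safety:,.0f}")
```

On these made-up numbers, a 10 percent dip in engagement costs the model $600,000 a day, roughly $219 million a year. The real economics are vastly more complicated, but this is the shape of the incentive prosecutors described: in an attention-funded business, safety friction carries a visible, countable cost, while the benefits of safety never show up on the revenue line.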
“They were a product of a corporate philosophy that chose growth and engagement over children’s safety,” Singer told the jury in her closing argument. Whether or not you agree with the full verdict, that framing captures a tension that sits at the core of how advertising-funded social media platforms operate.
New Mexico Is Not Alone: The Bigger Legal Battle Facing Meta
The New Mexico verdict is a major moment. But it is also just one battle in a much larger war being fought in courtrooms across the United States — and potentially beyond.
Meta is currently facing lawsuits and legal challenges from dozens of state attorneys general, as well as private lawsuits brought on behalf of families whose children have allegedly been harmed by its platforms. At the centre of many of these cases is the same fundamental question: when a technology company knows that its platform is causing harm to users — particularly young users — what legal responsibility does it bear?
In California state court, Meta and Google-owned YouTube are currently awaiting a jury’s verdict in a separate case. That lawsuit claims that both companies fuelled social media addiction in young people, knowing full well that their products were harming mental health. Both companies deny wrongdoing, but the trial has produced its own set of uncomfortable revelations about what the companies knew and when they knew it.
Together, these cases represent a turning point in how American courts and lawmakers are thinking about the legal liability of social media platforms. For decades, tech companies enjoyed broad protection under a federal law called Section 230 of the Communications Decency Act, which largely shielded them from being held responsible for content posted by their users. But the current wave of lawsuits is testing the limits of that protection — and the New Mexico verdict suggests those limits may finally have been reached.
What This Means for Families and Children Right Now
For parents, guardians, and children, the New Mexico verdict raises an urgent and practical question: what does this mean for the social media platforms that millions of young people use every day?
In the short term, the verdict is unlikely to change much on the platforms themselves. Meta will appeal the ruling, and the legal process will take time. The company is not going to overhaul its products overnight in response to a single court decision, even a significant one.
But in the longer term, the accumulation of legal pressure — from New Mexico and the many other cases still working their way through the courts — may force real changes. If Meta and other social media companies face the prospect of billions of dollars in penalties for failing to protect children, the financial incentive to invest more heavily in safety measures becomes much harder to ignore.
In the meantime, child safety organisations have a simple message for families: do not wait for the courts or the companies to solve this problem for you. Talk to your children about online safety. Know what platforms they are using and who they are talking to. Understand the privacy and safety settings available on every app, and make sure they are set appropriately. And take seriously any signs that a child is being contacted by an adult in a way that feels wrong.
Arturo Béjar — the whistleblower whose own daughter received disturbing messages on Instagram — became an advocate for change precisely because he saw the gap between what the platforms promise and what they actually deliver. His message, and the message of this verdict, is that children deserve better. And that the companies profiting from their attention have a responsibility to provide it.
A Landmark Moment — But Not the End of the Road
The $375 million verdict against Meta is historic. It is one of the largest financial penalties ever imposed on a social media company for failing to protect children. It will be studied by lawyers, lawmakers, and technology companies around the world. And it sends a clear message: the era of no consequences for social media platforms may finally be drawing to a close.
But it is not the end of the story. Meta will appeal. The legal fight will continue. Other cases are still being decided. And the children who are on Facebook and Instagram right now — the millions of teenagers scrolling through their feeds, making friends, sharing photos, and navigating an online world that can be both wonderful and dangerous — will still be there while the lawyers argue.
The New Mexico jury heard weeks of evidence and came back with a verdict that said, plainly and loudly: what happened here was wrong. Growth is not more important than safety. Profit is not more important than children. And companies that claim to protect their users while failing to do so will be held to account.
Whether that message ultimately changes the way the most powerful technology companies in the world operate is the question that will define the next chapter of this fight.