
Our Kids Shouldn't Be Silicon Valley's Guinea Pigs for AI | Opinion


In a new federal lawsuit, Florida mother Megan Garcia is seeking accountability for harmful AI technology—and wants to warn other parents.

The lawsuit, recently filed against app maker Character.AI and its founders, alleges that the company knowingly designed, operated, and marketed a predatory AI chatbot to children, causing the death of her 14-year-old son earlier this year.

Garcia's son died by suicide in February after months of abusive interactions with a Character.AI chatbot. Her complaint includes evidence that the chatbot posed as a licensed therapist, actively encouraged suicidal ideation, and engaged in highly sexualized conversations that would constitute abuse if initiated by a human adult.

Garcia accuses the companies of causing her son's death, knowingly marketing a dangerous product, and engaging in deceptive trade practices. We have to ask ourselves how this tragedy could have been prevented—and why we have allowed Silicon Valley to experiment on our kids to begin with.

Today, companies like OpenAI, Meta, Microsoft, Amazon, Character.AI, and Google operate in a liability-free zone. This lack of accountability means these companies have little incentive to thoroughly test their products for potential harms before releasing them to the public. Without legal consequences, they are able to treat society as a testing ground for their latest innovations, a practice that is particularly egregious when it comes to the most vulnerable members of our society: our children. This accountability vacuum has allowed unchecked experimentation with our democracy, mental health, and privacy.

Luckily, we've proven that we can hold companies liable for the harm that they cause. In 1972, 13-year-old Richard Grimshaw suffered severe burns when a defective Ford Pinto's gas tank erupted in flames. Grimshaw's lawsuit against the Ford Motor Company resulted in the largest product liability award in U.S. history up to that point, forever altering the auto industry's approach to risk. Grimshaw's tragedy became a watershed in American consumer safety.

Today, product liability is the invisible structure underpinning our lives as consumers and citizens, and it is what protects our kids from harm. Liability helps us "see" and prevent harms that even the most alert parents may not be able to anticipate. Liability is the reason we can buy toys at the store for our children without worrying about hidden dangers lurking inside the plastic clamshell packaging, or trust that a toddler's car seat will actually help prevent injuries in the event of an accident.

If a company's negligence does lead to harm—whether it's a faulty airbag, exposed wiring, or harm to a child—we have legal recourse to seek compensation and justice. The threat of legal action compels companies to design and build safer products from the outset.

Today, as American families face powerful new technologies, the tech industry lobbies to remain exempt from accountability. It is aided by a judiciary that has favored its expansive interpretation of Section 230 of the Communications Decency Act and its weaponization of the First Amendment. When it comes to product liability, the tech industry has taken us backward in history, with "caveat emptor"—or "buyer beware"—now dominating our modern, digital lives.

The story of Garcia and her son is a devastating example of the harm that AI systems can inflict on a family. And while companies make big promises in press releases about forthcoming safety features, ultimately those press releases serve to protect their reputations—not their users.

Before these harms accelerate and touch more lives, Congress and state legislatures must act to establish that tech companies have a clear duty to exercise reasonable care in the design of their products. This duty forms the core of the legal liability framework that every other successful American industry abides by.

By applying the same laws to tech companies that already apply to other manufacturers, we're not stifling innovation. We are channeling the sort of innovation that has made American companies industry leaders for decades and allowed families to feel safe using American products. A framework of liability will foster public trust, which is essential for the widespread adoption and success of AI technologies.

We have a choice. We can allow AI to become yet another realm where tech companies operate with impunity and prioritize profit margins over people—including American children. Or we can learn from history and establish a framework of accountability from the outset. Liability has protected consumers—and families—in countless ways throughout the modern era. It's time to extend that protection to the frontier of AI.

Casey Mock is chief policy officer at the Center for Humane Technology.

The views expressed in this article are the writer's own.
