“Where Are We Now: Section 230 of the Communications Decency Act of 1996.”

Steve Rosenbaum
Apr 12, 2024

4/11/24

House Energy and Commerce Committee

“Where Are We Now: Section 230 of the Communications Decency Act of 1996.”

Chaired by Representative Cathy McMorris Rodgers (R-WA)

The subcommittee on Communications and Technology invited the following witnesses:

  • Dr. Mary Anne Franks, Professor of Intellectual Property, Technology, and Civil Rights Law, George Washington University Law School (written testimony)
  • Mary Graw Leary, Professor of Law, The Catholic University of America School of Law, and Visiting Professor of Law, The University of Georgia School of Law (written testimony)
  • Dr. Allison Stanger, Professor of International Politics and Economics, Middlebury College (written testimony)

Transcript:
Rep. Bob Latta (R-OH):

Well, good afternoon. The subcommittee will come to order and the chair recognizes himself for an opening statement. Good afternoon and welcome to today's hearing on Section 230 of the Communications Decency Act. In 1996, in the early days of the internet, Section 230 was enacted to provide online platforms immunity from liability for content posted by third-party users. This legal protection fostered an online ecosystem that led to the creation of social media platforms that promoted user-generated content, social interaction, and innovation. Section 230 has two main mechanisms. First, a provision that exempts platforms from being held liable for content that is posted on their website by a third-party user. And second, a provision that exempts platforms from being held liable for content that they remove or moderate in good faith. This dual liability protection is often referred to as the sword and the shield: the sword being the ability for platforms to remove content, and the shield being the liability protection for content posted by users of the platform.

As the internet has evolved and become deeply integrated into our daily lives, we have encountered new challenges and complexities that require a reevaluation of Section 230's role and impact. One of the most pressing concerns is the power that Section 230 has given to the social media platforms. Big tech is able to limit free speech and silence viewpoints, especially those that they do not agree with. There are countless instances where individuals and groups with conservative viewpoints have faced censorship, deplatforming, and content moderation practices. In contrast, big tech continues to leave up highly concerning content. The prevalence of illegal activities such as drug sales, human trafficking, and child exploitation on some platforms underscores the need for stronger mechanisms to hold platforms accountable for facilitating or enabling harmful behavior. Big tech's authoritarian actions have led to several court cases challenging the scope of Section 230's liability protection.

Over the years, the courts have shaped the broad interpretation and application of the law. Some argue the courts have provided big tech with too much liability protection. Last year, two high-profile cases related to terrorist activity on platforms were considered before the Supreme Court. In one case, the law was upheld, and in the other case, which challenged Section 230's application to content promoted by algorithms, the court declined to rule. This year, two more cases are before the Supreme Court related to a state's ability to regulate how social media platforms moderate content. It has become clear that Congress never contemplated the internet as it exists today when Section 230 was enacted. While the courts have too broadly interpreted the original intent of this law, numerous Supreme Court justices declared last year that it's up to Congress, not the courts, to reform Section 230. It's time for Congress to review the current legal framework that shields big tech from accountability for their decisions. We must determine how to strike a balance between protecting online speech and holding platforms accountable for their role in amplifying harmful and illegal content. I look forward to hearing from our witnesses and working with my colleagues on thoughtful and targeted reforms to Section 230, and with that, I will yield back the balance of my time and at this time I will recognize the gentle lady from California's 16th District for an opening statement.

Rep. Anna Eshoo (D-CA):

Thank you, Mr. Chairman, and I want to thank the witnesses for being here today. I'm really looking forward to what you will advise us of. There aren't many members that can say I was a conferee for the Telecommunications Act of 1996, but I was, and that work included Section 230 of the Communications Decency Act. Now I continue to strongly believe in Section 230's core benefit, which is to protect user speech, but when algorithms select what content will appear, personalized for each user, the platform is more than just a conduit transferring one user's speech to others, and should not be immune from courts examining whether its actions cause harm. Withdrawal of immunity is not the same, in my view, as the imposition of liability. Those harmed should have the opportunity to confront the platforms in court and prove that they did not meet an established standard of care, and platforms should have the opportunity to defend themselves.

When we adopted Section 230 so many years ago, the internet was a nascent technology. It was like a little baby in the crib. I know because it was born in my district, and we didn't want to stifle innovation. We had that at the forefront of our work. As we drafted and debated and discussed, we recognized that an open internet risked encouraging noxious activity, so we enlisted the tech companies to be partners in keeping it clean, giving them immunity for Good Samaritan efforts that over- or under-filtered objectionable content. It's been 28 years. 28 years since Congress adopted Section 230, and in my view, it's clear that we've made mistakes. It's allowed online platforms to operate with impunity despite the harms they have wrought. They have knowingly and recklessly recommended content that harms children. Every policy in this country should start with no harm to the children, and there has been enormous harm done to children, also abuses of women and marginalized communities, and the radicalizing of Americans through the spread of misinformation and disinformation, threatening our very democracy.

When Congress passed Section 230, we did not foresee what the internet would become and how it would be used. We have the experience now. All we have to do is look over our shoulders and peruse 28 years' worth. We didn't anticipate the harms to children, its use for the illegal sale of arms and opioids, the abuse and harassment of women and, as I said before, marginalized communities, especially through revenge pornography, deepfakes, doxing, and swatting. This is a very long list of dark undertakings. No one can be proud of that, no one, and it's not defensible in my view. We didn't anticipate how it would be exploited to spread misinformation and disinformation, interfere with our elections, and threaten the foundations of our democracy and society, and we didn't anticipate online platforms designing their products to algorithmically amplify content despite its threats to the American people. All of this necessitates that Congress update the law. I very much appreciate the chairman holding this hearing on this highly important topic. And again, I'll circle back to how I started: I genuinely look forward to the witnesses' testimony and discussion. And Mr. Chairman, I yield back.

Rep. Bob Latta (R-OH):

Thank you. The gentle lady yields back. The chair now recognizes the gentle lady from Washington, the chair of the full committee, for five minutes.

Rep. Cathy McMorris Rodgers (R-WA):

Thank you. Thank you. Good afternoon. Thank you, Mr. Chairman. Last month this committee led a bill that passed out of the House with overwhelming support to protect Americans against national security threats posed by TikTok. The Protecting Americans From Foreign Adversary Controlled Applications Act is significant legislation that will protect Americans and our children from a CCP-controlled social media company that threatens American national security and fails to uphold our values. That debate has also reignited longstanding concerns about US social media companies and how Congress can keep them transparent and accountable to Americans. Today we'll examine the law that provides the most significant protections for those social media companies: Section 230 of the Communications Decency Act of 1996. A lot has changed since then, from recent developments in artificial intelligence and its applications to the growth of big tech and other companies that have become increasingly integrated into our everyday lives.

Needless to say, this law is long overdue for meaningful updates, and I look forward to discussing those today. As written, this law was originally intended to protect internet service providers from being held liable for content posted by a third-party user or for removing horrific or illegal content. The intent was to make the internet a safe space for users to connect and find information. However, the internet has changed dramatically since then. As a result, Section 230 is now being weaponized by big tech against Americans. Big tech actively curates the content that appears on their platforms in order to control what we see and what we're allowed to post. This level of moderation is similar to that of a traditional newspaper or publisher, which carefully curates the articles, opinions, and information it publishes for its readers. Just as a newspaper editor chooses which stories make it to the front page and which ones are relegated to the inside pages, big tech companies make decisions about the visibility and accessibility of content on their platforms. As these companies increasingly evolve and act more like publishers, they have a responsibility to the American people to moderate their platforms in a fair way that upholds American values like free speech. No other class of company in the United States has full immunity from liability like big tech. The reality is that for years these companies have failed to be good stewards of their platforms, especially when it comes to how they're harming our kids. We've seen numerous reports detailing how big tech encourages addictive behaviors in our children in order to keep them glued to their screens and fails to protect their users from malicious actors on their platforms. We've all heard countless heartbreaking stories of drug dealers targeting children with illegal drugs, including counterfeit drugs laced with fentanyl, which are killing hundreds of Americans every single day.

We also see platforms failing to take action to address cyberbullying and harassing content, which is contributing to the rise in teen mental health issues. Parents and victims are unable to hold these platforms accountable for content they promote or amplify due to the way laws like Section 230 are currently written. This legislative shield allows big tech to hide from expensive lawsuits, and no one is held responsible for the loss of innocent lives. I've said it before and I'll say it again: big tech remains my biggest fear as a parent, and they need to be held accountable for their actions. These issues are not new. Last Congress, we created the Big Tech Accountability Platform to examine these topics, and I led a proposal to reform Section 230. Big tech is abusing the power granted to them by Congress. They're censoring Americans, allowing and promoting illegal content, and turning a blind eye to how their platforms endanger our children. It is long past time to reevaluate this unchecked power, and I'm hopeful that this hearing is the start of an opportunity to work in a bipartisan way to do just that. It's vital that we identify solutions that restore people's free speech online. I look forward to the hearing today, I appreciate the witnesses being here, and I yield back.

Rep. Bob Latta (R-OH):

Thank you. The gentle lady yields back. The chair now recognizes the gentleman from New Jersey, the ranking member of the full committee for five minutes.

Rep. Frank Pallone (D-NJ):

Thank you Mr. Chairman. We’re here today to talk about Section 230 of the Communications Decency Act and Section 230 was codified nearly 30 years ago as a Good Samaritan statute designed to incentivize interactive computer services like websites to restrict harmful content. It’s been critically important to the growth of the internet, particularly in its early stages, but much has changed in the last 30 years and unfortunately, in recent years, Section 230 has contributed to unchecked power for social media companies that has led them to operate their platforms in a state of lawlessness. So I’m pleased this hearing is bipartisan. Democrats and Republicans have come together recently to address challenges presented by the rising influence of big tech in our daily lives and the evolving communications landscape. Earlier this year we worked together to address the dangers of allowing the Chinese Communist Party to control TikTok.

We also passed my legislation with Chair Rodgers restricting the sale of Americans' data to foreign adversaries, and that bill unanimously passed the House last month, something that's almost unheard of in the House right now. I'm hopeful that we can continue to focus on the areas where Democrats and Republicans can agree: social media platforms are not working for the American people, especially our children. Whether it's videos glorifying suicide and eating disorders, dangerous viral challenges, merciless bullying and harassment, graphic violence, or drug sales, pervasive and targeted harmful content on these platforms is being fed nonstop to children and adults alike. And worse yet, the platforms are playing an active role in shaping these messages, connecting users to one another, promoting and curating this content and monetizing it. Social media companies are putting their own profits ahead of the American people, and Section 230 is operating as a shield, allowing the social media companies to avoid accountability to the victims and to the public for their decisions. The fact that this relatively simple provision of law now operates as a near-complete immunity shield for social media companies is due to egregious expansion and misinterpretation by years of judicial opinions. Congress should not wait for courts to reverse course. We have to act now. There was a chance last year when the Supreme Court had the opportunity to decide the very important question of whether algorithmic amplification was protected by Section 230, but instead the court declined to offer an opinion and remanded the case back to the lower court. The Supreme Court's inaction leaves the status quo in place. Bad Samaritans who facilitate the most egregious and heinous activities continue to receive protection from a statute intended to promote decency on the internet. Unfortunately, the successful use of Section 230 as a shield in court has emboldened more companies to use the statute in ways far beyond its initial aims.

Just recently, one voice provider invoked it to evade liability for fraudulent robocalls. Now despite all of this, some courts have started to more closely scrutinize the limits of the Section 230 shield, and while these cases do not always result in platforms ultimately being held legally liable for harm, they have shed light on the important distinctions between third-party content and the actions of the platforms themselves. Moreover, the recent success of these claims has poured cold water on the argument that limiting Section 230 immunity and allowing consumers to successfully sue social media platforms will destroy the internet as we know it. However, this slow-moving, piecemeal approach is unsustainable. As one circuit court judge wrote in considering Gonzalez v. Google, and I quote, "there's no question Section 230 shelters more activity than Congress envisioned it would." The judge went on to say the questions around broad interpretation of Section 230 immunity are, and I quote, "pressing questions that Congress should address," and today marks a first step in trying to find a bipartisan solution to the Section 230 problems. So the get-out-of-jail-free card enjoyed too often by big tech is an extraordinary protection afforded to almost no other industry. This protection is not appropriate. It has to be reformed. While online platforms have been a positive force for free speech and the exchange of ideas, too often they function more like fun house mirrors, distorting our discourse and reflecting our worst qualities. And the sad reality is this is often by design, because the platforms are not passive bystanders; they knowingly choose profits over people and use Section 230 to avoid any accountability, with our children and our democracy paying the price. So I'm hopeful that after hearing from these experts today, we can work together on long overdue fixes of Section 230. I look forward to the discussion. I did want to say that I saw that Professor or Dr. Allison Stinger, or Stanger I should say, is a professor of international politics and economics at my alma mater, Middlebury College in Vermont. Good to see you. I have to say that when I was there, I only took one course, intro to political science with Murray Dry, but I was the head of the student government, so I did get my start there, but thank you Mr. Chairman. I yield back.

Rep. Bob Latta (R-OH):

The gentleman yields back. How did you do as the head of the student government?

Rep. Frank Pallone (D-NJ):

Oh, well you don’t really want to hear this. Maybe we don’t. This was a very tumultuous time, but I won’t say because it was so long ago. I don’t want it to reveal my age.

Rep. Bob Latta (R-OH):

Oh, okay. Tell the professor how long ago it was.

Rep. Frank Pallone (D-NJ):

I graduated in 1973. She probably wasn’t even born.

Rep. Bob Latta (R-OH):

The gentleman yields back the balance of his time. This concludes member opening statements. The chair reminds members that, pursuant to the committee rules, all members' opening statements will be made part of the record. We want to thank our witnesses for being here today to testify before the subcommittee. Our witnesses will have five minutes to provide opening statements, which will be followed by questions from our members. Our witnesses before us today are Dr. Mary Anne Franks, professor of intellectual property, technology, and civil rights law at George Washington University Law School; Mary Graw Leary, professor of law at the Catholic University of America School of Law and a visiting professor of law at the University of Georgia School of Law; and Dr. Allison Stanger, professor of international politics and economics at Middlebury College. I would like to note for our witnesses that you have a timer light on the table that will turn yellow when you have one minute remaining and will turn red when your time has expired. And before we get started, if you would pull your mics up close before speaking. Dr. Franks, you are recognized for five minutes, and again, thank you for being with us today.

Mary Anne Franks:

Thank you very much. Section 230 is often referred to as the 26 words that created the internet. It's a really catchy description, but it's also a really revealing one. When you glance at Section 230, you realize that it's a lot longer than 26 words. It's got multiple sections, subsections that detail congressional findings, policy objectives, definitions, exceptions, and so on, and all in all it runs about a thousand words. The particular 26 words that are credited with the creation of the internet come from subsection (c), the law's operative provision, and those words are: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This provision, (c)(1), is indeed 26 words long, and it is true that this single isolated subsection has played an essential role in creating the internet as we know it today.

That is, it has been sweepingly interpreted to allow tech companies to avoid liability for a vast array of harms inflicted by their products and their services, including life-destroying harassment, sexual exploitation, deadly misinformation, and violent radicalization. This dystopian result has been made possible by divorcing those 26 words from the rest of the law's text, its context, its title, its history, and its purpose. The title of the operative provision is Protection for Good Samaritan Blocking and Screening of Offensive Material. Good Samaritan laws are common throughout the United States and they have a specific structure. They immunize bystanders from liability when those bystanders engage in voluntary good faith efforts to assist those in need, in the hopes of encouraging people to act like Good Samaritans. Subsection (c)(2) of Section 230 does exactly this for the internet. It provides immunity from civil liability to providers and users of interactive computer services for actions voluntarily taken in good faith to restrict access to or the availability of harmful content.

When courts interpret Section 230(c)(1)'s prohibition against treating interactive computer service providers as the publishers and speakers of other information content providers as bestowing the same immunity not only on indifferent bystanders but on those who contribute to or even profit from harmful content, they render the entire statute incoherent. This can be illustrated through reference to the original biblical parable of the Good Samaritan. A traveler is beaten by robbers and left half dead by the side of the road. A priest sees him and steps over to the other side. A Levite does the same. And then finally a man from Samaria sees the injured traveler, and even though it costs him time and it costs him money, he stops, he tends to the man's wounds, and he takes him to an inn to receive further care. If the Samaritan's voluntary good faith rescue attempts are unsuccessful or incomplete or they cause unintentional harm, he is not liable, but it would make no sense to extend the same immunity to the priest or to the Levite who did nothing to help, or, even more absurdly, to the robbers who assaulted the man to begin with.

It would also not make sense to extend immunity to the innkeeper if he failed to provide safe premises for his guest. Most people, most of the time, can face liability not just when they intentionally cause harm or directly cause harm, but when they contribute, even indirectly, to harm. Shopkeepers can be held responsible if their premises are unsafe. Auto manufacturers can be sued for faulty designs. Hospitals can be sued for botched surgeries. As Justice Kagan asked during oral argument in last year's Section 230 case, Gonzalez v. Google, every other industry has to internalize the costs of its conduct. Why is it that the tech industry gets a pass? The answer that is sometimes given is that the business of the tech industry is speech, and that anything less than sweeping immunity will mean the end of the internet as well as the end of free speech as we know it.

But that answer is flawed in at least three ways. The first is that Section 230 has been invoked to absolve tech companies of responsibility for far more than speech: illegal firearms transactions, credit card transactions, faulty dog leashes. Second, the tech industry is far from the only speech-focused industry. Newspapers, booksellers, television stations, universities, they're all in the business of speech and they can all be sued sometimes for harmful speech. And finally, while some groups may be enjoying free speech under the Section 230 status quo — especially billionaires, white supremacists, conspiracy mongers — this freedom is not shared equally across society. Unchecked sexual abuse, harassment and threats have a silencing effect, especially on vulnerable groups, especially on women and minorities, which pushes them out of the public sphere and undermines their full participation in society. Last year in Gonzalez, the Supreme Court made clear that if Section 230 needs to be clarified, it is up to Congress to do it, and hopefully before the 26 words that created the internet destroy everything else. Thank you.

Rep. Bob Latta (R-OH):

Thank you very much. Professor Graw Leary, you are recognized for five minutes.

Mary Graw Leary:

Thank you. Thank you, Chair Rodgers, Chair Latta, Ranking Member Matsui, and members of the subcommittee for having this important hearing. I have to mention, being a lawyer, that the views expressed are mine and not those of the Catholic University of America or the University of Georgia. Narrow, limited immunity that is designed to prevent the proliferation of explicit material, to prevent child abuse, to prevent exploitation, or to protect platforms for good faith removal of such material is completely different from near absolute immunity, de facto near absolute immunity, for one industry, for a host of actions and conduct well beyond the removal of this material. It's entirely different. The former is what was intended in 1996 and the latter is what we have today. Section 230 cannot properly be understood unless we understand its context, and its context is really beyond dispute. It was developed as part of a larger landscape having to do primarily, although not exclusively, with how best to shield people and children from explicit conduct and harmful material.

And while some would like to act as though it's a standalone piece of legislation meant solely for a growing, vibrant internet, it is not. And that reality of the background is reflected in its legislative history, its text, and the contemporaneous media coverage at the time. As this body well knows, Congress was attempting first to update the 1934 Communications Act to deal with this new medium, and Congress had the wherewithal to see that the guardrails that were in place in the old medium needed to be translated into the new medium. Two visions came out, as I lay out in detail in my written comments: from the Senate, the Communications Decency Act, and from the House, the Internet Freedom and Family Empowerment Act. The discussion between these two pieces of legislation was not whether to protect against and limit this material but how best to do it. And within that backdrop we have to understand Section 230 of the Communications Decency Act, and the conference committee understood that and put them together, and it cannot be divorced from this backdrop.

It is in Title V of the Act, "Obscenity and Violence." It is in Section 230, "Protection for Private Blocking and Screening of Offensive Material." And the particular provision at issue, as has been pointed out by my colleagues, is "Protection for Good Samaritan Blocking and Screening of Offensive Material." The debate was how best to stop this material. The promise was made by the technology companies about their efforts, which they guaranteed would make it a safe environment, and that is not what we have today. And why don't we have it today? Because in litigation throughout this country, it has been interpreted in a way that gives de facto near absolute immunity, and this has resulted in many harms that have been laid out by my colleagues. The result has been platforms profiting from ventures engaged in sex trafficking, illegally selling firearms, apps with design flaws that allow predators unfettered access to children, CSAM and non-consensual pornography, fentanyl, all of which seek and receive immunity for their actions and profit from this exploitation.

A look at CSAM alone highlights in hard numbers the harms of this act outside the courthouse. In 1998, when the CyberTipline was created, the National Center for Missing and Exploited Children received about 4,500 reports. In 2023, they received 36 million reports containing more than 105 million pieces of content. And today the CyberTipline averages about 99,000 reports a day. That's the harm outside the courts. The harm inside the courts is equally devastating, and I should say inside the courthouse, not the courtrooms, because victim survivors, attorneys general, and aggrieved parties are denied access to courtrooms, denied their opportunity in court to litigate this. Why? Because of this broad immunity that is asserted as a litigation position and a policy position, and the time has long passed to make these reforms. The motto of Meta was once "move fast and break stuff." That also sounds catchy until you realize what's being broken is people and the legal regime designed to protect them. I look forward to your questions.

Rep. Bob Latta (R-OH):

And thank you very much for your testimony and Dr. Stanger, you are recognized for five minutes.

Allison Stanger:

Thank you very much. It's a real honor and privilege to be appearing before you here today, and I'm absolutely thrilled to be participating in a bipartisan hearing. I'd like to direct our collective attention not only to the past and the present, but to the future with my remarks. As we've heard today, Section 230 was designed to unleash and protect internet innovation, thereby maintaining America's competitive edge in cyberspace. It provided the runway for the takeoff of companies like Google, Twitter, and Facebook, and it created the internet as we know it today, where extremely powerful companies are effectively shielded from liability. No other American corporation, especially one with so much power, benefits from such a blanket exemption from liability. Today Section 230's unintended consequences have had a negative impact on both our children and our democracy, again, as we've already heard. So I'd like to take us in a slightly different direction and think about reforming Section 230 by repealing those 26 words that have been mentioned.

If we did so, what might we expect for free speech and commerce? I see net positives in both categories. First, for free speech, it no longer makes sense to speak of free speech in traditional terms; the internet has so transformed the very nature of the speaker that the definition of speech itself has changed. Without Section 230, companies would be liable for the content on their platforms at a stroke. Content moderation would be a vastly simpler proposition. Companies need only uphold the First Amendment, and the courts would develop the jurisprudence to help them do that, rather than putting the onus of moderation, as it is today, entirely on companies. It is sometimes imagined that there are only two choices: a world of viral harassment or a world of top-down smothering of speech. But there is a third option: a world of speech in which viral harassment is tamped down, but the ideas are not. Virality might come to be understood as an enemy of reason and human values.

I think we Americans can have culture and conversations without a mad race for total attention. Without Section 230, recommender algorithms and the virality they spark would be less likely to distort speech. Second, with respect to commerce, without Section 230, existing large social media companies would have to adapt. They'd be forced to do so. Decentralized autonomous organizations, such as Bluesky and Mastodon, would become more attractive. The emergent DAO social media landscape should serve to put further brakes on virality, allowing a more regional social media ecosystem to emerge, thereby creating new demand for local media. In an ideal world, these networks of DAOs, decentralized autonomous organizations, would comprise a new fediverse where users would have greater choice and control over the communities of which they choose to be a part. The problems of virality, harassment, and exploitation of our children could be met head on. Third, there would be positive net consequences for national security.

I can speak on that in the questions if you have interest in that topic. To conclude, while Section 230 might have been considered more a target for reform than repeal prior to the advent of generative AI, it can no longer be so. Social media could be a business success even if its content was nonsense. AI cannot. An AI model is only as good as the ideas and data it is trained on. The best AI will come out of a society that prioritizes quality conversation and communication. While an AI model can tolerate a significant amount of poor quality data, there is a limit. It is unrealistic to imagine a society mediated by mostly terrible communication where that same society enjoys unmolested, high quality AI. A society must see quality as a shared cultural value in order to maximize the benefits of AI. Now is the best time, I would argue, for the tech business to mature and develop business models based on quality. We can nudge them in this direction by repealing Section 230. Thank you for your time and I welcome questions.

Rep. Bob Latta (R-OH):

Thank you very much. And that will conclude our witness opening statements, and I'll now recognize myself for five minutes. Professor Franks, during your testimony, you detailed how the courts have interpreted Section 230 too broadly. Please explain how the interpretation of Section 230 has evolved over this time and to what extent you think the courts have applied Section 230 too broadly.

Mary Anne Franks:

Thank you. What we've seen is that this very limited immunity that was provided clearly in subsection (c)(2) has been sort of transferred over to (c)(1) and then expanded. Instead of saying, for instance, that you can't treat a particular intermediary as if it were the speaker of someone else's speech, we now see every kind of claim, speech claims, non-speech claims, basically anything one can imagine, being treated as though it were clear that any responsibility that the intermediary might have is foreclosed by (c)(1). And that I think does not make any sense, of course, in the context of the statute. But it has also meant that in terms of incentives for the industry, it is essentially saying to the industry, you can participate in any kind of reckless profit-maximizing behavior that you want. It doesn't matter what the consequences are for you because you will not have to pay for them.

Rep. Bob Latta (R-OH):

Thank you very much. Thank you, Professor Leary. How can we strike a balance between protecting free expression and holding big tech accountable for dangerous content that it promotes under Section 230?

Mary Graw Leary:

I think the balance is struck, similar to what Dr. Stanger was saying, by the reality of the marketplace. This is the only industry where things are so out of balance. So one of the ways that we can strike the balance of free speech is to really focus on the harms, the harms that are caused by Section 230. And one of the free speech aspects that is often overlooked in these discussions is the access to court, the access to civil rights for parties who are challenging the actions of these companies, and they've been completely denied and shut out. So when we talk about balancing free speech, I think we have to think about all the speech that has been shut out as a result of Section 230, all the cases that are closed off at immunity as opposed to being closed off after full litigation and hearings.

Rep. Bob Latta (R-OH):

Thank you. Professor Stanger, you said something rather interesting before you closed, about national security. Would you like to speak to what you were referring to?

Allison Stanger:

Yes, I'm happy to do so. I think it's important to realize that our internet is unique precisely because it's so open, and that makes it uniquely vulnerable to all sorts of cyberattacks. Just this week, we saw an extraordinarily complicated plot, most likely carried out by China, Russia, or North Korea, that could have blown up the internet as we know it. If you want to look up XZ Utils, Google that and you'll find all kinds of details. They're still sorting out what the intention was. It's extraordinarily sophisticated, though. So I think that when we have a Chinese company where data on American children is being stored and potentially utilized in China, that data can be used to influence our children. It can be used in any number of ways, no matter what they tell you. So I very much support and applaud the legislation to repeal, not to repeal, but to end TikTok's operations in the United States.

The national security implications are extraordinary. Where the data is stored is so important, and how it can be used to manipulate and influence us is so important. And I think the next frontier, and I'll conclude with this, for warfare is in cyberspace. It's where weak countries have huge advantages. They can pour resources into hackers who could really blow up our infrastructure, our hospitals, our universities. They're even trying to get, as you know, into the House. This House right here. So I think repealing Section 230 is connected to addressing a host of potential harms.

Rep. Bob Latta (R-OH):

In my last 35 seconds, let me ask one last follow-up on this then. When you're talking about our national security and the cyberattacks, and of course TikTok, which, as the Chair mentioned, we've passed out of here, how vulnerable are we? Are we winning this race? Are we losing this race, in my last 14 seconds?

Allison Stanger:

We are stars in innovation and so we want to keep that advantage, but our very openness makes us vulnerable. China doesn’t have to worry about freedom of speech, so they get security. We’ve got to balance the two.

Rep. Bob Latta (R-OH):

Well, thank you very much. My time has expired and I now recognize the gentle lady from California's 16th District for five minutes for questions.

Rep. Anna Eshoo (D-CA):

Thank you, Mr. Chairman, and thank you, witnesses, for not only your written testimony but your spoken testimony today. Professor Leary, you discussed the original intent of Section 230, which was born out of an intent to limit the proliferation of indecent and harmful materials on the internet, specifically to protect children from obscene and indecent material. You argue that Congress's original intent has been thwarted by the courts' erroneously reframing and de-emphasizing these purposes, therefore turning Section 230 on its head and providing de facto near absolute immunity for online platforms. What do you think Congress needs to do to return Section 230 to its original intent while also necessarily protecting the free speech rights of the platforms to moderate content? And let me just throw another question out there, and it may surprise you. Are any of you aware of the key companies putting out on the table what they are willing to do to address so many of the things that are now almost commonplace in terms of understanding and damage, et cetera, et cetera? But we'll go to Professor Leary and then any other witness, if you can answer the question I just posed.

Mary Graw Leary:

Thank you, Congresswoman. Certainly I think keeping the protection for the Good Samaritan, I think that that should stay. I think that that is a really effective tool that Congress came up with. However, when we look at this idea of publisher and how it has been so twisted, well beyond what a publisher would ever be considered to be doing, I think that's really where the problem is. As has been pointed out already today, the standard of knew or should have known, whether the design was harmful or the content is happening on your website, whatever the specific claim, in my view that's the standard most businesses have to deal with, and why this industry doesn't have to deal with that in either its design or its execution of its products I think is troubling. And that standard, as has been conceded by the other side, would be a defense at trial. And to your point, they'd be able to defend themselves, and plaintiff victim survivors would be able to prove their cases. And as Justice Thomas pointed out in Malwarebytes, this isn't to say that tech will lose every time. What this is to say is they'll have their day in court. Very quickly on your last point. That's —

Rep. Anna Eshoo (D-CA):

It’s alright, go ahead.

Mary Graw Leary:

I think it's insightful. In 2020, at a hearing on the Senate side, the representative of one of the trade associations for tech was asked that very question: what are you doing for your members? What are you putting out? And there was no answer for the members —

Rep. Anna Eshoo (D-CA):

Well do you know of anything since then? That was a long time ago.

Mary Graw Leary:

Well, I know that they are representing some things that are often so bogged down in detail that they don't actually get to solving the problem, because immunity under Section 230 is what allows them to function with impunity and with profits.

Rep. Anna Eshoo (D-CA):

I'm struck by the old adage about addressing alcoholism. The patient has to acknowledge that it's the case. Unless you acknowledge something, you're not going to pursue the cure or the fix. Dr. Stanger, I'm very interested, as Co-Chair of the House AI Caucus and also as a member, and we have other distinguished members on this committee, Mr. Obernolte, Congresswoman Kat Cammack, thank you, Kat, and others who serve on the bipartisan AI Task Force. Can you tell us why you believe it's critical to reform Section 230? I'm fascinated by this in light of generative AI.

Allison Stanger:

Yes, very simply, all the harms we've talked about are just exponentially increased by generative AI, which is automating disinformation, automating these harms, making them harder to stop.

Rep. Anna Eshoo (D-CA):

Because of the scraping?

Allison Stanger:

Because of the fact that they can move so quickly to generate new deepfakes and so forth. Not so much the scraping; that's a separate issue, but it's important to realize that. I just want to also say two things, if I may.

Rep. Anna Eshoo (D-CA):

Go ahead

Allison Stanger:

In regard to your last question.

Rep. Anna Eshoo (D-CA):

We’re over time, but go ahead.

Allison Stanger:

I have traveled around the country this past year talking about this argument to repeal Section 230, and I've been all over Silicon Valley saying this, and the reaction I get in public is complete outrage. But if you talk to people in private, they say we'll be alright. And the second point I would make is that big tech is not a monolith. We're seeing some divisions among the companies on this issue now. Eric Schmidt just came out last week for repealing Section 230, so it's an interesting moment for Congress to act.

Rep. Anna Eshoo (D-CA):

Thank you. With that, Mr. Chairman, I yield back.

Rep. Bob Latta (R-OH):

Thank you very much. The gentle lady's time has expired and the chair now recognizes the chair of the full committee, the gentle lady from Washington, for five minutes for questions.

Rep. Cathy McMorris Rodgers (R-WA):

Dr. Franks, earlier this year, a US appeals court heard a case on whether TikTok could be sued for causing a 10-year-old girl's death by promoting a deadly "blackout challenge" that encouraged her to choke herself. TikTok pushed this dangerous content to this child's For You page. Do you think this type of personalized amplification or promotion should receive Section 230 protections, and how can Congress reform Section 230 to protect children from this deadly content?

Mary Anne Franks:

Thank you. So I think that Section 230's benefit of immunity should apply very narrowly to two kinds of situations. One is when the platform is taking active steps to mitigate against harm. So the facts that you are describing clearly do not fit; this is promotion of harm, or this is indifference to harm, rather than active intervention. The other narrow situation, in (c)(1), is the question of whether or not someone is being treated as though they are the speaker of someone else's speech. And in this case, I don't think that applies either. So I don't think that the facts as you've described them would be something that you should get immunity for. The problem has been that the interpretation, broadly speaking, has been that, in fact, in situations like this, when a company can say, "this wasn't our direct issue, we didn't directly do this," that too often has been enough to get the case dismissed.

And so I think that is what needs to be clarified at this point. Even though the text of Section 230 itself does not demand that result, there has been so much case law at this point that seems to point in that direction that (c)(1) really does need to be clarified to include limitations that say you cannot use that kind of argument. You cannot get this kind of immunity from civil liability anytime you want. There have to be certain limitations, and those limitations, in my view, should include things like: you can't solicit it, you can't encourage it, you can't profit from it, and you cannot be deliberately indifferent to it.

Rep. Cathy McMorris Rodgers (R-WA):

Thank you. Professor Leary, I mentioned in my opening statement that as a mom I'm very concerned about big tech and its impact on our children. I want to thank you for your work in drawing attention to the exploitation of women and children online. How has activity such as online sex trafficking, exploitation, and pornography been allowed to exist and grow due to Section 230 protections?

Mary Graw Leary:

Thank you, Congresswoman. Well, I think we've heard the answer again and again today, haven't we? And that is courts taking what is a fairly clear text and turning it on its head, and they're being led to that point by litigants who are arguing for this massively broad immunity. So what we've seen is courts in the First Circuit, very famously a few years ago, acknowledging that even if we accept the claims of the plaintiffs in that case, three girls who were trafficked on Backpage, even if we accept that as true, that is not what Section 230 was designed — that is what Section 230 was designed to protect. Courts have allowed direct actions, partnering with illegal entities, or profiting from them to exploit children in a number of ways, and they have simply regarded those somehow as a publishing action, which they absolutely are not. So that's how Section 230 has been abused in that way, denying people the opportunity to get at the information where we could show how these companies are in fact engaged in that activity.

Rep. Cathy McMorris Rodgers (R-WA):

Thank you. Dr. Stanger, how might reforms to Section 230 impact smaller tech companies and startups compared to larger, more established platforms who have benefited from liability protections?

Allison Stanger:

That's a great question. There is some concern, sometimes expressed by small businesses, that they are going to be the subject of frivolous lawsuits, defamation lawsuits, and that they can be sued out of business even though they've defamed no one. I'm less concerned about that, because if we were to repeal subsection (c)(1) of Section 230, those 26 words, I think the First Amendment would govern and we would develop the jurisprudence to deal with small business in a more refined way. I think if anything, small businesses are in a better position to control and oversee what's on their platforms than these monolithic large companies we have today. So with a bit of caution, I think that could be addressed.

Rep. Cathy McMorris Rodgers (R-WA):

Okay. In my time remaining, Dr. Franks, I wanted to go back to this question of Section 230 applying to generative AI technology such as ChatGPT, and whether there's anything you want to add about the impacts there that you see.

Mary Anne Franks:

I'd say two things about this. One, in terms of the harms that we're seeing, especially for sexual exploitation, this is one of the most serious, clearly an urgent situation already, in terms of the damage that it is doing to women and girls in particular. And we're seeing that this is a problem not just of the apps and the services themselves, but also of the distribution platforms like X or Facebook or wherever that material happens to end up, which highlights the fact that we need to be thinking about both of those angles of the problem. And to make clear, textually speaking, generative AI giving that sort of product in response to inputs should not be the kind of thing that receives immunity, even according to the current text of Section 230, because these companies are acting as their own information content providers.

Rep. Cathy McMorris Rodgers (R-WA):

Right. Okay. Thank you everyone. Appreciate your insights on this very important topic. I yield back.

Rep. Bob Latta (R-OH):

Thank you very much. The gentle lady yields back. The chair now recognizes the gentleman from New Jersey, the ranking member of the full committee for five minutes for questions.

Rep. Frank Pallone (D-NJ):

Thank you Mr. Chairman, I have one question for each of you. So I’m going to ask you to spend about a minute and a half in response. Let me start with Dr. Stanger. I was going to say, what are the consequences of our failure to reform Section 230? But of course you say it should be repealed. So maybe I should change that to say what are the consequences of our failure to repeal Section 230, particularly for the health and wellbeing of young people, our safety, our democracy? Of course, you could write a book on this, but in a minute and a half if you could, I know you’ve touched on it, but if you want to elaborate a little.

Allison Stanger:

Absolutely. I think — I'm writing a book called Who Elected Big Tech, and I just want to dispel one potential misunderstanding here: that big companies performing content moderation follow their own rules of service. You can show systematically that they don't; it's a complicated affair. They have a big challenge on their hands, they use some AI, but if you look at what they really do, it's very politically connected to events happening here or to things that are happening with their rivals. So the idea that content moderation is just proceeding so smoothly and this is going to get in the way of proper content moderation, I think, is a myth we need to dispel. You'll hear it a lot from Silicon Valley. My research shows that is not true.

Rep. Frank Pallone (D-NJ):

Alright, thank you. Then I wanted to ask Dr. Franks about the First Amendment. Do social media platforms serve a unique purpose distinct from traditional media companies? And if not, why are First Amendment protections not sufficient for these platforms, if you will?

Mary Anne Franks:

Social media platforms do serve, or you could say that they serve, a somewhat different purpose in that they are engines of user-generated content. So when we think about newspapers, newspapers are very heavily curated. It is the responsibility of the newspaper itself to choose and pick the articles, whereas the point, in most cases, of a social media platform is to allow others to speak freely, or not freely, but to allow others to speak. That being said, we certainly have other examples that are very close to this kind of function, such as booksellers, for instance, or television programs, any kind of television station that is going to have, for instance, talk show hosts and have guests come on and give their opinions. That too is someone else's speech. And so there's nothing, I would say, unique about social media platforms. It may be that they do that more as their focus than other types of industries, but it is not so unique that it warrants having a completely different approach to their business than we would have in any other industry.

Rep. Frank Pallone (D-NJ):

Well thank you. And then Professor Leary, how would reforms to Section 230(c)(1) lead to social media companies taking more responsibility for how their platforms are designed and operated?

Mary Graw Leary:

Well, I think at this point they have been using (c)(1) to say that product design must also be a publishing activity, which, again, defies reason. So I think reforming that, or withdrawing it to preclude that kind of an argument, would be essential. And also, one thing that has resulted from this being an immunity as opposed to a defense is that we haven't developed the jurisprudence that would naturally give guidance to businesses; that has been stunted for the past nearly 30 years. Businesses today, when they want to make a decision about how to go forward and balance all these things, look and see, well, what do I know already? What are my obligations already in the real world? And how can I do my costs and benefits to decide if I'm going to go forward and how I'm going to do that? By having (c)(1) in place, giving such broad immunity, turning things on their head, it's precluded us from being able to see where those guardrails are.

Rep. Frank Pallone (D-NJ):

Alright, thank you all. Thank you so much. It’s very enlightening.

Rep. Bob Latta (R-OH):

Thank you. The gentleman yields back the balance of his time and the chair now recognizes the gentleman from Florida’s 12th district for five minutes for questions.

Rep. Gus Bilirakis (R-FL):

Thank you very much, Mr. Chairman. I appreciate it very much. I want to thank the panel as well. Whenever we talk about Section 230 or online privacy protections, I'm particularly focused on how we best protect our children, our nation's children. Children and teens spend the most time on the internet, and they are some of the most manipulated, unfortunately. In a hearing a few years ago, I had a discussion with witnesses on how Section 230 interacts with child exploitation online. We explored how special immunities are granted to online platforms that don't exist for brick-and-mortar stores when it comes to a business knowingly exploiting our children and facilitating child pornography. I want to expand on this very serious issue. A 2019 New York Times podcast reported that the FBI has to prioritize sexual exploitation cases of infants and toddlers because it cannot effectively respond to all reports. This leaves older children less protected and therefore more likely to be repeatedly abused. Ms. Leary, if the FBI cannot pursue a case against a platform due to lack of resources under current law, can a state attorney general file criminal charges under state law, or would Section 230 block that case as well?

Mary Graw Leary:

Thank you, Congressman. The way that Section 230 has been interpreted, the answer would be no. There's a provision after these 26 words we've talked about which says no state law inconsistent with this should be followed — it's not phrased quite like that — and courts have interpreted that to say states can't enforce their own criminal laws. And you are quite right: child exploitation in this country, in part due to these platforms, is exploding. We need to have multiple pressure points, state and federal, and not just civil litigation but criminal litigation, and telling states they cannot enforce their own criminal laws or their own regulations has a very challenging effect on these online forms of exploitation.

Rep. Gus Bilirakis (R-FL):

So again, just like with lawsuits brought by victims of child sexual exploitation online, there is no reason why we should be giving special immunity, in my opinion, to online platforms when they facilitate child pornography. I agree with you. It is so shameful that our own laws prevent state AGs, it's unbelievable, from prosecuting child pornography facilitators when the FBI cannot handle the cases themselves. Let's close the loophole, folks. Let's close the loophole and get more cops on the street to stop these predators. So my second question is, Dr. Franks, Section 230(c)(2) states that a provider is protected under the good faith standard for material that is obscene or otherwise objectionable. Does this mean that the provider can shield itself from liability simply because at least one of its users has flagged content as personally objectionable? And if so, would that provider protection still exist if the user flagged the post in bad faith, perhaps because they didn't agree with the position of the original poster?

Mary Anne Franks:

So the protections of (c)(2) would not be dependent on users at all, so it would not need to rest on whether or not a user has said, "this is objectionable." The (c)(2) provision says that if the provider itself finds that this is objectionable, or obscene, lewd, lascivious, et cetera, any of those kinds of characteristics, it is allowed to restrict access to that content and cannot face civil liability on that basis. And I think it's also important to note that even though that's a procedural protection that is important in (c)(2), this is really building on a foundation that actually reinforces something about First Amendment law as well, namely that these social media companies, while they might not seem like it, are private actors in the sense that they are not government agencies, they're not government agents. And so the First Amendment operates for them both in terms of what they can say and also what they don't have to say. And so the First Amendment already gives them the power to take things down, to choose not to post speech, to choose to restrict access. They can do all of those things based on their powers under the First Amendment. And then (c)(2) gives them extra procedural protection.

Rep. Gus Bilirakis (R-FL):

Okay, I guess that's all I've got. I yield back. Thank you very much, Mr. Chairman. Appreciate it.

Rep. Bob Latta (R-OH):

Well, thank you very much, gentleman yields back and the Chair now recognizes the gentleman from Florida’s Ninth District for five minutes for questions.

Rep. Darren Soto (D-FL):

Thank you, chairman. Way back in 1996, Reps. Wyden and Cox came down from the Capitol hilltop with the two commandments of the internet. Apparently Representative Eshoo was there too, which is pretty cool. Thou shalt not treat an internet provider as a publisher of content posted by another on their platform. And thou shalt not hold an internet provider liable for taking down various nefarious content in good faith, like obscenity, lewdness, illegality. Let me take you back a moment to 1996. The top web browser: Netscape Navigator. Remember those guys? Yahoo was just created a year or two earlier. Google didn't exist as a noun or a verb. Amazon was created just two years earlier and was known mostly for selling books. Facebook wouldn't exist for another eight years, and most people had dial-up internet connections of 28.8 to 33.6 kilobits per second. I was graduating from high school in '96, to date myself, and I remember explaining to adults that the internet is more than email and sports scores. My point is, it's time, right? It's been a while since that last law passed. And so we need to review common sense reforms that deal with children, our identity, and our data, while also making sure we have enough space to promote innovation. I don't take for granted the fact that the technology industry is a robust part of our nation's competitiveness and prosperity. We just need basic rules of the road as we go forward. We're going to have the option to vote on national comprehensive privacy reform to protect our data and our identities, and another bill to protect our kids. And these are going to be important issues we work on in central Florida. We saw a young man, Alex Bourget, who had his identity stolen online to make racist comments towards a Georgia state representative at the time.

Obviously it wasn't him; it cost him a research position at a local hospital, jeopardized his matriculation at a local university, and with no cause of action, he was powerless to take down volumes of false information. And since we're talking about alma maters, I'm a proud GW Law alumnus. Welcome, Dr. Franks. We're proud to have you here. For the record, I took Dr. Siegel's IP survey course. It was brilliant, by the way. It'd be great to hear from you: what rights and causes of action do you believe are just to protect citizens from identity theft? What should we do to help out a constituent like Mr. Bourget?

Mary Anne Franks:

Thank you. I think, on the one hand, the problem that you've articulated with that situation is that you have someone out there who is committing an action that is obviously harmful, and if the person knew who that was, perhaps they could try to seek relief from that person. But because of the structure and the nature of many internet platforms, that identity might be hidden. And this is something that actually benefits many social media companies, and so they actually encourage things like anonymity and a lack of tracing. So that puts the person in the position of thinking about other avenues. So if you can't find the person who is doing this to you, can you stop the distribution of the harmful content? And there we run into this problem with Section 230, because that is when social media companies will say, this wasn't us, it was some user, and we're not accountable for that.

What we would need, in addition to not allowing Section 230 to be raised preemptively in those kinds of situations, is also to remind ourselves that sometimes the internet has made harms possible that were either not possible before or not possible quite in such a dramatic manner, and that may be a situation where we need to start thinking about targeted new legislation for certain types of harms. Impersonation laws right now, as they exist, are very, very narrow. They mostly apply to people who are government officials or police officers or medical personnel. The average citizen doesn't have much to go on when someone is impersonating them, and I think that situations like the one you described suggest that we should be thinking very hard about whether we should change that.

Rep. Darren Soto (D-FL):

Well, thank you Dr. Franks. I’m concerned about protecting our personal data, our identities, and our kids and looking forward to the Chairman getting the opportunity to look at some of these bills we’ll be voting on pretty soon. Thanks. And I yield back.

Rep. Bob Latta (R-OH):

Thank you. The gentleman yields back and the chair now recognizes the gentleman from Michigan’s Fifth District for five minutes for questions.

Rep. Tim Walberg (R-MI):

Thank you, Mr. Chairman, and thanks to the panel for being here, and to my friend Congressman Soto: in 1996 I didn't come down from the mountain, but I did have a bag phone that I couldn't use, and no laptop. Well, Section 230 has grown a robust and innovative internet ecosystem in the United States nearly 30 years after its enactment. It's time, clearly, that we look at whether the current model is best serving American consumers and businesses. Big tech's behavior has become increasingly more concerning, as you've indicated, Dr. Stanger, very concerning. Illegal activity and harmful content seem to be rampant on these platforms, especially impacting the mental health and safety of our children. My E&C colleagues and I are working to address the catalyst of this issue, children's privacy, which is why this week I introduced HR 7890, the Children and Teens Online Privacy Protection Act, or COPPA 2.0, together with comprehensive privacy. COPPA 2.0 will help address the root cause of the harmful algorithms and content online, but we also need to look at how companies treat that content on their platforms. I want to thank the committee for holding the hearing today to do just that. Professor Graw Leary, regarding protecting children online: in your testimony you identified how 230 has given near absolute immunity to platforms that can be used to groom and abuse minors, among many other harms. Could you expand on how Section 230 has contributed to challenges in providing access to justice for victim survivors, particularly in cases involving online harm or abuse?

Mary Graw Leary:

Sure, and piggybacking on Dr. Franks' comments, right? We have situations in which victim survivors, and you make an excellent point, Congressman. In all other aspects when we discuss youth, we talk about the brain not being fully formed, all of the information that we know about these really vulnerable populations. And these companies take advantage of that, and offenders take advantage of that, and offenders flock to an atmosphere in which they can either anonymously or not anonymously get access to youth, offend against youth, engage in sextortion, a growing problem that the FBI had to send out a warning about this year, about how social media platforms are a vehicle for this kind of abuse. The DEA had to send out a warning last year about how social media platforms are involved in selling drugs to youth. So lots of them are doing this, and the platforms have zero incentive to clamp down on this, to clean up their atmospheres. Why? Because they're monetizing it. And there's an excellent case involving Twitter out in, I believe, the Ninth Circuit, in which they're made aware of the CSAM that is on their platform, and not only is the company not taking it down, there are links to advertisements in these images.

Rep. Tim Walberg (R-MI):

To use it and monetize it, as you've said, and make it worse. Thank you. Dr. Franks, we've seen Section 230 play out in the courts on numerous occasions. Recently, as has been mentioned, in Gonzalez v. Google, the justices declined to rule on whether targeted recommendations by social media companies' algorithms would fall outside the liability protections of Section 230. What does this mean for Congress, and how should we proceed?

Mary Anne Franks:

I think it means that to the extent that Congress was waiting to see if the Supreme Court would clarify the original intent of the statute and maybe sort of steer it back to where the path should have been, the Supreme Court has now pretty decisively said, we’re not willing to do that, or we think that Congress is better situated to do that. And I think at that point, that means if the feeling is that Section 230 has led us down a very dangerous path, Congress has to act now.

Rep. Tim Walberg (R-MI):

Almost saying that we must act, because they won’t do it. So that gives us the opportunity. Thank you very much. I yield back.

Rep. Bob Latta (R-OH):

Thank you. The gentleman yields back the balance of his time. The chair now recognizes the gentleman from California’s 29th District for five minutes for questions.

Rep. Tony Cardenas (D-CA):

Thank you, Mr. Chairman. Appreciate the opportunity for us to come together like this and have this very, very important hearing. It's affecting millions and millions of Americans every single day, and we hear the horror stories of the negative effects of what's happening, especially when it comes to little children and communities across America. We're standing at a very unique point in American history, where a number of technologies are converging to create a digital landscape that could prove to be unfriendly to democracy. Public advances in generative artificial intelligence, as well as the rolling back of social media content moderation policies, are opening the door to a boom in mis- and disinformation in our information ecosystem. Americans who are invested in the least by social media platforms will invariably bear the brunt of this. We've seen this play out over the past two decades and cycles of our elections, where Americans whose primary language is Spanish or another language were exposed to higher levels of false information online; that includes inaccurate information about access to reproductive health, vaccine safety, and election integrity.

Rampant mis- and disinformation serves to weaken democracy and embolden our adversaries abroad, as well as radical elements here in our own country. It has a real-world effect on public health, and with a major US election looming this fall, we need to be paying attention. There needs to be accountability from the platforms that we trust to connect Americans with each other and with the world around them, to ensure that information designed to harm them is not allowed to spread wildly in the name of driving engagement and record-breaking profits. That accountability also needs to lead to equitable investments in fighting mis- and disinformation in languages beyond English. Dr. Franks, in your testimony, you talk about how the protections the tech industry currently enjoys because of Section 230 have resulted in a warped incentive structure that can create profit at the expense of tremendous harm to people. I've sent multiple letters to online platforms with my colleagues highlighting these platforms' lack of investment in Spanish language content moderation, and while the responses we get are sometimes receptive to the problem, we don't see follow-through in investment or action. As things currently stand, do social media platforms have any incentive to seriously invest in Spanish language content moderation outside of a fear of public shaming?

Mary Anne Franks:

I would say that unfortunately the answer is probably not much. Public shaming can do a little bit, but we've already seen, in some of the documents and conversations that have been revealed by whistleblowers and others, that tech officials often openly talk amongst themselves about how, oh, there's a new scandal, we're probably going to get called before Congress, we're going to be asked some embarrassing questions, and then everybody's going to move on and we'll go back to making money. So I think, given the way that Section 230 has clearly been read and interpreted for these companies as essentially guaranteeing them that you won't have to face the consequences of your actions, you end up with a perfectly rational but terrible situation where profit-seeking companies think they can expand their enterprises and offer all of these services without offering any of these protections, as you're pointing out.

Rep. Tony Cardenas (D-CA):

What would an incentive structure look like that would produce a reasonable investment in non-English language content moderation on social media platforms?

Mary Anne Franks:

I think at a minimum you would have to really restrain the definition and the interpretation of (c)(1), right? The provision that's essentially saying you are not responsible, you, Facebook or whoever the company is, you're not responsible for these issues. That is, as I've mentioned before, being used to defend against any number of claims that are really far beyond anything contemplated by Section 230 in 1996. And really what you would need to show is that if you are causing harm by pushing out a product for which you have not established appropriate safeguards, for instance, it should be clear that if you are targeting and making your product accessible to people who do not speak English, or you are offering it outside of your company's own chosen language, you need to have protections in place and linguistic competence in place and cultural competence in place in order to make that a safe product. But if Section 230 is interpreted as saying you can simply throw up your hands and say, "we just offered a great service and maybe we didn't do it very well and maybe it's not that safe, it's not our responsibility," I think that particular interpretation of (c)(1) definitely has to be restricted.

We don’t treat bank robbers like that. If somebody drives somebody to a bank robbery, we don’t say, “oh, you just drove the car, you didn’t run in and rob the bank, the others did.” You’re still held accountable. You’re involved in that situation. You are integral part of and what took place, and we don’t have that for these organizations. And if you’ll indulge me, I’d just like to ask you to give us your written interpretation of what a cyber civil rights bill should look like or some of the elements thereof. Thank you very much. Thank you Mr. Chairman. I apologize I went over my time.

Rep. Bob Latta (R-OH):

Thank you very much. Gentleman’s time has expired. The chair now recognizes the gentleman from Georgia’s First District for five minutes for questions.

Rep. Buddy Carter (R-GA):

Thank you, Mr. Chairman, and thank all of you for being here. This is something that this committee, and particularly this subcommittee, is taking very, very seriously. Look, my daddy used to tell me, when you don't do something, you're doing something, and if we don't do something, we're going to be doing something. So we've got to address this, and we recognize that, but we want to do it right. I don't want to stifle innovation and I don't want to stop the progress that we've made. The internet is phenomenal, but at the same time we've got to address this, and we want to do it in a responsible way. But it has changed. It's changed since 230 was written. We all know that. And it's something that's kind of a heavy lift, if you will, but certainly something that can be done. Dr. Franks, I'll start with you. Algorithms have evolved over the years, and over the last decade they're used by the social media platforms, but sometimes I think they're using them as an excuse, as a crutch, if you will. It always seems that we blame everything on the algorithms. But there was a Gallup poll this past February that said the average teen spends four hours a day on the internet. Four hours a day. Unbelievable. And that's why we take this so seriously, and we know that there have been studies that have shown that the increase in time spent on the internet has resulted in increased mental health issues. What's your opinion of algorithms and algorithmic recommendations? Do you consider algorithmic recommendations to be speech protected by the First Amendment?

Mary Anne Franks:

Thank you. I think that's a fairly complicated question, but I will say that if we're keeping it at the category of algorithmic sorting generally, that's a very large category that in some cases can be used for good as well as for ill. So the reason why I am being cautious here about saying algorithms are good or bad is, one, that they have such a vast array of uses and can be deployed in so many ways. I think we'd want to be very narrow, and we'd want to be very focused, about the kinds of algorithms that we think are malicious. And I want to note that in that (c)(2) provision, the Good Samaritan provision of Section 230 that talks about restricting access to harmful content, what we don't really think about sometimes, but I think is important to think about, is that companies are fully capable of using algorithms for good in that sense. That is, you could imagine the opposite of the bad situation we mostly hear about, that kind of terrible rabbit hole where there's a teenager who wants to look for diets and suddenly she's being fed all this information about eating disorders. What if we do the opposite, right? And some companies have tried to do this: to pick up on what a user's vulnerabilities are and move them away.

Rep. Buddy Carter (R-GA):

Well very quickly, let me ask you, should the platforms be shielded from liability when they’re using algorithms?

Mary Anne Franks:

If they’re using algorithms to restrict access to harmful content, which is something that these companies could do, I’d say that falls squarely within ©(2) and should not categorically be seen as a bad thing or something that is not deserving of protections.

Rep. Buddy Carter (R-GA):

Okay, Ms. Leary, go Dawgs. I'm sorry that my colleague Cammack is not here to hear that, but she's a Florida Gator, and then I've got a Tennessee Volunteer up here. All these people, they don't get it. But anyway, thank you for being here. Let me ask you: there are a lot of parents who rely on third-party apps to help them protect their children when they're using social media. In fact, we've got some legislation, some bipartisan legislation in this committee that we're working on, that deals with that. But as you know, Section 230 requires the platforms to notify users of parental protections that are commercially available, and that seems to be largely ignored. Why is that, and why don't we force them to do that?

Mary Graw Leary:

Well, I’m hoping you will force them to do that, but why is that? Again, what’s the incentive? The incentive is to take this highly vulnerable group and I think they spend a lot more than four hours a day here and to get as much content in front of them and keep them on as possible. And so to have really solid age verification to really honor their privacy rights, that will cut into profits because again, it’s the monetizing and I wanted to clarify something I said before. I didn’t want to suggest there was a link to CSAM on the particular accounts. What I’m talking about is they want to traffic eyes to ads, right? And that’s what I mean when I say they monetize things, the ads for whatever are put on things that are popular that they’re trafficking folks to. Of course, I just wanted to clarify that.

Rep. Buddy Carter (R-GA):

That’s business, we all understand that, it unfortunately has negative impact as well. Well, again, thank all of you for being here. This is extremely important. We want to get this right, but we got to do something. I feel very strongly about that and I think my colleagues here feel the same way. So thank you and I yield back.

Rep. Bob Latta (R-OH):

Thank you. The gentleman yields back and the chair now recognizes the gentle lady from Texas’s Seventh District for five minutes for questions.

Rep. Lizzie Fletcher (D-TX):

Thank you so much, Chairman Latta, thanks for holding this hearing. This is a really important hearing; it's been very useful and helpful, I think, for all of us here. Certainly I've appreciated the testimony of all of the witnesses, and I appreciate you taking your time and sharing such detailed recommendations and thoughts in your written testimony as well. I want to follow up on a couple of things that we've heard today or things that were in your testimony, and Dr. Franks, I want to start with you. Your testimony suggests, in fact I think it says outright, that the courts have basically gotten it wrong, and now we've got 30 years of the courts getting it wrong consistently, and that has become precedent and that is what's being followed. And so I really appreciate the specificity of your recommendations at the conclusion of your written testimony about specific changes to the provisions to potentially address the existing issues that we see. I guess it seems to me that part of the issue is that the courts have gotten the very plain language of Section 230 wrong in the first place. So what do you suggest, or can you share with us other things you think we can or should do to avoid that problem in whatever we try to craft to address the situation now?

Mary Anne Franks:

Thank you, and I think the most important aspect is to really focus on (c)(1), because that seems to be where the problems are coming from, and to largely leave (c)(2) where it is. In terms of the problems that we've seen over and over again from the courts on (c)(1), I think there are two major issues. One is that even though protections under (c)(1) and Section 230 generally are often touted as free speech issues, as protections for free speech that are necessary to foster dialogue and encourage public discourse, the terms in (c)(1) say information, and I think companies have really taken advantage of that ambiguity to invoke Section 230 for any number of actions. When you look at the text of (c)(1), it says publisher, speaker, information; that should be speech, but a lot of these things would not be considered speech if they came up outside of the online context, or at least that would be contested.

So I think clearly what needs to be limited there is: take out the word information and put in speech, and make it clear that, as an initial threshold matter, a company cannot invoke Section 230's protections unless we're talking about speech, and it is the obligation of the company itself to actually show that it is in fact speech that they're talking about. And the other is that there needs to be a limitation on this kind of immunity if it's going to be given at all under (c)(1). It's got to be limited to those kinds of social media companies and platforms that are not soliciting, encouraging, profiting from, or being deliberately indifferent to what they know is harmful content.

Rep. Lizzie Fletcher (D-TX):

Okay, thank you for that. I think it's very helpful, and again, I appreciated your recommendations. I want to turn now to Professor Leary, because I appreciated in your testimony and discussion that because of Section 230, we really haven't developed the case law as envisioned. And so I'm wondering, as part of this process, what you think we could try to do, or what you suggest we might try to do, to fill that gap in the 30 years of case law as a part of what we're doing here?

Mary Graw Leary:

I think a key part of the gap is to get rid of immunity that says we're not having the lawsuit: you can't come in, you can't prove your case, we don't have to give discovery over to you, and the public cannot learn and get answers to some of these questions that this committee is struggling with. It's a defense, and we go into court, and sometimes plaintiffs will be able to prove their case and sometimes they won't, but that jurisprudence will develop. Currently we have information about these websites almost entirely from either a two-year investigation of Backpage over on the Senate side or from the whistleblowers that have come forward, not from litigation, and typically we would get it from litigation. That's where we learn about product design, about what's happening in these algorithms, et cetera. So I think that that's a key space where we could develop that jurisprudence.

Rep. Lizzie Fletcher (D-TX):

Okay, thanks. And I have about 40 seconds left. I also wanted to follow up with Professor Stanger. You just mentioned in your opening that you had some thoughts on national security. Did you want to share more? You piqued my interest. I've got about 30 seconds for you to share your thoughts on that, or some of them.

Allison Stanger:

Just in summary, I think some of our enemies are very much interested in subverting our infrastructure and also influencing public opinion in the United States by working through and exploiting Section 230 and limited content moderation. So that’s something that I think we need to stop.

Rep. Lizzie Fletcher (D-TX):

Okay. Perfect timing, Mr. Chairman. I yield back. Thank you all so much.

Rep. Bob Latta (R-OH):

Thank you. The gentle lady yields back and the chair now recognizes the gentleman from Florida's Second District for five minutes for questions.

Rep. Neal Dunn (R-FL):

Thank you very much, Mr. Chairman. I believe all my colleagues here in the committee agree we want the internet to remain a free and open place, but since 1996, Section 230 has operated under a light-touch regulatory framework allowing online companies and providers to moderate their content heavily under an immunity shield, and I think many of us have seen some problems with that regulatory framework. The American public gets very little insight into the process when content is moderated, and they have little recourse when they're censored or restricted. Recently, Americans experienced a high level of online policing from big tech during the last election, and people saw a lot of stories being taken down immediately from Twitter and Facebook and whatnot. It's Congress's job to make sure that big tech companies are not obstructing the flow of information to benefit a political agenda and to ensure a free and competitive news market.

It’s our job to promote transparency and truth. As a member of the select committee on China and the Speaker’s AI task force, I have major concerns about the risks to our internet ecosystem from the Chinese Communist party and other adversarial nations. Our younger generation, in addition, has never been more susceptible to foreign propaganda. Dr. Stanger, you stated in your testimony that liberal democracy depends on public deliberation to make citizens feel connected to a common enterprise that they feel they had a hand in shaping, but the techno authoritarianism that we see on display in China, especially sacrifices individual rights on the altar of communist party ideology. How can we ensure potential 230 reforms will safeguard Americans from that kind of nefarious online action? I mean would amending the law to exclude companies with indirect ties to CCP? Is that a start?

Allison Stanger:

I think if you were to repeal Section 230(c)(1) and hold companies liable, you could get at a lot of these problems quite directly. One point I think that's really important to read into the record that might be a surprise to some of you is that there are two versions of TikTok. There's the version for the United States and there's the version for China. And the version for China optimizes for things like wellbeing, test scores. It limits the number of hours on the platform. We all know that the American version is something else entirely. If you spend any time on it, it is super addictive and it's definitely not raising test scores or optimizing for wellbeing. I think that speaks volumes about the differences in values between China and the United States in this issue area.

Rep. Neal Dunn (R-FL):

I loved your comment on national security as well. That was very good. Dr. Franks, I was recently at a conference where major players in the generative AI space were talking to us, and by the way, your testimony was very helpful, I thought, in explaining Section 230 and the way it was working. Some of these speakers were very hesitant to discuss what data their algorithms or large language models are actually trained on, but they were very clear that they didn't want to be held liable for the output of those same algorithms. Do you think that clarifying how Section 230 applies to AI outputs could incentivize those platforms to invest in higher quality training data? That was to Dr. Franks.

Mary Anne Franks:

Okay, thank you. Sorry, we're a little confused about that, but yes, I think that one thing that should be made clear, and I do want to emphasize this, is that a commonsensical reading of Section 230 would suggest that generative AI would not get protections, because there's that distinction made in Section 230 between being a provider of these services versus being an information content provider, and a single entity can have both of those functions at different times. If you are taking in inputs and you are giving back something new that didn't exist before, some speech that was not there before, an image that didn't exist before, it's quite clear that that is your own product, and therefore the intermediary immunity from liability simply shouldn't apply. That being said, many of us have been pointing out for 20 years that it should have been obvious that this or that interpretation, or this particular defense by a particular company, shouldn't have made sense under Section 230, and yet courts accepted it anyway.

Rep. Neal Dunn (R-FL):

Anyway, I like the way you pointed it out, actually, in your testimony, that we have turned this law on its head with common law, and now we need to, I think, get it back on track with statutory law. And so I thank you, by the way, all three members of the panel. I think that you've really helped us with clarification on 230, and I do see our responsibility to follow some of these guidelines. Thank you very much, Mr. Chairman. I yield.

Rep. Bob Latta (R-OH):

Thank you. The gentleman's time has expired and he yields back, and the chair now recognizes the gentle lady from Illinois's Second District for five minutes for questions.

Rep. Robin Kelly:

Thank you, Mr. Chair, and thank you for holding this very important hearing, and as you've heard from my colleagues, this is something that Democrats and Republicans agree on: it's time to reevaluate Section 230 of the Communications Decency Act. Dr. Stanger, as chair of the Congressional Black Caucus Health Braintrust, I was particularly drawn to the section of your testimony where you discuss Section 230's negative effects on human wellbeing. Could you please explain how Section 230's blanket immunity from liability for social media platforms can contribute to an increase in extremism and hate in society at large?

Allison Stanger:

Yes, thank you for that question. I think we've gone over some very obvious harms, but I think much of it ties back to the immunity from liability but also the ad-driven business model, which means that the algorithms optimize for engagement, the amount of time on the platform, and what we've learned is that human beings are most engaged when they're outraged. So this really produces a kind of race-to-the-bottom-of-the-brain-stem dynamic rather than the kind of robust public square we would like to see. I think the problem in a nutshell is that we have yet to acknowledge that we have a national virtual public square as it stands, and it's a free-for-all. It is not encouraging the kind of well-reasoned, respectful argument, the agreement to disagree, all of these things that make America great, and I think reforms of Section 230 would help us get that back.

Rep. Robin Kelly:

Thank you. Closely related to extremism and hate, I think there should be grave concerns about the influx of disinformation on social platforms, especially when it has adverse effects on some of the most vulnerable members of our society. Dr. Franks, your written testimony states that Section 230 was intended to enable and incentivize online intermediaries to engage in moderation and other content management practices to protect users from harmful content. However, my concern is that Section 230(c)(1) is an incentive for social media platforms not to act even when they know their platforms are spreading harmful content. So how can we best ensure that social media and other internet platforms that choose not to moderate or remove harmful content are ineligible for the immunity protections provided by Section 230?

Mary Anne Franks:

I think what we do have to do at this point is think about how (c)(1) is providing that kind of immunity, which, as I've attempted to illustrate, really does undercut the whole idea of (c)(2). If there's going to be a benefit for engaging in voluntary good behavior, but you're going to get the same benefit if you do nothing, and even if you do terrible things, then obviously that's inconsistent. So there has to be a clarification made about (c)(1) that says you cannot interpret it in a way that will make it undermine the goals of (c)(2). And at a minimum, that means you cannot be profiting from harmful content, and I think it also means you cannot be an indifferent bystander.

Rep. Robin Kelly:

Thank you so much. And Professor Leary, Georgia State University,

Mary Graw Leary:

University of Georgia.

Rep. Robin Kelly:

Oh, oh, because my daughter graduated from —

Mary Graw Leary:

Hence the dog reference, which I wouldn’t have understood up until this semester.

Rep. Robin Kelly:

I was going to say my daughter graduated from GSU. Across America, hundreds of state AGs, school districts, families and parents of children who have been hurt, or worse, killed, as a result of dangerous and addictive social media platforms have filed cases seeking accountability for poorly designed social media products. Do you agree that all of America's children and young people deserve a safe, non-addictive social media product, and how would reforming Section 230 help in this effort? I know you agree.

Mary Graw Leary:

Yes. I was going to say that's an easy question. Yes, and I would note that just a couple days ago, some tribal nations joined in these lawsuits that we're seeing, with over 40 attorneys general, against some of these platforms, making some of the very arguments that you are pointing to. So I do agree, and again, I think the reform involves (c)(1), as has been said, and talking about knowing or should have known or deliberate indifference. We might be able to have disagreements about the level of mens rea, but right now there is no mens rea, right? And the way that we hold people accountable is we say there's a standard of care that you should have met in designing your product, making it not addictive, et cetera, and whatever other feature, and you needed to abide by that. The problem is we have no idea what's going on behind all of these things because all of these suits have not been able to go forward.

Rep. Robin Kelly:

Thank you and I’m out of time. I yield back.

Rep. John Curtis (R-UT):

The gentlewoman yields. The chair now recognizes myself. I'm John Curtis from Utah, and I'm really pleased to be with you today, Dr. Franks. I'm going to start with a comment you made, but I'd like all of you to respond to this, and let me try to explain my thought process. In your opening remarks, you went down kind of into the weeds of Section 230 and talked about a situation that in my mind I envision as what I would call a community bulletin board, where, with some exceptions that we've all agreed on, I could come up and post something on that bulletin board and anybody could walk along and see that posting that I made. And in my opinion, that's what happened many years ago as these social media platforms were coming up: I could get online and I could see my friend from high school if I wanted to.

I could go seek that out and I could find my friend from high school. I was surprised one day when I logged on and I no longer saw my friend from high school, but I was served information. In other words, somebody had gone to that bulletin board and moved the information around and changed what I would see. If you follow a continuum from those early days of social media to more and more algorithmic interaction, along that continuum all the way over to ChatGPT and AI, where I think we've discussed in this hearing Section 230 wouldn't apply. The one thing that I don't really think we've talked about so much today is the real trigger point, the point at which these companies stop being a bulletin board and start changing the algorithms to dish up what I see. Dr. Stanger, you mentioned that we're motivated if we're angry, so I stay on longer. And I'm just wondering if a definition of when Section 230 applies and doesn't apply is more tied to these algorithms and to when companies move away from simply a community bulletin board to where they're now deciding what John Curtis sees and how long they see it. Does that make sense? I'd love you all to comment on that. Dr. Franks.

Mary Anne Franks:

So my hesitation about the algorithm, the sorting, this kind of question is that that category, I think, does encompass a lot of different things that a social media platform could do, or a search engine for that matter, and some of those things are actually quite beneficial, I think, on the one hand. And on the other, I do think (c)(2) is a clear immunity provision that says if you're doing something to restrict access to harmful content, you should be getting this protection. I think some of those algorithmic sorting functions can be used in that way, not nearly as often as they should be because there's no real incentive for these companies,

Rep. John Curtis (R-UT):

And I’ll give you both a chance to comment, but I want to jump on and noodle this for just a minute, so I’m not sure if this is what you’re saying, but in essence, sometimes when they make decisions, they show us good things and sometimes when they make decisions they show us bad things. And I guess my question is —

Mary Anne Franks:

They can also direct people away from bad things. That is to say, if you could design an algorithmic system that actually identifies that, instead of looking for information on eating disorders because you are a researcher, you may be a vulnerable 14-year-old girl, it could be directing you away from that.

Rep. John Curtis (R-UT):

And my only concern for that is then somebody is making a decision about when that’s good or bad. But I do appreciate your opinion, professor.

Mary Graw Leary:

I’m hearkening back to the point of Justice Thomas, right in his statement in the denial of certain in Malwarebytes where he reminds us the reason why there was a distinction in liability for publishers versus others is the publisher again would know, know that information and maybe a bulletin board wouldn’t, right? A distributor wouldn’t. What you’re describing sounds a lot more like a situation where there’s knowledge about the person seeking the information, there’s knowledge about the information available somewhere in that toxic mix, there is a financial incentive to get certain information in front of that individual and not in my mind, that sounds a lot more like a situation of a publisher than it does just a distributor. Now, I don’t disagree with Dr. Franks’ point, I’m just sort of highlighting I think what you’re getting at Congressman, which is, how things looked in 1996 and who was what is very different today.

Rep. John Curtis (R-UT):

Yeah, and well, I want to give Dr. Stanger time as well, so please,

Allison Stanger:

Yeah, no, I love the way you framed that, because it really captures the move from the original internet to web two, the social media internet. In the first world, you've got that bulletin board: you're speaking out, your voice is heard, everybody's equal. In the second one, you've got an algorithm that's intervening between your voice and what people actually see, and that is something else entirely. It's important, I think, to realize that Section 230 currently covers recommender algorithms, content moderation, and search. They're all immune, and that's a very sweeping mandate.

Rep. John Curtis (R-UT):

Sadly, I’m out of time. Thank you all for your comments and for being here today. I yield and the chair recognizes Ms. Dingell from Michigan.

Rep. Debbie Dingell (D-MI):

Thank you, Mr. Chairman. I appreciate this hearing being held today to discuss the harms many of us see on the internet these days, and thank you to all of the witnesses for testifying. In today's digital age, online events directly impact our lives. Whether it's cyberbullying, mental health issues, explicit threats, or the dissemination of false information, online content can result in tangible harm, and it is resulting in tangible harm. Our recent bipartisan concerns in this committee over TikTok underscore this reality. We should be focusing on the direct human impact and imminent threats posed by such content to our communities. We shouldn't have to accept the hate, the misinformation, or the violent language circulating online, as it inevitably infiltrates our communities, often with severe consequences. As we're all aware, courts have interpreted Section 230 of the Communications Decency Act to give tech companies broad immunity, allowing them to evade accountability for what occurs on their platforms.

Section 230 deserves scrutiny, as the internet has changed dramatically since this was passed 25 years ago. However, we've also heard and know that some forms of content moderation can result in censorship of free speech. We have to strike a careful balance, preserving free expression while ensuring companies and platforms effectively shield users, especially our vulnerable populations like our children, from harmful or explicit online content, and we must hold them accountable when they fall short. Dr. Stanger, social media companies have an incentive to prioritize controversial content to drive user engagement and therefore ad dollars. I'm interested in why they would fail to act when they know their platforms are harming people, especially kids, by allowing them to find and then pushing them to information on suicide, eating disorders, and the like. Why does Section 230 act as a disincentive for these companies to take down the kind of information that we have proof is harming people?

Allison Stanger:

The simple answer is that they're immune from liability, and so it's very easy to appear to respond when Congress is shining a spotlight on that activity. You'll see, I haven't done this, but you can track the number of trust and safety employees at companies. They shoot up after a big incident, but then when the attention moves elsewhere, they cut the trust and safety employees. So the simple answer to the question is there's just so much money to be made, and it's also a massive undertaking. These are enormous companies. Meta, I think the statistic is in my testimony, they had —

Rep. Debbie Dingell (D-MI):

Got it there. I’m going to keep asking questions, so thank you for that. I’ve only got a few minutes. Dr. Franks, could you expand on how the application of Section 230 allows these companies to make design decisions that they know result in tangible harm including vulnerable populations?

Mary Anne Franks:

Well, due to this expansive version of (c)(1), or the interpretation of it, instead of being limited to things like defamation and speech that are clearly countenanced by (c)(1), companies have been able to make the claim that essentially everything they do, every choice they make, even about their own platforms, counts as content that is covered under (c)(1). That shouldn't be the case, because it doesn't seem to be supported by the text or the history, but it has been successful in the courts.

Rep. Debbie Dingell (D-MI):

Thank you. Professor Graw Leary. Does Section 230 effectively shield social media companies from accountability for the negative consequences stemming from the content on their platforms?

Mary Graw Leary:

It absolutely does, for the reasons that we have stated: in their design, in their failure to respond when they're put on notice, and in their failure to be transparent as to how they do things. All of that, they're shielded from. They have absolute profit motive and zero accountability.

Rep. Debbie Dingell (D-MI):

Thank you. Dr. Franks, what reforms do you think would realign the incentives for these companies to act responsibly?

Mary Anne Franks:

I think at a minimum, focusing again on (c)(1), because that is driving most of the problems here, to ensure two things. One is that we very much make sure that it is limited only to speech, so we are not going to countenance arguments, for instance, that this is going to cover things that would not be considered to be speech in any other context. And secondly, that the immunity would not be available to any platform that has knowledge, or should have knowledge, of harmful content on its platforms and is doing nothing to stop it even when it easily could, and certainly not if it is profiting from it or exploiting it or soliciting it.

Rep. Debbie Dingell (D-MI):

Thank you, Mr. Chairman. I’m out of time, so I’m going to be submitting more questions for the record and I yield back

Rep. John Curtis (R-UT):

The gentlewoman yields. The chair calls on the gentleman from Pennsylvania, Mr. Joyce.

Rep. John Joyce (R-PA):

Thank you, Mr. Chairman, for holding this hearing today on Section 230, and thank you to our witnesses for appearing. As a doctor, I am acutely aware of how important today's hearing is, particularly concerning children's mental health. For years, harmful content online has been linked to loneliness not only in children but in adults. Bullying, explicit materials, and violent content have been allowed to stay up on platforms for users to see regardless of age. We owe our children a safer experience online. While the Communications Decency Act attempted to do so by incentivizing good faith behavior by platforms, unfortunately, this has not been put into action. Instead, harmful content still persists on platforms, and it is our congressional duty to focus on necessary and balanced reforms. Professor Leary, in your testimony you mentioned the original intent of Section 230 of the CDA was to, I'm quoting, "protect children and families from explicit content." Do you believe that Section 230 is standing up to its original intent to protect our children?

Mary Graw Leary:

No, and I am not being flip. I think the numbers on CSAM that are outlined in my written testimony, in more detail than what I said orally, demonstrate categorically, quantifiably, absolutely no. The reports to the cyber tip line about what our children are experiencing, in exploitation and exposure to CSAM and other obscene material as well as being put into these images, make it very clear that that is amplified by the lack of liability for these companies under Section 230.

Rep. John Joyce (R-PA):

Let me allow you to clarify that. You first said yes, but my question was, is Section 230 standing up to its original intent? What is your answer: is it standing up to its intent?

Mary Graw Leary:

I thought I said no, but I could be wrong.

Rep. John Joyce (R-PA):

Thank you. So thank you. I think your explanation was no, and I agree with that. What consequences, Professor Leary, do you foresee for children if big tech continues to allow harmful content to be pushed onto our kids?

Mary Graw Leary:

Well, the research, and I believe I cite what I think is a very comprehensive study from the Canadian authorities on the effects on survivors of CSAM, shows the effects are really lifelong. To quote Dr. Franks' discussion of victims of so-called revenge pornography or non-consensual pornography, they're lifelong in every aspect: psychological, physical, emotional, et cetera. And I would again direct you to the testimony of the Vice President from NCMEC a couple of weeks ago talking about AI. There's a whole new dimension now of how children are being harmed by the use of generative AI in creating more CSAM from innocuous pictures of youth, or of youth who've been in CSAM and now there's another version of it coming out. So another dimension on top of what are already lifelong effects from this uniquely pernicious victimization.

Rep. John Joyce (R-PA):

Dr. Leary, in cases such as Doe v. Twitter out of the Ninth Circuit, Section 230 was interpreted to shield platforms even when they are in knowing possession and distribution of child sexual abuse material. Do you think the Ninth Circuit got it wrong, or does Section 230 need to be amended to prevent similar court decisions?

Mary Graw Leary:

I think both of those are true. I think that there are cases where there is precise and exact knowledge of the specific source of the claim in the litigation, and the companies have still gotten immunity. I think that is wrong textually, and I think, as we've demonstrated, so many courts throughout the country have gotten it wrong, because tech companies have been advocating this litigation position, that Congress needs to act. One trend we're seeing is that it's not just in dissents that courts are saying, "I really think we need to revisit 230." We're seeing it in concurrences, where individual judges are saying, this is wrong, but I feel compelled by all of our precedent to rule this way, but I want to acknowledge I think this is wrong. So we're seeing this voice come up in a slightly different way, which I think speaks to how courts are feeling: their hands are tied.

Rep. John Joyce (R-PA):

So in my remaining few minutes here, do each of our witnesses feel that Congress needs to act to create and reform this law to protect our kids? This is a simple yes or no.

Mary Graw Leary:

I’m happy to start and say, yes, Congress needs to act.

Allison Stanger:

I would say yes, and my reform would be repeal of (c)(1).

Mary Anne Franks:

I would say yes, and not only children, but everyone.

Rep. John Joyce (R-PA):

I thank each of our witnesses for appearing here today. And Mr. Chairman, I yield.

Rep. John Curtis (R-UT):

The gentleman yields and the chair recognizes the gentlewoman, Ms. Clarke from New York.

Rep. Yvette Clarke (D-NY):

Thank you very much, Mr. Chairman. I thank our ranking member for holding this important hearing, and let me also thank our expert panel of witnesses for joining us today to examine one of the key laws underpinning a collective shift towards an increasingly online society. While I am appreciative of the opportunity to speak on these important issues, I'm experiencing a bit of deja vu, or if you will, a Groundhog Day phenomenon. This committee has worked for years now to better understand the limitations of the current regulatory infrastructure and combat the spread of harmful content on social media. And we've watched for years as social media platforms have shifted from chronological ranking to a more targeted user experience reliant on algorithmic amplification, a process that remains opaque to users and policymakers alike. As I said in a hearing this very committee held over three years ago, the time to act is now.

This use of algorithmic amplification, now coupled with the rise in artificial intelligence, has far too often resulted in discriminatory outcomes, the promotion of harmful content, and now, with generative AI, unprecedented threats to our democracy and electoral processes. Unfortunately, regardless of whether these outcomes were intended or not, many in big tech have chosen to pursue business models that prioritize profits over people and are using laws intended to keep folks safe online to shield themselves from liability. That's why I've introduced the Civil Rights Modernization Act in the past and will do so again in the coming days. If passed, the Civil Rights Modernization Act would ensure that American civil rights are protected online by making it clear that Section 230 does not exempt social media platforms from adhering to civil rights laws, particularly in the case of targeted advertising. With so much of our society, from education and healthcare to economic opportunities, shifting to the digital realm, we must take greater care to ensure that technological innovation does not serve to unwind over a century of hard-earned civil rights, especially for communities of color. Having said that, my question for any or all of our witnesses today is about the interplay between Section 230 and artificial intelligence. Do you believe that the proliferation of AI tools such as LLMs and chatbots has exacerbated the shortcomings of Section 230, and if so, how, and how can we best combat this? So Dr. Franks, do you want to start?

Mary Anne Franks:

I’d be happy to thank you, and thank you for those comments about civil rights because I think this is exactly the message that does need to be reinforced about the ability of this interpretation of Section 230 to roll back really important progress that has been made and how dangerous that is, especially for marginalized communities. And I do think the answer to your question is yes. What these proliferation of these technologies have done has democratized the use of really terrible tools and practices that used to be expensive, hard to operate and has now made it possible for anyone to do this and to spread this kind of harmful information on these platforms in ways that were simply not possible a decade or so ago. So I think that this needs to be made clear that even though I think as many of us has reinforced on this panel, that Section 230’s terms should not apply to generative AI, it seems clear that we cannot rest on the assumption that that will mean that this will not happen in the courts, and therefore I do think one of the explicit targets for reform for Section 230 needs to be to spell this out to say that for generative AI, they do not get the protections of this immunity.

Mary Graw Leary:

I would agree, and I would just highlight a very concrete example related to your question, Congresswoman Clarke, which is mentioned in my testimony. There was a study out of Stanford that pointed out that actual CSAM has been found in the data collection for the large language model of one of the generative AI systems; I don't have the technical terms. It's worked its way in there, and once it's in, it's very hard to get out. So there's a concrete example of what you raise.

Allison Stanger:

Yes, I thank you too for drawing our attention to this important issue of civil rights. I think failure to act on Section 230 will allow for automated disinformation, misinformation, harassment in a turbocharged way with generative AI. And I think another thing Congress needs to think about in the generative AI age is the vast inequalities that some of these technologies are producing. One example, I had to put in my syllabus this year that students could use the free version of ChatGPT, but it was a violation of the honor code to use the subscription version. Why? Because this is access to knowledge and I do not want my students having unequal access to knowledge.

Rep. John Curtis (R-UT):

The gentlewoman yields and the chair recognizes the gentleman from Georgia, Mr. Allen.

Rep. Rick Allen (R-GA):

Thank you, Mr. Chairman, and I'm glad we're convening this hearing. I want to thank our witnesses for being here today; it's been very informative. Big tech currently has, and we've said this over and over, unilateral control over much of public debate today, and it's concerning a lot of Americans. The tech landscape has evolved dramatically since 1996, and I'm glad we're, like I said, holding this hearing today, and I thank you for giving us the opportunity to learn more about the potential reforms needed in Section 230. Dr. Franks, you noted in your written testimony that the New York Times and Fox News have no special sweeping immunity from liability the way the tech industry does. Newspaper and television industries have not collapsed under the weight of potential liability, nor can it plausibly be argued that the potential for liability has constrained them to publishing and broadcasting only anodyne, non-controversial speech. My question is, do you think that a provision designed to incentivize screening and blocking offensive materials should extend to shielding internet companies from liability for harms arising from algorithmic recommendations and amplification of content?

Mary Anne Franks:

I want to reinforce again that algorithmic amplification and sorting is a very large category, and therefore I would be hesitant to say that there is something about it specifically that should lead categorically to a situation where there is no immunity under Section 230, precisely because (c)(2) makes so clear that any attempt to restrict access to harmful content is something that should be rewarded with that immunity. Those algorithmic sorting systems that are so often used for bad purposes can also be used for very good purposes, if only there were an incentive for companies to use them this way, to actually divert people away from some of that harmful content.

Rep. Rick Allen (R-GA):

A key cause of today’s toxic internet environment is that digital platforms are shielded from liability. In a world where clickbait attracts greater attention and advertising revenue, the absence of liability creates a perverse incentive for platforms to surface, disseminate and amplify low quality, outrageous, addictive, harmful, and illegal content. Dr. Stanger, you stated in your written testimony that while Section 230 perpetuates an illusion that today’s social media companies are common carriers like the phone companies, they are not; unlike [indistinguishable], they curate the content that they transmit to users. You also noted that Section 230 has created fictitious platforms that are in reality publishers, since they curate content via recommender algorithms and content moderation. Traditional media is held liable for publishing defamation and untruths while big tech companies are accountable only to their shareholders. Dr. Stanger, what do you think Congress should do to right the imbalance that currently exists?

Allison Stanger:

Thank you very much for that question. I think it’s really important to realize the way our media ecosystem has changed as a result of Section 230. Traditional media is dying. You’re seeing it reduced to a few big newspapers, and then there are all kinds of special bulletins that you can subscribe to for money to get your news in a particular issue area. Now, an institution like the New York Times or the Wall Street Journal is meticulous about going over its sources to be sure it is not publishing something libelous, because that could be ruinous. So in a sense, they’re clearing material, then social media posts it, but guess who profits from it? Guess who gets the ads? The social media companies. This seems to me to be a problem: traditional media is liable for what it publishes, but the social media companies are not. The way to deal with this is to abolish section (c)(1). I’d make one other point about recommender algorithms. They really are intervening in ways we can’t understand. I would like it very much if I got to choose my own algorithm, and that’s something you could think about as well.

Rep. Rick Allen (R-GA):

They’re addictive. I mean, certain people have addictive behavior, and then all of a sudden they’re addicted and they can’t do without this stuff. A question for the entire panel. Well, I’m out of time, but maybe you can answer this after the hearing in writing: should generative artificial intelligence receive Section 230 liability protections? If you could just give us an answer to that after the hearing, since I’m out of time. I yield back.

Rep. Bob Latta (R-OH):

Thank you, the gentleman’s time has expired and the chair will recognize the gentlelady from New Hampshire’s Second District. But before she begins her line of questions, I just want to say that we’re sorry to hear of your retiring from the House and from this committee, and we’re going to miss you.

Rep. Ann Kuster (D-NH):

Thank you.

Rep. Bob Latta (R-OH):

All the best in the future, and you’re recognized for five minutes.

Rep. Ann Kuster (D-NH):

Thank you, and I’m very grateful for your call. I apologize. I was in Italy and didn’t get to return it, but thank you for your kind words. Well, I want to thank our Chair Latta and our Ranking Member Matsui for holding this very, very important hearing. I was actually at a conference on artificial intelligence, so I’m very interested in this topic. I want to begin by asking for unanimous consent to insert into the record a New York Times story from April 8th, 2024, titled, “Teen Girls Confront An Epidemic of Deep Fake Nudes in Schools,” which I’d like to submit for the record. The article outlines a heinous problem confronting our children: AI generated child sexual abuse material, or CSAM. Professor Graw Leary, my first question is to you. I believe that Section 230 was not intended to provide, and does not provide, civil immunity for content created by generative AI, including child sexual abuse materials. Do you agree with this perspective?

Mary Graw Leary:

I do agree with this perspective for all the reasons that we’ve said. However, the concern is that the courts, notwithstanding the plain language of the statute, will rule otherwise.

Rep. Ann Kuster (D-NH):

Thank you so much. And do you think it’s likely that a court, well you are saying the courts have reached the opposite conclusion, and how harmful do you think that is for young people in this country?

Mary Graw Leary:

Well, as I said, I think what you’ve referenced in the article is a new dimension of harm. Just when you think that people could only be digitally harmed in so many ways, we come up with new ways, or at least big tech facilitates them, and this is another one. So there are two dimensions to this harm that I think we’re observing. One is the children who are already in CSAM and whose images, thanks to generative AI, are now being manipulated. So there’s a whole other form of victimization out there compounding what they’ve already had to live with, out there as a reminder of their physical abuse. Secondly, there are the children who have not been the victim of physical sexual assault, but these images are then out there creating that effect as well, and the Supreme Court recognized, all the way back in Ferber, that with these kinds of images the harm is — a harm, not the harm — the images themselves. And this is a unique kind of victimization, uniquely pernicious because it is in perpetuity. So now it’s in perpetuity and it’s fictional and it looks just like it actually happened.

Rep. Ann Kuster (D-NH):

Horrifying. I can’t even imagine. As Professor Franks so aptly notes in her testimony, I want to stress that while Section 230 was intended to protect Good Samaritans who make responsible content moderation decisions, it has actually turned out to become a Bad Samaritan law that rewards internet platforms that ignore harmful content. As Congress, it’s critical we learn from our past, and we know that when all parties are absolved from being held responsible for their conduct, bad actors run amok. As we recognize Sexual Assault Awareness Month this April, let us take a hard look at Section 230 and evaluate what Congress can do to hold bad actors accountable and to protect and support survivors. Dr. Franks, your testimony recommends updating Section 230 by adding a deliberate indifference standard, which would empower people to hold bad actors accountable when they overlook harmful content on their platform. Can you explain how this change could empower survivors of sexual assault or harassment to hold intentionally negligent platforms accountable?

Mary Anne Franks:

Thank you, and thank you for highlighting both the question of image-based sexual abuse that we have seen with deepfake artificial nudes as well as the issue of sexual assault. My organization, the Cyber Civil Rights Initiative, focuses extensively on both of those issues and I appreciate the highlighting of the stakes here. The reason for the suggestion of the deliberate indifference standard is precisely to undercut this defense that we often hear from companies and social media platforms who say that they’re not responsible for exploitation, for instance, on their platforms because they did not themselves commit the initial act. The deliberate indifference standard, I think, would allow us to consider whether or not they were directly the cause of this particular act. Have they seen it? Are they aware of it? Were they continually, over time, indifferent to it when they could have helped? If we want to think about situations such as the Taylor Swift deepfake images that hit the platform X just a short time ago, for something like 27 million views before they were able to get a handle on it, I think a standard like that would mean that someone in that position could actually hold X potentially accountable for the fact that they did nothing and allowed that kind of imagery to be seen 27 million times before taking action.

Rep. Ann Kuster (D-NH):

Yeah, I think that makes a great deal of sense and we’d love to follow up with you. My time is up, but I’ll submit for the record another question on the specific exemptions of the 230 liability shield and whether we should add victims of child sexual abuse and harassment. Thank you. I’m incredibly grateful for your work. I’m grateful for this hearing and I yield back.

Rep. Bob Latta (R-OH):

Thank you. The gentlelady yields back and the chair now recognizes the gentleman from Texas’ 11th District for five minutes for questions.

Rep. August Pfluger (R-TX):

Thank you, Mr. Chair, and I thank the witnesses. Over a year ago we held a roundtable to discuss big tech and the fentanyl poisoning crisis. We had a mother, Amy Neville, who lost her 14-year-old son to poisoning. I’m also on Homeland Security. We’ve had a lot of hearings in Homeland Security about the illicit sale of drugs and the issue coming across the border. We’ve talked to DEA, and we know that criminal drug networks are using the internet and using platforms to further those sales. So this Congress, I’ve introduced the Drug-Free Social Media and Digital Communities Act, which would increase penalties for individuals who are selling those drugs. But I’ll start with Dr. Franks and Professor Leary and then have a separate question for you, Dr. Stanger. How would you reform Section 230 to combat the illicit sale of drugs? And I’ll start with you, Dr. Franks.

Mary Anne Franks:

I would say first that I do want to recognize how serious those harms are and that the suggestion that has been made, I think in previous reform attempts, has been to think about what are some of the most serious harms we worry about and maybe we’ll have carve outs within Section 230 to address them. So we’ve seen that approach in the past and I would suggest that the better approach would be to think about why those particular kinds of harms are being facilitated on certain platforms and try to address that directly without trying to think of carve outs for what are obviously very serious kinds of injuries. And I think that that’s the case, partly because I think we want the statute to be somewhat readable and it is getting to the point where it is less readable certainly than it was. We want it to be understandable by platforms and by users, and I think we really do want to be as targeted as possible in trying to identify what is the underlying problem with the incentive structure for these platforms as opposed to going case by case and harm by harm.

Rep. August Pfluger (R-TX):

Okay. Professor Leary?

Mary Graw Leary:

I would echo those comments about the carve-out piece, and in that same vein, three times now our nation’s attorneys general have asked: please just provide that state criminal laws can be enforced here as well, and don’t tie our hands anymore. I think that would be a way, as we talked about before, to get multiple enforcement of criminal laws at both the state and federal level, and again, there are the private rights of action that exist under some statutes, both for narcotics and crime in general. We want multiple pressure points to disincentivize people from engaging in or facilitating criminal activity.

Rep. August Pfluger (R-TX):

Thank you very much. I’ll move to the national security bucket, if you will. I mean, we’re going through this right now with FISA reform. It’s the line between safety and security and liberty and overreach. Where is the role of government, where is that line, so you preserve liberty? We’ve talked about a couple of cases, Taamneh, Gonzalez, and in your testimony something stood out. You said getting rid of the liability shield for all companies operating in the United States would have largely unacknowledged positive implications for national security. We’ve talked about some of the recent legislation that affected companies that are located in countries that are malign actors or non-friendly in some ways. Can you expand on this and on how foreign adversaries have utilized Section 230 to hide behind it and perhaps manipulate national security issues to their advantage?

Allison Stanger:

That’s a great question. I think it’s important to realize that the core of this problem is the ad-driven business model, which explains a lot of the behavior. You can make money by ignoring the harms, and our enemies exploit that by building troll farms that flood social media with things that divide us. This is a deliberate Russian influence strategy that is used on social media. It’s been used in past elections. I’m sure there are many other examples, but I’ve heard some people talk about China as being involved in a reverse opium war with the United States, with the fentanyl and with harming–

Rep. August Pfluger (R-TX):

Do you believe that’s true?

Allison Stanger:

I don’t have enough information to know if that’s true, but it’s been explained to me that they feel justified in it because it was done to them.

Rep. August Pfluger (R-TX):

Do you believe it is possible that in this upcoming election foreign adversary countries will manipulate data or anything else to bring about an electoral outcome that they benefit from or like?

Allison Stanger:

Yes, absolutely. They’re going to be geared up to divide us and instigate chaos because they thrive on chaos. Terrorists thrive on chaos and you can really whip up this country and divide us through social media.

Rep. August Pfluger (R-TX):

Should Section 230 be reformed to push back against that? My time is —

Allison Stanger:

I think if you remove (c)(1) and the immunity shield, companies will behave very differently.

Rep. August Pfluger (R-TX):

Thank you. My time has expired, I yield back.

Rep. Bob Latta (R-OH):

Thank you. The gentleman yields back and the chair now recognizes the gentleman from Texas’ 33rd district for five minutes for questions.

Rep. Marc Veasey (D-TX):

Thank you, Mr. Chairman. This is an interesting conversation that we’re having today. Around Section 230, I think about how locally in Dallas we have Mark Cuban, who made his fortune in the tech industry with broadcast.com, which later became part of Yahoo. And Mark talks a lot about how, because it was largely an unregulated atmosphere, nascent companies like his were able to make money, and how it would’ve been hard for startup, nascent companies like his to get to where he ultimately got had there been reams of regulations. At the same time, I think that all of us are concerned about a lot of the social media that’s out there, the rise of a lot of the hateful content and misinformation that’s out there. And when you think about the overarching subjects that the Energy and Commerce Committee deals with, which is a lot of different topics outside of the subcommittee, one of the issues that we talk about is the dangers to the environment and climate change.

But I can tell you that when you take Section 230 and you mix it with what we’re seeing with AI, I think that AI may kill all of us before climate change will if we don’t do something about everything that’s happening in this particular space. And so I want to try to figure out a way that we can continue to have a thriving tech industry where people and startups can make money, but at the same time deal with some of these very serious issues. And so I wanted to ask: I know that there was a 2016 case, Force v. Facebook, involving the estates of four US citizens who argued that Facebook knowingly hosted accounts belonging to a terrorist organization, but the Second Circuit Court of Appeals ruled that Facebook was shielded by 230. And I wanted Ms. Leary to talk about this. As you outlined in your testimony, this decision gave platforms like Meta de facto near absolute immunity from claims over creating algorithms that facilitate and spread terrorism. Can you discuss how reconsidering and reforming the scope of Section 230 would help rid social media platforms of harmful content without impacting Section 230 (c)(2), which protects platforms from civil liability when they in fact decide to remove objectionable content?

Mary Graw Leary:

Sure. Well, obviously, as we’ve said, this deals with (c)(1), right? And the idea that facilitating illegal content, connecting with and coordinating fellow terrorists, is somehow publishing really does defy logic. So (c)(1) needs to be eliminated or amended, I think, as well, looking at the idea of when one facilitates illegal conduct. There are lots of different ways we can phrase this and you’ve seen a number of suggestions, but this is facilitating illegal conduct, which most businesses would face liability for, and why this industry doesn’t makes no sense at all.

Rep. Marc Veasey (D-TX):

In the same vein, can you elaborate on how Congress can clarify Section 230 to hold social media platforms accountable for harms they cause through their own actions, whether using an algorithm or not and how we can ensure that big tech companies do not continue to have immunity for causing harm and how we can deter the use of dangerous algorithms and targeted ads?

Mary Graw Leary:

Sure. I think the answer is really pretty much the same. Whether it’s their actions directly or deliberate indifference, to use Dr. Franks’s language, it’s all the same. They’re allowing it to take place and they have immunity to do it. So addressing (c)(1), I think, will address that, and it will hold them to the same standards that every single business in America is held to: when their direct actions cause harm, they face litigation and the jurisprudence develops. And then when they’re making decisions, should we do X or Y, they can look at the jurisprudence and the guardrails that exist and make informed decisions.

Rep. Marc Veasey (D-TX):

Yeah. And Dr. Franks, in closing, I want to ask you how should we reform Section 230 to account for the law’s current over-interpretation of activity that is objectionable and what would be the impact of limiting liability protection to speech as opposed to information?

Mary Anne Franks:

To reinforce some of the, excuse me, the comments that were just made: limiting clearly within (c)(1) to say that this is not going to provide immunity for platforms or intermediaries that are soliciting, encouraging, profiting from, or being deliberately indifferent to harmful content. I think that’s the limitation. Something along those lines is needed for (c)(1). I do also think, in terms of restricting the scope of things for which you can raise a Section 230 defense, it is important to specify that this should not simply be information very broadly conceived, because that can cover anything from the Snapchat filter that calculates your speed, which may have led to the deaths of a couple of young men. The argument that that should be considered as something within the purview of (c)(1) exists, I think, because of that language about information, which should be replaced with the more restrictive term of speech.

Rep. Marc Veasey (D-TX):

Yeah. Thank you Mr. Chairman.

Rep. Bob Latta (R-OH):

Gentleman’s time has expired. The chair now recognizes the gentlelady from Tennessee’s First District for five minutes for questions.

Rep. Diana Harshbarger (R-TN):

Thank you, Mr. Chairman. Excuse my voice, it’s allergy season. Dr. Franks, words can be very powerful in legislation, and so can the meaning behind them. In your testimony, one of your recommendations, and I’ll follow up on what Mr. Veasey was saying, one of your recommendations for reform is to change the word “information” and replace it with the word “speech.” Can you elaborate on that and tell us exactly why that needs to be changed?

Mary Anne Franks:

Yes, thank you. Because I think, in addition to a lot of speech claims that are quite tenuous, we’re also seeing this defense being used and invoked for things that I think most people, had they heard about these cases offline, would not have thought were a plausible claim about speech or anything Section 230 was originally intended to protect. So when we talk about the benefits of an expansive interpretation of that kind of protection in (c)(1), usually the justification is something like: we need to foster public discourse, we want to ensure that people can speak freely. But then we see that defense being used in defense of facilitating illegal arms sales, for instance, or drug sales, or credit card processing, or features of a particular platform, things like choosing not to do background checks on a dating site. And the idea that those things should plausibly fall within (c)(1) I think really could be limited, or could be cut off, if you specified that the only kinds of claims for which you can raise the defense are claims that involve speech.

Rep. Diana Harshbarger (R-TN):

Okay, thank you. This is for Professor Leary or Dr. Stanger. Without Section 230, we wouldn’t be able to have websites like Reddit or Yelp as they’d be open to lawsuits for the opinions of users. But sometimes these companies hide behind Section 230 to amplify certain voices over others. And I guess my question to you is what’s the appropriate balance? How much reform is appropriate to ensure that the internet as we know it continues to operate?

Mary Graw Leary:

I’d like to make a historical point, and I’m actually going to direct the committee to some scholarship that’s a little older, from 2017, but I think it makes a really helpful point, and that is Danielle Citron and Benjamin Wittes’ article, referenced in my written statement, about how the internet won’t break. One of the things it talks about is this historical arc: yes, when things begin, they’re nascent and there’s not a lot of regulation, and that may be okay, but as they grow and we deal with a situation where one company, even a small company, can cause lots of harm, that’s where we see regulation, whether we’re talking about the environment, motor vehicles, food, fill in the blank. That is the natural arc of things. Why in this industry we’re all of a sudden resisting that natural arc, even though the amplification of the harms is so great, is beyond me.

So in terms of that balance, I think history has given us a very good sense of it, and we are clearly past the time when the balance should still tip toward benefiting what is no longer a nascent internet. One quick point I just happened to notice this morning: when you look at the richest individuals in the world, a lot of them are from the tech industry, and then just earlier this week the National Center on Sexual Exploitation released its Dirty Dozen list, and there is a vast number of technology companies on it. That just struck me as an interesting reality of the world in which we’re living, and Section 230 may be playing a part in that.

Rep. Diana Harshbarger (R-TN):

Yeah. Gotcha. Dr. Stanger?

Allison Stanger:

I’d really like to echo what Professor Leary has said. It’s really true that eliminating (c)(1) or reforming Section 230 is not going to break the internet as we know it. If anything, I think we’re moving beyond the web 2.0 ecosystem of social media to something that’s new and different, and I would just point out how antitrust suits against companies like Microsoft or IBM resulted in very different outcomes: in one, IBM never quite recovered, and in the other, Microsoft is booming with its connection to ChatGPT. So these companies aren’t going to break if you regulate them.

Rep. Diana Harshbarger (R-TN):

They’re resilient, they’ll pop back or they’ll find a way to get around it. This is for anyone: assuming a Yelp user posts something allegedly untrue about a business, and the business makes a legitimate case to Yelp that the post should be removed, if Yelp reviews the post and the facts and then makes a determination about whether to remove the post, are they a publisher?

Mary Graw Leary:

Well, I’ll jump in to say I’m not sure if they’re a publisher or not, for all the reasons that we’ve said, but I will say that’s where mens rea comes into play. If somebody had a good or bad dinner at a restaurant, I don’t think a platform knows or should know whether that is true or false, right? That’s not what we’re dealing with and that’s not what our concerns are. So I think a mens rea element would solve that sort of problem. I don’t know if that’s helpful.

Allison Stanger:

I love the Reddit example there because the users themselves are upvoting or downvoting entries and that’s determining what the community voice is and I think that’s really a window on the future. These large social media companies are a thing of the past or they will be eventually.

Rep. Diana Harshbarger (R-TN):

Exactly. Okay. I guess my time’s expired. I had one more question and I won’t get to ask it, but I’ll submit it. Thank you.

Rep. Bob Latta (R-OH):

Thank you. The gentlelady’s time has expired. The chair now recognizes the gentleman from California’s 23rd District for five minutes for questions.

Rep. Jay Obernolte (R-CA):

Thank you, Mr. Chairman, and thank you all for being here on a topic that’s very personally important to me. I’ll apologize in advance and ask your indulgence, because I’m going to take a little bit of a contrarian stance here and say that I think Section 230 was hugely important for the growth of the internet. I’m not saying that it’s perfect, and I’m not saying that it doesn’t need to be reformed. I think it does, but I certainly don’t think it needs to be repealed. And I want to talk a little bit about that, because I don’t think we would have seen the adoption of the internet in the form it exists now without Section 230. In the early days of the internet, and I’ve run a technology company for 30 years, so I have some experience with the business side of it, I don’t think it would’ve been possible to expect a purveyor of the early bulletin board systems to moderate all the content that was created there.

And I think that if plaintiffs had been empowered to sue any platform that hosted content that was defamatory against them, for example, I don’t think anyone would’ve had platforms that hosted that kind of content at all, anywhere, and I don’t think we’d be in the place that we are today. So I want to ask some questions about that, because I think it’s important to recognize the balance. Professor Leary, I want to start with you. You made a point that I thought was very interesting when you said that Section 230 effectively denies some parties the opportunity to litigate issues, which it does, it undeniably does. But said a different way, it sounds like you’re saying the world would be a better place if we just sued each other more often. I know I’m being a little bit uncharitable about that, but I mean, that’s kind of what I hear. And I think it’s also important to recognize the other side of that equation, which is that there are large transactional costs in achieving the kind of equity that righteous lawsuits do. You have to pay the lawyers, you have to pay judges, you have to have courtrooms, you have to deal with the fact that there are malicious law firms out there with profit incentives that really aren’t focused on equity, and all of these things impose societal costs. So for example, cars undeniably cost more because of lawsuits, righteous and non-righteous, because it’s impossible to make a perfect car and because cars operate in a high risk environment. So we have these societal costs that we have to bear, that we all bear as a result of this, and our job is to strike a balance here. So you specifically talked about the kinds of things that proliferate as a result of Section 230: child exploitation, and I think you mentioned fentanyl sales to minors. All of these things are terrible things, but it is important to note that these problems weren’t created by the platforms. I mean, you make the point that increased moderation could prevent them, which is true.

How do you navigate that issue? Because I think it’s an important point to make. If a platform is actively participating in the sale of fentanyl to a minor, you have other avenues other than those shielded by 230 to go after them legally, don’t you?

Mary Graw Leary:

Well, if I could begin with the first question and then maybe wrap up.

Rep. Jay Obernolte (R-CA):

Sure. Well, it’s a lot to unpack, sure.

Mary Graw Leary:

I think your analogy to the car is a really great one, that cars are more expensive. If I want to design a new car, I have to think about what my potential liabilities are, what the rules are, no pun intended, the rules of the road, what the regulatory outlines in the law of what is reasonable are. There’s a mens rea, and I think if I intentionally, or with deliberate indifference, or knowing that there’s a bad part in the car, install it anyway, I should be held liable. And I think the same is true for the internet and why —

Rep. Jay Obernolte (R-CA):

But if I could stop you there, it’s also true that even if I was completely righteous as a car company and I said I’m going to do everything that I can to make the safest car that I can, it’s impossible to make a car where I’m never going to get sued. Right?

Mary Graw Leary:

I think that’s right. But our law has never said that you have absolute immunity because it’s impossible. What our law has said is: we’ll hold you responsible for your business if you knew or should have known there was a problem, even if you didn’t cause it. You got a part from another company that you knew was faulty, or thought could have been faulty, but you installed it anyway? It’s not as if you won’t be held liable. We never —

Rep. Jay Obernolte (R-CA):

The case with these companies is different, right, because they didn’t intend, I mean, there was no negligence, like “we knew that you were going to sell fentanyl to a minor, but we allowed you to do it anyway because we wanted to make money.”

Mary Graw Leary:

And I agree with you, Congressman, and I guess the point that I’m trying to make is that we need to discuss that in court, and after the jurisprudence has developed, where we know what is an appropriate mens rea for a platform and what’s not, those companies will all have guidance on what standard of care to put in place to make sure this doesn’t happen, which is sufficient, but not a guarantee of a perfect world. And I think that’s what we don’t have now, because it’s never been allowed to develop.

Rep. Jay Obernolte (R-CA):

Right. Well, we’re going to continue this discussion. I’m already out of time and I’ve only touched a little bit of the iceberg that I wanted to expose here, but I want to thank you very much for your willingness to engage on this issue. I think it’s going to be a productive discussion. I yield back, Mr. Chairman.

Rep. Bob Latta (R-OH):

Thank you very much. The gentleman yields back and the chair now recognizes the gentleman from Idaho’s First District for five minutes for questions.

Rep. Russ Fulcher (R-ID):

Thank you, Mr. Chairman. To those on the panel, thank you; it’s been a long hearing and I appreciate your input and feedback. This is very valuable to us. I have a question for each of you. I’d like to start with the same one, actually, for Professor Leary and Dr. Franks. We know that if you yell fire in a crowded theater, the First Amendment doesn’t apply. So with that example set up, if you will, I’d like to match that up with the context of an AI algorithm that foments violence, and we had discussion about how that works. Where’s the limit, the realistic limit, to Section 230’s immunity protection when it comes to that AI algorithm? I’ll start with Professor Leary to get your feedback on that, then I’d like to get that same feedback from Dr. Franks.

Mary Graw Leary:

Please forgive me if I’m not following the question, but the link to the yelling fire in the —

Rep. Russ Fulcher (R-ID):

Yeah, so what do you think the realistic limit should be for Section 230’s immunity protection when it comes to that AI algorithm that’s generating —

Mary Graw Leary:

Sure. As we have said, I think on a read of the plain language of the statute it wouldn’t apply: if the AI algorithm, assuming it’s generative AI, has created content, then it shouldn’t apply. And I think that we have to be cautious about courts and how they interpret things, because for 30 years they’ve interpreted Section 230, I think we would all agree, differently from what Congress intended. So in my mind, those are the two essential problems with the question that you pose.

Rep. Russ Fulcher (R-ID):

Okay, thank you. Dr. Franks?

Mary Anne Franks:

If we can make a distinction, as I think you’re suggesting: sometimes there are aspects of a platform’s own conduct, that they are sorting things, recommending things. That’s one kind of question, I think, versus whether or not they’re producing their own content. And I think the generative AI example is a lot easier, in the sense that we would simply have the same standard that we would have for anyone else, right? If you are producing a certain type of content, you don’t get Section 230 immunity, because you’re not an intermediary, you’re simply one of the speakers. As to the question of when you’re sorting or recommending somebody else’s speech, I do understand the temptation to want to articulate that there’s something about that in particular that maybe should be distinguished in the Section 230 context. I would just suggest that I think it’s more compelling to look at the responsibility and the contribution regardless of which form it takes, whether it’s algorithms or anything else. If at the end of the day the question is: did this platform have some sort of knowledge about this harmful content, did it do nothing to stop it, did it encourage it, did it solicit it, did it profit from it? Make those the key questions.

Rep. Russ Fulcher (R-ID):

Fair enough. Good input. Thank you. That actually is a reasonable segue to what I wanted to talk to Dr. Stanger about. Thank you for your feedback also. You’ve made your position on section (c)(1) clear. I get that and it makes sense. But on a related note, just to elaborate a little bit more: policing by social media companies, what’s the proper role there, or is there a proper role? Liability? I assume that’s going to be connected to your previous statements on the (c)(1) removal, but could you just expand a little bit more on those things? When it comes to these massive social media companies, what should be the benchmarks in terms of policing and liability?

Allison Stanger:

I just want to take you back to the first part of your question, which I thought was a good one, to explain that we have a long history of First Amendment jurisprudence in this country that in effect has been stopped by Section 230. In other words, if you remove (c)(1), that First Amendment jurisprudence will develop to determine when it is a crime to yell fire in a crowded theater, whether there’s defamation, whether there’s libel. We believe in free speech in this country, but even the First Amendment has some limits on it, and those could apply to the platforms. We have a strange situation right now. If we take that issue of fentanyl that we were discussing earlier, what we have right now is essentially a system where we can go after the users, we can go after the dealers, but we can’t go after the mules. And I think that’s very problematic. We should hold the mules liable. They’re part of the system.

Rep. Russ Fulcher (R-ID):

Okay. Alright. Thank you Mr. Chairman. I too have further questions, but I’ll put those on the record in writing and yield back.

Rep. Bob Latta (R-OH):

Thank you. The gentleman yields back, and seeing no further members wishing to ask questions, I want to thank — Pardon? Okay. Seeing no further members wishing to ask questions of our witnesses today, I want to thank you all for being with us today; it has been very insightful. I know when I reviewed all your testimonies, I found them very, very interesting. It did take me back a few years to when I was studying torts, when I saw that Prosser was being cited. But I really appreciate you all being here today. I ask unanimous consent to insert in the record the documents included on the staff hearing document list; without objection, that will be the order. And without objection, so ordered. I remind members they have 10 business days to submit questions for the record, and I ask the witnesses to respond to the questions promptly. Members should submit their questions by the close of business on Thursday, April 25th, and without objection, the subcommittee is adjourned.
