Episode 23 | Cheap Speech: How to Stop Disinformation Without Killing Free Speech April 03, 2022


GUEST: Rick Hasen, Author, Cheap Speech: How Disinformation Poisons Our Politics – and How to Cure It

It’s easy to mock those spouting outlandish and off-base claims on social media networks, but are there more effective ways to counter the spread of misinformation? Our guest Rick Hasen believes there are other solutions to tackling false information – whether it’s spread unintentionally or intentionally – which could pave the way for more lasting change. Is it a matter of personal responsibility? Legislative fixes? Media reform? And what about the responsibilities of Big Tech and social media platforms? In this episode, we talk about his latest book, Cheap Speech: How Disinformation Poisons Our Politics – and How to Cure It, and some of the legal and political remedies he sees as potential solutions to the threat disinformation poses to our democracy and an informed electorate.


Richard “Rick” Hasen teaches law and political science at the University of California, Irvine, where he also is co-director of the university’s Fair Elections and Free Speech Center. His latest book, Cheap Speech: How Disinformation Poisons Our Politics ― and How to Cure It, examines the dangers of disinformation, its viral spread, and the actors helping to push it into the mainstream. He is a nationally recognized expert in election law and campaign finance regulation and has written more than 100 articles on election law issues and remedies in numerous publications. His op-eds and commentary have appeared in major publications, including The New York Times, The Wall Street Journal, The Washington Post, and Slate. He is also the author of Election Meltdown: Dirty Tricks, Distrust, and the Threat to American Democracy and Plutocrats United: Campaign Money, the Supreme Court, and the Distortion of American Elections.


Cheap Speech: How Disinformation Poisons Our Politics – and How to Cure It

Results of YouGov poll on 2020 U.S. presidential election 

Public Religion Research Institute poll results

Jan. 6, 2021 invasion of U.S. Capitol

Full Transcript


Back in March, the “People’s Convoy,” a collection of truckers opposed to COVID restrictions, among other things, made its way to Washington, D.C.

One of the drivers – a woman who appeared to be in her 40s or 50s – spoke about her reasons for participating in the protest. Here’s what she said when asked what specific issues she was most concerned about:


Woman: “I don’t want ’em to digitile us.”

Man: “You don’t want ’em what?”

Woman: “Digitile us. I don’t want to take medicine that will hurt people.”

Man: “What was the word you used, though? I haven’t heard that.”

Woman: “Digitile. Digitile.”

Man: “Digitile? And what does that mean?”

Woman: “They’re trying to make us robots. They’re trying to make it where we have no choice in nothing to where they control. They can control us from what they’ve put in our bodies.”

Man: “And eventually we’ll actually be robots?”

Woman: “According to what they’re saying. Not robots, but according to, they’ve already got full manmade working female robots.”


As you can imagine, many Twitter users delighted in mocking her. So, I sent a tweet and it said this: “So many people are dunking on this woman. But she’s a victim of purposeful disinformation campaigns. We should be calling out the for-profit perpetrators of that nonsense, not the vulnerable people who fall for it.”

Well, that tweet generated a lot of pushback. As of this recording, more than 3,300 people had liked the tweet – but plenty of others didn’t.

One tweeter going by the name Sally wrote this: “She’s not vulnerable. You can choose to actually find the information and filter what people tell you, or be ignorant and believe the first lie someone tells you. She’s not vulnerable, she’s ignorant, and that doesn’t excuse her from responsibility.”

Another example, from someone with the screen name Robin: “These people have agency. I’ve dealt with people like her. You try to present facts and they tune it out. They want to live in this bizarre, nonsensical universe. They aren’t victims. They are willing participants.”

And one other person, going by the name “Dissident Elf,” didn’t mince words. “Nah dude, adults are responsible for their own choices.”

Several other people objected to me calling her a “victim of purposeful disinformation campaigns.” In hindsight, I wish I had used the term “susceptible to purposeful disinformation campaigns,” if only to avoid distracting from my larger point.

And I want to acknowledge that many of the people disagreeing with my tweet raised a fair point. It’s worth asking whether people have agency over their conspiratorial and misinformed beliefs – and if so, how much. And yet, the people who simply shouted, “personal responsibility!” as a retort to my tweet dismiss the tremendous power of disinformation.

Consider this:

  • A December 2021 poll by YouGov found that only 21 percent of Republicans believed that Joe Biden’s victory was legitimate.
  • And a poll by the Public Religion Research Institute last year found that 23 percent of Republicans agreed with this statement: “The government, media, and financial worlds in the U.S. are controlled by a group of Satan-worshipping pedophiles who run a global child sex trafficking operation.”
With millions of people buying into these beliefs, it seems to me that we should look deeper than simply dismissing these people as failing to exercise personal responsibility or agency – even if that is indeed a meaningful part of it. If we agree that disinformation is a problem, simply ending our analysis by screaming “personal responsibility” doesn’t lead us to a better place.

So, instead of playing the social media game of whack-a-mole mockery every time a video gets posted of some random stranger espousing some odd beliefs, it’s worth asking some bigger questions instead:

  • How does disinformation work?
  • Who stands to benefit and profit from it?
  • Who’s funding it?
  • How do we prevent it from spreading?
  • And how might we help people most susceptible to disinformation be less prone to fall for it in the first place?

My guest is Rick Hasen, Chancellor’s Professor of Law and Political Science at the University of California, Irvine. He was named one of the 100 most influential lawyers in America by The National Law Journal in 2013. If you’re a cable news viewer, there’s a good chance you’ve seen his legal analysis in your living room. His new book is called Cheap Speech: How Disinformation Poisons Our Politics – and How to Cure It. In it, he proposes specific antidotes to disinformation – some of which are matters of law, others are matters of legislation, and yet others cite the responsibility of private actors, such as the social media giants. As he concedes, there are no easy answers – but as you’ll hear, he still retains his optimism.


Rick Hasen, thank you very much for joining me.


It’s great to be with you.


I want to begin with what you just heard in the opening monologue: the conversation about the woman people on social media were attacking, and the question of personal responsibility. I wondered if maybe the place to begin is with you talking a little bit about just how powerful disinformation is, and why we can’t simply dismiss it as a matter of personal responsibility.


Well, I think that I’ll start where my book starts, which is January 6th, 2021. It’s pretty clear that many of those people who went to the Capitol, some of whom engaged in property crime or even violent action against law officers, honestly believed that the 2020 election was being stolen from Donald Trump. And that is, I think, a very powerful symbol of the force of disinformation. You know, Donald Trump went to Twitter over 400 times between Election Day and 19 days after Election Day, mostly to make false claims that the election was stolen and to invite people to come to Washington, D.C. for wild protests, and it activated people. So, we know that disinformation can be incredibly powerful and can sometimes even be deadly. I mean, think about Comet Pizza, right? The Pizzagate problem we had, where someone showed up at a pizza parlor in Washington, D.C. with a gun, believing that Democrats were pedophiles who were holding children hostage in the basement. Really, I think that, yes, people have a responsibility to educate themselves about what’s happening in the world, but it’s so much harder to do now, in our current information environment, when it’s hard to tell what’s true and what’s not.


There was a line in your book that really stopped me in my tracks and forced me to rethink a lot of what you were talking about. You wrote that “unhappiness with cultural shifts and bad news drove demand for false information.” When I think about disinformation, I usually think of it as a supply problem: bad actors putting disinformation out to achieve whatever nefarious goals they might have. But you are presenting it here, in addition to that, as a demand problem. Can you talk about that?


Right. I actually think the best way to think about it is that it’s a vicious cycle. My book focuses on election disinformation, so I can’t really speak to how generalizable this is, but in terms of false statements about elections or about politics, we know that some people who are spreading this misinformation are doing so inadvertently. I give some examples in the book where people really believe that what they’re sharing is true, even if it’s not. But some do it for political gain, and some do it for financial gain. There’s a lot of money made by making false statements. Just think about Alex Jones of Infowars, one of the most notorious spreaders of disinformation. It was reported in The New York Times that he was making, or at least taking in, something like 55 million dollars a month for sales of vitamins and body armor, just crazy kind of stuff, right? So there is a supply, and there’s a reason to supply it. It’s telling people what they want to hear, but that also triggers more demand. If you feel like you’re losing economically and losing culturally – the kinds of people Donald Trump was appealing to the most in his 2016 and 2020 campaigns – then something that gives you an explanation for that, one not based on things that would be personally unappealing to hear, you want more of it. And it’s not just the algorithms on YouTube feeding you more of it, although there’s some of that; providing simple answers to complex problems has always been appealing to people. What’s different now is that there’s a ready supply, and the demand can be filled through ever more outlandish conspiracy theories spread not just on social media but also on cable television. So, believing that there’s bamboo from China in the ballots being examined in Arizona.


That was eye-opening. I had not heard that one until I saw it in your book.


Or, you know, Italian satellites being used to change vote totals. These things are just so outlandish. If it were a parody in a movie, you wouldn’t believe it, and yet this kind of stuff is believed. So, it’s a feedback loop: the more people demand it, the more it’s supplied. I think about it like a drug dealer, right? The drug dealer might push the supply, but once you’re hooked, you want more of it.


That’s right. Now, the name of your book is Cheap Speech, so before we go further, we should probably define that term. What do you mean by cheap speech?


So, the term isn’t mine. It comes from a 1995 law review article by a UCLA law professor named Eugene Volokh. He was writing about the coming information revolution. Before it, there was a scarcity of speech: there were just a few television stations, and you had your local newspaper or a couple of newspapers, The New York Times, a few other national papers. If you didn’t like what was written in The New York Times, you could write a letter to the editor, and if you were extremely lucky, it might be printed, but otherwise you didn’t have a lot of ways of getting your message out. Now, as Volokh predicted, we have a situation where it’s very easy to get your message out. The only limitation on spreading your message on Twitter or another platform is people’s willingness to read or hear it. That was a very positive thing for Volokh, because it implied that without intermediaries we’d have a much better information environment. I mean cheap speech in that sense of the term, but also in a much more negative sense. And here, I think, we need to take a step back and talk about what’s happened to newspapers, especially to local news coverage, over the last few decades. The economic model for local newspapers has collapsed, because they depended so heavily not just on subscriptions but on selling advertising, both classified advertising and other kinds. Much of that advertising has shifted online: first Craigslist, then Facebook Marketplace, all kinds of things like that. And producing the smart investigative journalism that voters need to make informed decisions about how to vote consistent with their interests and values, that’s really expensive to produce. But producing disinformation on a website that looks as good as The New York Times website, that’s really cheap to produce.
So, I mean cheap speech in the second sense of a system where lower-valued information is prioritized over higher-valued information, and where there’s such a flood of misinformation and disinformation that the risk is not only that some people will be taken in by it, as we’ve talked about, but that people will just discount the value of all information. They’ll be less willing to pay for valuable information and just throw up their hands and say, we don’t know anything.


And am I correct that that’s what you described as the lemon effect?


That’s right. It’s like the market for used cars. There’s a famous article by an economist named George Akerlof about how, if you can’t tell the quality of a used car, that can push the price of all used cars down. I talk about how CarMax solved that problem for the used-car market, but it’s a much harder problem to solve in the information market, in part because of the reason we talked about a few minutes ago: there’s this demand for false information. It’s almost as if people want to buy cars for demolition derbies. They want the bad speech.


Before we leave the topic you invoked a moment ago of local journalism, there was one thing you mentioned in the book that, again, really surprised me: when a state capital is farther away from a population center, there tends to be more corruption in that state capital. The example you give is Albany and New York City, which are 150 miles or so apart. There tends to be more corruption in Albany in part because there’s not as much local news coverage of what the state legislators are up to. So that was a very interesting point about the cost of not having local journalism.


And, of course, it’s no longer just a geographic problem. So many newspapers are being bought up by hedge funds. Their buildings are being sold; their staffs are being reduced. In some places, I think it was The Des Moines Register, they went from a hundred people covering state and local politics to three. It’s just incredible. Journalists have lost jobs faster than coal miners over the last few decades. That’s a really tough environment in which to get the kind of reporting we need.


Your book, I think, is an antidote to cynicism. Instead of just throwing up your hands and saying this is an unfixable problem, you come with solutions, even if they’re only partial solutions, because it’s such a vast problem. So, I’d like to get into a few of those potential remedies. One that you begin with is banning what you term false election speech, meaning false speech about the mechanics of voting. How would that work, and given the temperature, or the composition, of the Supreme Court, how likely is it that they would uphold such a law?


Right. So, just to make the general point: I believe, as I express in the book, that this is a multifaceted problem, and it requires multifaceted solutions. Some of these changes can be made through law; some can’t be made through law and have to come through private action in society. And the reason for that is that we have in this country, or I hope we still have, a commitment both to free and fair elections and to robust political speech. So, you wouldn’t want a law that says anytime someone says something bad about elections, however we’re going to define bad, some election czar gets to say that speech is removed. Pick your least favorite president and imagine that he or she gets the chance to appoint that czar, right? You wouldn’t want that kind of system. It would be very bad for our democracy. And so, I’m looking for legal ways to act consistent with the First Amendment, in terms of allowing free speech, but also giving voters more reliable information. And you picked one of only two very narrow areas where I talk about proscribing or limiting information. Most of my solutions are about providing more information, like enhanced disclosure rules and labeling deep fakes as fakes. But I do believe there is room for a very narrow ban on empirically falsifiable election speech. So, if you lie about when, where, or how people vote – Democrats vote on Tuesday, Republicans vote on Wednesday – or, to take a real-world example where someone is being prosecuted right now, someone lied in messages directed at African Americans during the 2016 election saying you can vote by text or by social media hashtag.

And about 5,000 people tried to do this. This is empirically falsifiable. You can go to the website of official election reports and find that actual voting is on Tuesday, not Wednesday, and that you can’t vote by text; you can only vote by mail in some places or in person, right? So that kind of ban, I think, can work. You could give the federal government the power to prosecute people who spread this false information and the power to seek injunctions against news media that share it, just as there are other prohibitions on false speech, like perjury, which is a ban on false speech in the context of a trial or another judicial proceeding. Bans are very hard to get past the First Amendment. You have to have a really compelling reason, and the compelling reason to stop speech about when, where, and how people vote is that it could lead to actual disenfranchisement of voters. It wouldn’t address most of the speech I’m concerned about in the book, but I think it would deter some rare but very bad conduct. It’s a small piece of dealing with a much larger problem.


Yes, and one of the thoughts I had when reading your book, where you made the point of how difficult it is to ban certain types of speech: you said that how to deal with Trump’s voting tweets was a much harder question than how to police false posts related to COVID-19. And here’s where banning fake news or false information gets so tricky. Early in the pandemic, both Dr. Anthony Fauci and then-Surgeon General Jerome Adams told the public not to wear masks. Jerome Adams, I think in March of 2020, tweeted this: “Stop buying masks. They are not effective in preventing general public from catching coronavirus.” So I can see, if that’s the government’s message, that a physician who pushed back on social media and said, hold on a minute, Dr. Fauci and Jerome Adams are both wrong, people should be wearing masks, could have been flagged for misinformation, because that was different from what the CDC and the surgeon general were advising at the time. That’s just one example, but it seems to suggest that deciding what kind of information is misinformation, and who gets to make those decisions, is very difficult to implement in the real world.


Right. A point I make near the end of Cheap Speech is that science is an iterative process, right? At first, remember all the COVID theater, all the cleaning of surfaces? We thought, oh, we’ve got to do that, until, I think way too late, we ended up being told that it’s mainly spread through the air. And people were being excoriated for being out at the beach without masks, you may remember, and that turns out to be one of the very safe places. So, science is an iterative process, and you have to be careful that you don’t require the embrace of an orthodoxy that could turn out to be wrong. You have to have respect for the scientific method, but not necessarily for a specific scientific conclusion. But after a while – let’s take vaccines. Were vaccines safe? We didn’t know when they first came out, and then hundreds of millions of people took the vaccines, and now we know they are safe. So, for someone who says vaccines give you COVID, or who makes other false claims, I think we’re now at a point where that kind of information should be taken down. Not as a matter of law; I wouldn’t support a law that says it’s illegal to make this claim. But I would support social media companies, which are private companies, deciding that just as we remove pornography and hate speech, we’re going to remove misinformation about vaccines, because people’s lives are at stake.


Right. And since we were just bringing up social media, and I’ll add search engines to that, you gave an example from Instagram, which I should note is owned by Meta, the parent company of Facebook. During the 2020 election, for what I gather was a two-month period or so, when people searched for information about Joe Biden, they would see positive stories about Donald Trump, but the opposite was not true: when people searched for Donald Trump, they did not see positive stories about Joe Biden. Instagram said this was a mistake, that there was no intention there. But it does raise the question of platform bias, which you address in the book. Should Google, which has a near-monopolistic hold on the search engine industry, and the social media giants be subject to some type of regulation as it applies to that type of platform bias? And if so, what would it look like?


So, this is a very difficult question. It’s a great question. Aside from very narrow things like the removal of empirically verifiable false speech about when, where, and how people vote, which would apply across the board, not just to social media companies but to newspapers too, I don’t believe that we should say that these companies can’t tweak their algorithms to favor one candidate over another, in part because the platforms are tweaking their algorithms all the time. They’re doing things that will make it so that you’ll find what you want to find, so you’re not wasting your time. You have to make value choices. So the fact that Facebook decides to promote or demote certain content, or to remove hate speech, which is not required under law, I think is a business decision: people would not want to see that content, they would find Facebook less pleasant, and they wouldn’t go there. But there is a problem with bias. The way I suggest dealing with it is a law that would require some kind of disclosure about the algorithms. If algorithms are being manipulated to favor one candidate or another, that should have to be disclosed. The algorithm wouldn’t have to be changed, but you’d know that the platform is biased. As I said, most of my solutions involve giving voters more information, more disclosure. And if it turns out that the platforms are too powerful, then rather than having a speech code that says you can say this, you can’t say that, you must be even-handed, I favor breaking up the platforms. If we think that Google search is so dominant that people can’t get information otherwise, that suggests we have a problem with the market power of Google.

Another example of this is the decision that Facebook and Twitter made right after the January 6 insurrection to de-platform Donald Trump, which I think was the right decision. I think there should be a very heavy thumb on the scale against de-platforming a politician; we want to have rigorous speech. But when someone is consistently degrading the integrity of the election, to the point that millions of people believe the false claim that it was stolen, or when they advocate violence or invite people to come for a wild time in Washington, D.C. at the very moment Congress is counting the votes, I think that easily passes the bar. Now, Facebook is going to have to make a decision, because that ban on Trump only applies for two years. Given that Trump is continuing to say these things and the integrity of our election system is still being questioned for no good reason, I support extending the ban. But I wouldn’t want a law that says he can’t speak, and I wouldn’t want a law that says Facebook can’t include his content if it wants to. After all, there are other platforms, Gab, Parler, and now Truth Social, that will include Trump’s content. That’s fine. But I think the public should pressure these companies into doing the right thing. What I don’t support is the Florida law that says a social media company cannot de-platform a candidate, even if that candidate calls for violence or continues to spew dangerous rhetoric about elections being stolen. I spent a lot of time in the book talking about law and the interpretation of the First Amendment, and I was actually shocked that Justice Clarence Thomas …




… who generally has taken a very libertarian position on speech – you can’t even require disclosure, in most circumstances, of who’s speaking in a campaign context, he says – believes that Florida could pass a law saying that Facebook has to carry Donald Trump even if it doesn’t want to.


And weren’t they analogizing the social media companies to the telephone companies, essentially saying they’re just carriers and can’t examine the content?


Right. Just like Verizon or AT&T can’t say, well, you’re a Nazi, I’m not going to let you have a phone, Justice Thomas and Professor Volokh would treat the platforms as common carriers. But Fox News or The New York Times gets to decide what content to include or exclude, and I think that social media companies are much closer to that model. They exclude hate speech. They decide what gets promoted and what gets demoted on their sites. If Donald Trump were added back to Facebook on January 7th, 2023, when he’s eligible, many people are going to protest and leave, because that’s not what they want to see. And of course, many people will protest and leave if he’s not re-included, right? But that should be a private decision, and people will think of Facebook differently depending on whether it includes Donald Trump’s content or doesn’t, just as they would if, say, One America News Network included his content or didn’t.


I know that most of your suggestions are not bans, but one other place where you do argue for an outright ban is microtargeted advertising. One argument you make is that people don’t realize what information they’re giving up, and therefore microtargeting shouldn’t be allowed to appeal to a specific group or subgroup of people when it comes to election information. I do have a question about that. Let’s say my argument is: hold on a minute, I’m a candidate for office. I know there’s an issue of particular importance and relevance to one part of my potential constituency, maybe African Americans, and I want to use microtargeting to reach that portion of the base with good information, because it’s more relevant to them than to other constituencies. Why shouldn’t that be allowed?


Well, I generally think that a campaign can do that kind of targeting, and campaigns do it. Campaigns will, for example, get a list of voters with Hispanic surnames and send mail that way, and campaigns will give a speech at a union hall or at a church, right? So, targeting happens all the time. What’s different about microtargeting is that it’s like the difference between taking a photograph of a person and having the ability to read their mind. We give up so much personal information when we go onto social media. All of our clicks are followed, and not just when we’re on that social media site, to create profiles of us. So, if you are a candidate for office, now you can go to Facebook and say, look, here’s a list of a hundred people in my district, let’s say African American voters, that I’ve identified and want to target. Can you use your algorithm to find other people with similar profiles and direct messages to them too? I have no problem with a campaign saying, send it to these hundred people. I think that’s perfectly fine. But the fact that these other people, who have not been identified by the campaign and have not asked for information, are being targeted using this private information, that’s where I think it’s problematic. And that’s where I think, although I don’t know that the Supreme Court would agree under its view of the First Amendment, it would be permissible to say that social media companies cannot use this private information to go after these additional people. Especially because we know from the Facebook Files, the internally leaked documents from Facebook, that these ads can be targeted at particularly vulnerable populations and can include misleading information or worse.
So, I do think that there’s a special danger that comes from letting social media companies parlay the kind of data that they collect for the political purposes that a campaign would want.




So that is where I draw the line. I go through a fairly extensive First Amendment analysis of how the Supreme Court views data as a kind of information protected by the First Amendment. It’s a very difficult question how to deal with this, but I do think it represents a real kind of problem, one that’s much harder for people to see than some of the other problems in the book, like a deep fake that shows Joe Biden doing something he didn’t actually do.


Now, obviously the name of your book is Cheap Speech, and one of the things I’ve been wondering is: how do we make speech more expensive? How do we put up guardrails to disincentivize bad actors from taking advantage of that cheap speech? One easy example, I think, is defamation lawsuits. If you are a voting machine manufacturer, and you have people like two of President Trump’s lawyers, Sidney Powell and Rudy Giuliani, or a news network like Fox News, making false claims about you, then you can sue those parties. Are there other barriers that could be erected to make speech more expensive?


I guess I would disagree a bit with your premise. I don’t think we want to go back to the old days of expensive speech. There is real value in cheap speech. I pointed this out in a recent New York Times piece about my Cheap Speech book: the racial justice protests, the George Floyd protests, were organized in part through sharing messages on social media, and putting up videos of police brutality helped bring that issue to greater salience so that there could be public action. So there is still, as Professor Volokh recognized, a bright side to cheap speech. I think what we want to do is not make cheap speech more expensive. What we want to do is find ways to level the playing field so that those who are creating good information can continue to do so. For example, subsidizing journalism. I talk about news outlets like The Nevada Independent or The Texas Tribune that have come in. It’s much harder for local newspapers, as we’ve talked about, to stay in business under a for-profit model. But the nonprofit model has proven, at least in some places, to be a very useful way of ensuring that investigative journalism is still being produced. And it’s especially important to produce it at the local level, because that’s where, as we’ve talked about earlier, the least attention is being paid by journalists. Another example I give of a way to promote good information: you can imagine a society of journalists that says, here are the five things that make you a bona fide journalist in terms of objectivity, like having two sources and giving someone a chance to respond. We can imagine a kind of Good Housekeeping seal of approval coming from such an organization. And then social media companies can say, oh, look, this is speech from The Des Moines Register, and The Des Moines Register has the seal of approval. So voters would know, when they see that, okay, this is something that is more reliable.
It’s something that is trustworthy. And so, designing such a system, I think, would be very valuable to voters. Again, it’s providing more information. It’s not saying disinformation is illegal. You can’t say something false. But it’s saying, here’s a way that voters can discern between what’s true and what’s not true.


There is such a torrent of misinformation and disinformation. Let’s stick, I guess, with the latter, disinformation. I’m left with this feeling that even if all of these ideas happen, even if we put the seal on certain news organizations’ websites, even if we pass new laws that would prevent somebody from being able to say the election is on Wednesday when it’s Tuesday, it’s still asymmetrical; the solutions don’t match the scale of the problem. And I’m left with this sense that really we’re putting rumble strips on a raceway. It may slow the cars down a little bit, but they’re still going to be able to push 200 miles per hour. Is my reading, in your view, too cynical? Do you have more optimism on that front than I do?


I do have a little more optimism. And I guess I would look at what’s happening right now with Russia and Ukraine. The Russian disinformation campaign has fallen pretty flat, and even within Russia it has been very hard to suppress information about what’s going on in the war in Ukraine. Now, I don’t believe in the marketplace-of-ideas approach to the First Amendment, the idea that the truth is always going to rise to the top. The best evidence of that, or the most salient evidence, is the millions of people who believe the 2020 election was stolen despite all reliable evidence to the contrary. But I do think that things can break through, and that people are fed up with what they see as the current system. They’re looking for ways to deal with it. It’s not as though the problem of disinformation is something people are not paying attention to. I think they’re paying way too much attention to the disinformation that’s coming in, and they’re not sure what to do about it. And so, what I’m trying to do in Cheap Speech is offer some practical solutions that are consistent with the First Amendment, to give voters what they need to make sound decisions when they vote, and to have confidence, when an election is fairly run, that it was fairly run.


Rick Hasen, the book is Cheap Speech: How Disinformation Poisons Our Politics – and How to Cure It. Thank you very much for coming on The Speak Good Podcast.


It was a great pleasure.

