As Trump’s symbiotic relationship with Fox News has lost its shine, the President has taken to directing his followers to One America News Network, a low-rent cable news upstart with non-existent editorial standards. But no one gets a free ride on the Trump gravy train and OANN just found out that loyalty bordering on lunacy can have business consequences—very mild business consequences.
On Tuesday, YouTube said that it has suspended OANN’s channel from its YouTube Partner Program after it uploaded a video that pointed viewers to a fraudulent cure for covid-19. A YouTube spokesperson told Gizmodo that the video violated the platform’s COVID-19 misinformation policy. YouTube says that for the next week, OANN will be prohibited from uploading new videos and from earning ad dollars on the clips that are still live on the channel.
OANN will be required to re-apply for the YouTube Partner Program after it has resolved any issues that triggered the suspension and it isn’t guaranteed re-entry. YouTube did not answer questions about whether it will monetize future OANN videos. However, the company did say that OANN has reached its limit for warnings on violating its COVID-19 misinformation policy and the conservative network will receive a strike for any future violations—three strikes result in termination.
The punishment is pretty insignificant considering how much heat YouTube has taken in recent weeks for its failure to crack down on videos that violate the company’s terms of service and spread misinformation. The day after the Presidential election, YouTube immediately came under fire for leaving up an OANN clip that claimed Trump won the election. The clip was eventually demonetized but allowed to stay online.
Yesterday, a group of senators sent a letter to YouTube CEO Susan Wojcicki expressing “deep concern regarding the proliferation of misinformation” on the platform she oversees. While strongly worded letters expressing deep concern can be motivating, the more notable news item is probably the fact that on Monday, the Trump administration formally allowed President-elect Biden’s transition to begin—a move that was widely seen as the closest thing to a concession we’re going to get. Still, with or without a President Trump, private tech companies have been and are likely to continue being extremely sensitive to conservative cries of censorship.
So, it’s surprising that OANN isn’t losing its shit right now. The company did not immediately reply to our request for comment and its Twitter and Facebook pages have been silent on the issue this afternoon. So far, the President has not tweeted about OANN’s suspension either, and he’s really not busy at the moment.
Maybe the right-wing pundits are all huddled in a brainstorming session on how they’re going to make YouTube pay for this assault on real Americans, or maybe they just don’t think YouTube’s punishment amounts to much at all.
As a political weapon, Section 230 is becoming a mutually assured destruction device.
I’m old enough to remember a time when the phrase “Section 230 of the Communications Decency Act” meant nothing to the general public. It was about 18 months ago, and in the meantime, millions of people have become self-declared experts on the topic. We’ve suffered through countless congressional hearings that were touted as an exploration of how to improve the law only to descend into lawmakers accusing CEOs of shadow-banning their best friend’s band. We’ve seen some half-assed efforts to tweak the law in ways that haven’t gathered any traction. And gradually, somehow, both Republicans and Democrats began to say that they support changing the law in the future.
Everyone seems to have lost the thread along the way and now, the New York Times reports that Section 230 has become a sticking point in the negotiations over approval of the annual military budget.
The issue is that this year’s National Defense Authorization Act (NDAA) includes plans to remove the names of Confederate leaders from military bases and Donald Trump has threatened to veto the legislation if the plan remains in place. The initiative has support in the Senate and House, but the president has seen it as the kind of wedge issue that keeps people angry and he loves when people are angry.
Citing unnamed sources who are “familiar with the discussions,” the Times reports that Rep. Adam Smith, a Washington state Democrat, approached White House Chief of Staff Mark Meadows to ask what kind of compromise could get this legislation through. Meadows reportedly said that including a repeal of Section 230 might interest the president.
In another year, such an assertion would seem absurd. But we’re firmly in the Chaos POTUS era and Trump has nothing to lose. Still, this just isn’t going to happen. From the Times:
Such a deal would amount to a last-minute, sweeping overhaul of communications law, and a Democratic congressional aide, speaking on condition of anonymity to disclose internal discussions, said many lawmakers in the party viewed it as a nonstarter.
Let me say it again: Congress will not repeal Section 230 in order to get the NDAA passed. To do so would cause extreme chaos online, in the courts, and in the economy. If it did happen, we’d probably have to rewrite the big trade agreement that Trump is so proud of because the protections of Section 230 were written into it. But again, this scenario is just not going to happen.
The heart of the law is just 26 words:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
But there’s more to the law, and interpreting it isn’t as easy as it might seem; hence the need for Jeff Kosseff, author of The Twenty-Six Words That Created the Internet, to write a whole book on it. At its most basic level, Section 230 is intended to shield a web platform like Facebook or Twitter from legal liability for the content uploaded by a third-party user. There are still legal limits. For instance, Facebook can’t knowingly leave up images of child exploitation on its website. But if a user uploaded child porn on Facebook and proper protocols were followed to remove it, the social network is free to continue raking in billions.
Repeal Section 230 with nothing in its place and most websites would need to put a halt to the uploading of third-party content or at least severely limit the practice because the legal risks would be too great. Facebook and Twitter might find a way to keep going, but say goodbye to the rest.
All of this is to say that repealing Section 230 isn’t what the business interests behind Republican or Democratic donors want to see. Competing interests certainly have different ideas for how to change the law to benefit themselves, but a sudden repeal is not only unwanted but practically impossible.
Still, it’s clear that Section 230 has been successfully defined to some group of the public as the thing that allows Twitter to CENSOR posts with fact-checking labels, and if we get rid of it, these people believe that the internet would just become a First Amendment zone where nothing is censored. The reality is quite the opposite.
The report on NDAA negotiations just tells us that Republicans seem to believe they’ve found another hand grenade issue to throw around when they want to play chicken. It also solidifies the political gamesmanship narrative that Republicans want to repeal Section 230 and Democrats want to preserve it, when it’s more like they both want to change the law but only one party is irresponsible enough to pretend they’d just throw the baby out with the bathwater.
When all is said and done, the military will get its money. If military bases get renamed, Trump will tell supporters he did his best to continue honoring racists by repealing the law that prevents them from saying the n-word on Facebook despite that notion having no basis in truth. And the president will continue to be the biggest beneficiary of social media malpractice in political history.
Thank the gods for Lindsey Graham’s alleged spineless criminality. Without the presence of the senator from South Carolina, today’s Big Tech hearing on censorship and election interference on social media might’ve been completely off-topic.
Back in October, Americans were staring down the horror of Election Day and Senator Graham was filled with righteous indignation over Twitter’s decision to block a link to a dubious New York Post story that appeared to be fulfilling the promise of ratfuckery that got President Trump impeached. Twitter eventually reversed the decision to block the story’s URL and apologized, but the Senate Judiciary Committee still voted to haul its CEO, Jack Dorsey, and Facebook founder and CEO Mark Zuckerberg in for another round of questioning. That hearing began this morning and it went about as well as other hearings on the general topic of social media censorship—that is to say, it was off-topic from the moment it started.
When Graham first entered a motion to subpoena Dorsey and Zuckerberg on Oct. 22, he gave his topics of discussion as the censorship of the New York Post article; “[a]ny other content moderation policies, practices, or actions that may interfere with or influence elections for federal office;” and the use of fact-checking labels on user posts. At the time, Graham was all fired up over a potential Trump reelection and the successful confirmation of Amy Coney Barrett to the U.S. Supreme Court.
As the hearing began earlier today, Graham seemed in a more conciliatory mood. There was no more talk about election interference. Instead, Graham opened with a long speech about social media being addictive. “Is that a good business practice?” Graham asked. “Maybe so. Does it create a health hazard over time? Something to look at.”
Graham then turned to the familiar GOP talking point that Twitter once left up a post from Iran’s Ayatollah Khamenei saying it’s okay to question the Holocaust while it flagged a post by Republican Nikki Haley. But for the most part, Graham seemed to enter the hearing with a shrugging indifference. He acknowledged that without the liability protections of Section 230 of the Communications Decency Act, these companies “would have probably never been in existence.” Rather than citing specific changes that he would like to see made to Section 230—a law that protects internet platforms from liability for user-generated content—Graham lowered the ambitions of the proceedings, saying that he hoped “in this hearing today that we can find a baseline of agreement that Section 230 needs to be changed.” He said that his advice would be “to allow the industry itself to develop best business practices” and suggested that lawmakers start looking at these companies through the “health prism” as some practices may need to be modified because these products “can become addictive.”
So, what happened to that whole idea of exploring the potential of social media content moderation having an improper influence on elections? Well, it’s possible that Graham is feeling a little shy about the subject after the Washington Post published an article in which Georgia’s Republican Secretary of State, Brad Raffensperger, accused the senator of attempting to pressure him to “toss legally cast ballots” in certain counties in an effort to swing the state’s vote totals in favor of President Trump.
Raffensperger said his family has received death threats from the public as Trump and Graham have spread conspiracy theories about the Georgia election process, and the Post wrote that the Secretary reiterated “every accusation of fraud will be thoroughly investigated, but that there is currently no credible evidence that fraud occurred on a broad enough scale to affect the outcome of the election.”
Graham has called Raffensperger’s assertion that his call was anything but proper “ridiculous,” saying that his main concern is how to “protect the integrity of mail-in voting” and to answer the question, “how does signature verification work?” Almost every state has both of these practices in one form or another.
Raffensperger told the Wall Street Journal that Graham suggested throwing out ballots from counties that had “higher rates of signature errors” but that—because doing such a thing would be illegal—the secretary’s staff agreed to ignore the senator.
In the middle of today’s hearing, Graham had to step away to vote in what was announced as a 10-minute recess. The break ran for closer to 30 minutes and the senator answered some questions from reporters. Asked about his call with the Georgia Secretary of State, Graham said, “I talked to Arizona, I talked to Nevada,” as well. Confused, the secretaries of state in Arizona and Nevada quickly issued statements saying that they had not had any contact with Graham. The senator clarified that he spoke with Arizona Governor Doug Ducey and said he “can’t remember who [he] talked to in Nevada. But what I’m trying to find out is how do you verify mail-in ballots.” The state procedures for verifying mail-in ballots should be available to any senator with an internet connection.
As for the senator’s stated goal of finding “a baseline of agreement that Section 230 needs to be changed” at the hearing today, he appears to have succeeded before it even started. Senators and witnesses alike took turns throughout the session saying that they think the law should be changed while offering few specific suggestions for accomplishing that. Still, Graham’s mission of exposing malevolent election interference appears to be on track.
There was a big hearing in the Senate today regarding the future of online speech and the law that allows the web-as-we-know-it to function. Of the three big witnesses at the hearing, Google CEO Sundar Pichai may have expected to be in the hot seat given that his company was just slapped with an antitrust lawsuit by the DOJ. But no one even seemed to know Pichai’s name.
The regulatory issues surrounding big tech have become so pressing and entangled that none of the marquee hearings that we get in Congress seem to move the dialogue forward. That’s not to say that senators or members of Congress are incapable of asking informed questions or learning about the issues. When politicians sit down with experts for a fact-finding hearing, like in a recent committee hearing on antitrust, they can be quite curious and open-minded. But those sessions don’t make headlines. And when you bring in the CEOs of Alphabet, Facebook, and Twitter, it’s time to put on a show.
Today’s hearing was almost entirely dominated by senators trying to get in shots at Twitter’s Jack Dorsey or decrying the hearing itself as political theater ahead of the election. From the beginning, Chairman Roger Wicker repeatedly referred to the Google CEO as Sundar Puck-eye or Pick-eye rather than properly pronouncing it as Soon-dar Pee-chai. And sure, sometimes politicians have trouble with names, but basically everyone on the committee who elected to use Pichai’s name proceeded to mangle it. Lucky for him, Pichai was virtually ignored.
There was little substantive talk about Section 230 of the Communications Decency Act. Known as “the 26 words that created the internet,” the clause gives liability protection to web services for content that is created on their platforms by third-party users. Lawmakers on both sides of the aisle are expressing an increasing willingness to change the law, but they often express different goals for the changes. In a statement published ahead of today’s hearing, the digital activist group Fight for the Future said that “blowing up Section 230 would be devastating for human rights and freedom of expression globally,” and such action would “make Big Tech monopolies like Facebook and Google even more powerful in the process.” That kind of statement might be useful if anyone at the hearing was arguing to blow up Section 230, but lawmakers mostly spent the time bringing up individual grievances.
Colorado Senator Cory Gardner got things rolling for the Republicans, asking the Twitter CEO, “Mr. Dorsey, do you believe that the Holocaust really happened?” No conversation that starts that way has ever produced good results. Gardner’s ultimate point was that Iran’s Ayatollah Khamenei is a Holocaust denier and Twitter doesn’t slap him with fact-checking notes the way it has on a regular basis in recent weeks for President Trump. Dorsey explained that Twitter does not prohibit misinformation unless it falls into one of three categories: manipulated media, public health, and election interference.
This exchange was admittedly confusing and required Dorsey to point out complications in Twitter’s policies that can make them baffling for the average user. Further questioning along this line continued from other senators throughout the session. One senator would point to one of the world’s worst dictators and say some variation on “you let this guy tweet but when our guy does it, you censor him.” This was tiresome and required Dorsey to explain various policies for world leaders and newsworthiness.
Senator Mike Lee of Utah recently made news for contracting covid-19 and tweeting that democracy is antithetical to what Americans want for our country. Lee’s a total scumbag, but he did acknowledge to Dorsey that Twitter has “every single right to set your own terms of service and to interpret them and to make decisions about violations.” This is a fundamental thing that conservative critics don’t seem to want to acknowledge. These platforms can “censor” anything they want. If you want to argue that they are so powerful that they shouldn’t be allowed to moderate any legal speech, you’re moving more into an antitrust arena, and many conservatives have no interest in breaking up corporations.
Lee wanted to ask about the equal enforcement of policies. He went down the line asking each CEO if they could name one “high-profile” liberal who has been penalized on their networks for violating the rules. This was an imprecise question, requiring a subjective judgment of who counts as high-profile and knowledge of each individual’s personal ideology. None of the CEOs could think of an example that satisfied Lee and he declared victory, saying that this is evidence that enforcement of social media policies has an unfair, negative impact on conservatives. This is not, in fact, evidence of anything. At most, it suggests that high-profile liberals violate the ToS less often than conservatives, and it doesn’t even prove that.
Senators Ron Johnson and Marsha Blackburn came in for the stupidest hour. Johnson asked Dorsey about a tweet in which a random person wrote:
Sen Ron Johnson is my neighbor and strangled our dog, Buttons, right in front of my 4 yr old son and 3 yr old daughter. The police refuse to investigate. This is a complete lie but important to retweet and note that there are more of my lies to come.
Johnson said his attempts to get the tweet taken down failed, and Twitter doesn’t consider the tweet a violation of its policies. “How does that not affect civic integrity?” Johnson asked. Dorsey was a little confused and said he’d get back to the senator on that. Twitter did not immediately respond to Gizmodo’s request for comment, but we’re just going to go ahead and guess that the tweet wasn’t considered in violation of anything because it’s about a public figure and specifically states that it is spreading a lie.
Blackburn finally gave Pichai his chance to shine. After asking Dorsey who elected the Ayatollah, Blackburn pivoted to the Google CEO. “Mr. Puhhcheye, is Blake Lemoine, one of your engineers, still working with you?” the senator asked with a growing smile. Pichai responded that he wasn’t sure if this was a current employee. Without missing a beat, Blackburn explained, “Okay, well, he has had very unkind things to say about me, and I was just wondering if you all had still kept him working there.”
The question was related to a Google employee who got some attention from Breitbart in 2018 for his claims that Blackburn is a “terrorist.” I, personally, feel a level of terror at the notion that a sitting senator decided to target a private individual at a committee hearing because he said something mean about her two years ago, so we’re gonna say that the fact-check came back ‘True’ on that one.
You get the idea. It was another infuriating show, and we haven’t even mentioned Ted Cruz’s mad-as-hell meltdown.
Often in these situations, we could point to the Democrats at least trying to talk about the main subject of the hearing, but that wasn’t the case today. Some asked on-topic questions but didn’t really seem to be trying to get anywhere. Most Democrats just wanted to note that this hearing is less than a week from the election and is clearly designed to focus attention on conservative gripes with social media—specifically, it seemed like one last desperate attempt to spotlight the New York Post’s shady Hunter Biden story that Twitter foolishly banned before reversing its decision.
Senator Jon Tester lamented the fact that this hearing was rushed and off-topic. He excoriated his colleagues for spending a hearing constantly asking about the political affiliations of private employees and claimed it was clear the directive for the hearing came from the White House. “This is baloney, folks,” Tester exclaimed. “Get out the political garbage and let’s have the commerce hearing do its job.”
Senators will get a do-over on Nov. 17 when the Judiciary Committee takes another stab at the same subject.
The GOP-controlled Senate Commerce Committee is holding a hearing next week on Big Tech, and it’s called on Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey to testify. Republicans on the committee have made it abundantly clear that their intent is to grill the CEOs over a conspiracy theory that they work to systematically censor conservative voices on their sites—a baseless claim that has nonetheless become a target of obsession for right-wingers.
Committee chair Senator Roger Wicker has already proposed his own way to own the libs through platform manipulation: reforms to Section 230 of the Communications Decency Act, which immunizes digital service providers against most civil liability for content uploaded by their users and how they choose to moderate said content. It’s the foundation of the modern internet, as it allows websites to offer services to users without being sued for those users’ actions.
Wicker’s bill, introduced alongside GOP Senators Lindsey Graham and Marsha Blackburn, is titled the Online Freedom and Viewpoint Diversity Act. The OFVDA tries to make it easier to sue the likes of Facebook and Twitter if they delete content that doesn’t fall within a narrow set of categories and strips their legal protections if they engage in ‘editorializing.’ This is more or less an attempt to bully websites into complying with Republicans’ demands on how they should be run, or else face the wrath of the extremely litigious conservative movement.
In advance of the hearing, the Commerce committee emailed out an FAQ on the OFVDA. It’s stuffed so full of doubletalk that it practically has to be translated—which we’ve done for you below.
Does the bill raise First Amendment concerns?
· No. This bill was created with free speech in mind. By narrowing the scope of removable content, we ensure that Big Tech has no room to arbitrarily remove content just because they disagree with it while enjoying the privilege of Section 230’s liability shield.
Quite literally what Graham, Wicker, and Blackburn are describing is an effort by the government to control the kinds of speech allowed on privately owned websites. If you’ve ever read the First Amendment, you might sense there’s a problem with this logic.
First of all, the answer noticeably conflates a dubious definition of “free speech” with the “First Amendment.” The First Amendment does not define “free speech” as the right to unfettered and unrestricted speech, anytime and anywhere. That’s not a right that exists. The First Amendment’s purpose, or part of it rather, is to restrain the government, and only the government, from “abridging the freedom of speech.” That means laws, such as those that one might introduce to prevent the owner of a website from deciding what is and is not allowed on their own website.
The law, of course, does not understand “speech” solely to mean things people say; it also covers a wide variety of actions. Putting a sign in front of your house can be speech, and so is drawing a big red “X” through the words on that sign the next day.
Importantly, the First Amendment does not protect speakers from Facebook or Twitter, any more than it protects people from getting fired for telling their bosses to “eat shit.” You simply have no legal right to a Facebook or Twitter account. In fact, the First Amendment protects Facebook and Twitter’s right to ban users in the first place.
What’s more, none of the changes to Section 230 proposed by Graham, Wicker, and Blackburn change anything about that. Revoking the law entirely would not change the fact that social media companies are under no legal obligation to allow you to use their websites—or to post anything on them they don’t like.
Hilariously, the White House recently attempted to cite a 2016 Supreme Court decision to argue that such a right may (or should) exist. But they left a few details out.
In a leaked draft version of President Trump’s recent executive order on Section 230, one of his lawyers, or possibly an unpaid intern, pointed to the case Packingham v. North Carolina, which is actually about whether pedophiles (though not conservative pedophiles, specifically) can be banned from social networking sites. Citing the case—without mentioning the whole “pedophile” thing, of course—the White House wrote: “The Supreme Court has described that social media sites, as the modern public square, ‘can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard.’”
What the White House failed to say is that Packingham was not about whether Facebook could ban pedophiles, which of course it can, but whether the government of North Carolina had a right to do so.
Conversely, Facebook obviously can and probably should ban pedophiles, and any U.S. government entity that tried to outlaw Facebook from banning pedophiles would be violating the First Amendment. The government simply has no right to tell Facebook when it can and cannot ban users (unless those users are selling illegal guns, or drugs, or plutonium, or prostitution, or child sex abuse material).
“It’s painful to comment on this statement for at least two reasons,” Eric Goldman, a Santa Clara University School of Law professor and co-director of the High Tech Law Institute, said in an email. “First, I get angry every time I see how my tax dollars are being used to fund government propaganda like this.”
“Second, we are in the middle of a pandemic, an economic recession, an election that is being actively subject to foreign interference, and other existential crises, and this topic is what some members of Congress think is the most important priority for it to address right now?” Goldman added. “Any member of Congress actively working on Section 230 reform in October 2020 grossly misunderstands the problems facing our country and deserves to be voted out.”
Will this make it harder for platforms to remove objectionable content?
· No. We’re asking companies to be more transparent about their content moderation practices and more specific about what kind of content is impermissible.
Q: What does the law say about content moderation now, and how will this bill change it?
A: The law currently enables a platform to remove content that the provider “considers to be…. ‘obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.’”
The problem is that “otherwise objectionable” is too vague. This has allowed Big Tech platforms to remove content with which they personally disagree. We’re striking that phrase and instead specifying that content that is promoting self-harm or terrorism, or that is unlawful, may be removed.
As we just discussed, Section 230 is not the law that allows social media to “remove content with which they personally disagree.” Again, the government cannot restrict what kinds of content the owner of a website can and cannot remove. If Facebook decided tomorrow to ban everyone who likes plums because Facebook doesn’t like plums, the government would have no right to interfere.
Section 230, rather, has to do with whether someone who likes plums can then sue Facebook in civil court for imposing a blanket ban on nature’s worst fruit. It was passed to ensure that companies could host user-generated content without exposing themselves to liability for what users choose to post. It reads, in part:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
The law also protects providers when they remove or restrict content they deem harmful, so long as the moderation decision is undertaken in “good faith.” This applies “whether or not such material is constitutionally protected.” In either of these situations, the Section 230 liability shield gives websites a fast-track option to have lawsuits thrown out of court, limiting the cost of legal battles and opportunities for settlement trolling.
The issue at the time the law was passed, in 1996, was that the courts had relied on decades-old case law related to radio stations and book publishers when users inevitably took internet companies to court. What happened is that if a business made any attempt whatsoever to moderate the content on their website, even if they were legally compelled, the courts would then hold them responsible for literally everything users wrote.
This does not, however, mean that before Section 230 was passed websites were under some legal obligation to allow users to say anything they wanted. As millions of Americans acquired access to the internet, it merely became untenable, physically and financially, to expect any website owner to read every single post made by its users. It would also have required everyone running a website to have a lawyer-like understanding of what kinds of speech are not protected by law, i.e., what constitutes a “threat,” “defamation,” or an “obscenity.”
The assertion by Wicker, Graham, and Blackburn that “otherwise objectionable” is too vague is also pure nonsense. That wording is deliberately designed to be flexible, as Section 230 was crafted not to force platforms to be neutral actors, but to instead allow diversity of opinion on the internet. Federal courts have routinely found that sites are protected against lawsuits for content deletion regardless of whether the decision is narrowly tied to one of those categories.
Section 230 clearly protects service providers when they delete whatever content they disagree with. For example, courts have found that suing a website for deleting your account or posts attempts to treat it as a publisher, which the text of the act explicitly bars. An appellate court made exactly this point when it threw out a suit brought by white supremacist Jared Taylor, who sued Twitter for banning him.
The OFVDA attempts to narrow Section 230 by replacing the passage that shields a website when it removes content it “considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Note that “considers to be” is inherently subjective; the website operator merely needs to believe the content is objectionable. The new language (emphasis ours) would protect a website only when it removes content it “has an objectively reasonable belief is obscene, lewd, lascivious, filthy, excessively violent, harassing, promoting self-harm, promoting terrorism, or unlawful.”
In another sweeping change, the OFVDA declares that websites that “[editorialize] or affirmatively and substantively [modify] the content of another person or entity” are treated as an “information content provider” and thus cannot claim Section 230 protections in any resulting suit.
In a takedown of the bill on his website, Goldman noted that the removal of the “otherwise objectionable” language would discourage websites from fighting “lawful but awful” content, such as anti-Semitism, doxxing, deadnaming trans people, junk science, and conspiracy theories. That’s not because websites would lose their legal right to do so, but because they’d no longer have a fast track to get suits over types of content not specifically listed in the new language thrown out of court. This would make every removal decision a “vexing calculus about whether Section 230(c)(2)(A) would apply and how much it would cost to defend the inevitable lawsuits,” according to Goldman.
The ‘editorializing’ section is just as bad and could refer to anything from slapping fact-check labels on bogus stories to the design of the algorithms that make sites work.
“Whatever ‘editorializing’ means, it creates a new litigable element of the Section 230 defense,” Goldman wrote in the blog post. “If a defendant claims Section 230(c)(1), the plaintiff will respond that the service ‘editorialized.’ This also increases defense costs, even if the defense still wins.”
“I can’t speak to the motivation of the drafters or why they are choosing to prioritize their time this way,” Goldman said in an email. “There have been dozens of lawsuits against Internet services over account terminations / suspensions or content removals. With Section 230 in place, these lawsuits usually end quickly.”
“With the proposed changes to Section 230, there will be vastly more cases (because plaintiffs will incorrectly assume they have a better chance of winning) and those lawsuits each will cost more to defend,” Goldman added. “Yet, in many of those cases, the plaintiffs are obvious trolls engaging in anti-social behavior, and we should encourage, not discourage, their removal. Section 230 currently provides that encouragement.”
The “objectively reasonable” standard ties websites’ hands far more tightly than the subjective “considers to be” language. It would also complicate their efforts to show they were acting in “good faith,” which is already expensive to litigate.
The “good faith” requirement is another flashpoint for conservative anti-Section 230 crusaders, who claim that the mythical discrimination against right-wingers is actually in bad faith. In fact, websites could choose to indiscriminately ban every user to the right of the Bolsheviks without compromising their ability to claim they are acting in good faith. Instead, the “good faith” requirement is intended to prevent situations like a moderator selectively deleting words from a user’s sentence to reverse its meaning.
For example, if Facebook intentionally and maliciously modified your comment saying, “I hate discrimination against Oompa-Loompas,” to say, “I hate Oompa-Loompas,” you might have grounds to sue on the basis that Facebook acted in bad faith to put anti-Wonkitic words in your mouth.
Goldman wrote in his blog post that in any case, the legislation may be found unconstitutional, as the removal of the “otherwise objectionable” catch-all privileges certain types of speech over others.
“In particular, the revised Section 230(c)(2)(A) would condition a government-provided privilege on the removal of only certain types of content and not others, and it’s arbitrary which content is privileged,” Goldman concluded. “This raises the possibility of strict scrutiny for the amendments.”
Here’s the rest of the FAQ, with what the authors of OFVDA are claiming and what we think they actually meant:
Q: Will this bill protect against election interference campaigns?
A: Foreign interference in elections is unlawful. This bill won’t prevent Big Tech companies from removing content posted by these bad actors.
Translation: We’re just throwing in this completely irrelevant point to make us look more reasonable.
Why not repeal and start over?
The tech industry relies on Section 230’s liability shield to protect against frivolous litigation. If we repeal the law, we risk increasing censorship online, and encouraging the creation of a government body ill-equipped to act as judge and jury over speech and moderation. Repealing Section 230 in its entirety could also be detrimental to small businesses and competition.
Translation: We’re very concerned about frivolous litigation, except frivolous litigation brought by people who attend anti-lockdown protests in their free time.
Why not create a new cause of action?
Creating a new tort will only help enrich trial lawyers.
Translation: We specifically only want to enrich lawyers representing @magamom1488.
Why didn’t you cover medical misinformation?
We believe that platforms will be able to remove this content under the “self-harm” language in the bill.
Translation: Hey, pal, are you trying to say hydroxychloroquine doesn’t work or something?
Why can’t we use the courts to course-correct?
If we left this to the courts, they’d be litigating content moderation disputes all day, every day. This bill creates a clear framework; it’s important for companies to own their moderation practices, and follow them.
· More broadly, history doesn’t support a court-led strategy. The courts have so broadly interpreted the scope of 230 that tech companies are now incentivized to over-curate their platforms.
Translation: We don’t believe the courts should be “litigating content moderation disputes all day, every day,” which is why we are proposing revisions to Section 230 intended to make it easier for aggrieved conservatives (and through the law of unintended consequences, anyone else) to launch lawsuits against any website that pisses them off. By “history doesn’t support a court-led strategy,” we mean that judges have historically tried to restrain themselves from bursting into laughter during content moderation cases.
What is your position on fact checking?
· We will always find better solutions from the free market concerning fact checking.
· This bill provides a starting point for discussion on objectivity by updating the statutory language to include a new “objectively reasonable” standard.
Translation: By “free market concerning fact checking,” we mean that it is fundamentally impossible to arbitrate the truth and all people should feel free to choose their own beliefs—except when a platform does it in a manner that is politically inconvenient for us. We may be under the impression the new “objectively reasonable” standard has something to do with the lying liberal mainstream media.
Will this require companies to create more warning labels?
· Putting a warning label on a tweet could constitute “editorializing,” which would in turn open platforms up to potential legal liability. The idea is to make companies think twice before engaging in view correction.
Translation: Actually, we want to make it so that if the president tweets that the only scientifically proven preventative measure that can be taken against the novel coronavirus is letting the love of Jesus Christ into your heart, Twitter somehow is the one who gets sued.
Will this allow hate speech/racism/misogyny to “flourish” online, as some congressional Democrats claim?
· No, but we invite opponents of the bill to discuss their views in the Senate Commerce and Judiciary Committees all the same.
Translation: Go fuck yourself.
Is this legislative push motivated by the President’s social media presence or the 2020 election?
· No. The Commerce Committee has spent the past several years working on Section 230 reform. Repeated instances of censorship targeting conservative voices have only made it more apparent that change is needed.
Translation: Haha of course not why would you bring that up? Also, yes.
It’s only taken about a week for Twitter to become the ultimate symbol of Big Tech (Big T) censoring conservative speech in the minds of Republican politicians. And the GOP outrage has manifested in a telling blunder.
On Wednesday afternoon, the account for the Republican members of the House Judiciary Committee tweeted the following:
In case it’s not immediately apparent from that screenshot, House Judiciary members are implying that Twitter is trying to slap a warning label onto their retweet of an article from Fox News host Sean Hannity. The social network has used warning labels on President Trump’s tweets that spread false information or violate its policies, but that’s not what’s going on here. In fact, the screenshot shows a message that’s simply suggesting that the owner of the account read the article before they tweet it.
Earlier this month, Twitter announced a number of adjustments it’s making to help slow the spread of misinformation in the lead-up to the election. So, in general, you may see some unusual functions on the platform. But the prompt encouraging people to read an article that they haven’t opened on Twitter was announced as a test for Android back in June, started rolling out on iOS this month, and may become a permanent fixture of the platform.
Responding to the GOP account, Twitter Support explained the situation, saying, “We’re doing this to encourage everyone to read news articles before Tweeting them, regardless of the publication or the article. If you want to Retweet or Quote Tweet it, literally just click once more.”
Last week, Twitter came under fire after it banned a link to a New York Post story that was factually suspect on a number of levels. Critics, like myself, often mock conservatives’ incessant claims of victimhood and violations of the First Amendment when it comes to private companies’ moderation policies. But Twitter’s move crossed a line into reckless censorship with broad implications, and its CEO, Jack Dorsey, quickly apologized while announcing a new policy shift.
For once, Republicans had something fairly legitimate to gripe about, but now they’re back into their old ways—willfully misunderstanding the issues and demonstrating an inability to read.
The House Judiciary GOP account’s tweet is still up at the time of writing, no correction was issued, and at no point has it gotten around to properly retweeting that Sean Hannity link it was so enthusiastic about sharing. It has retweeted about 10 other things in the meantime, though, including several complaints about censorship and an exciting trailer previewing the Senate Commerce Committee’s hearing with big tech CEOs next week.
A pair of Democratic lawmakers on Tuesday introduced the latest bill proposing to amend Section 230 of the Communications Decency Act on the grounds that algorithms used by social media platforms—namely, Facebook—have facilitated extremist violence across the country resulting in U.S. citizens being deprived of their constitutional rights.
The “Protecting Americans from Dangerous Algorithms Act,” authored by U.S. Representatives Anna Eshoo and Tom Malinowski, targets only companies with more than 50 million users. Companies that use “radicalizing” algorithms, the lawmakers say, should not be given immunity if they programmatically amplify content involved in cases alleging civil rights violations. The bill additionally targets algorithm-promoted content involved in acts of international terrorism.
In a statement, the lawmakers pointed specifically to a lawsuit brought last month by victims of violence during recent protests against racial injustice in Kenosha, Wisconsin. The suit, reported by BuzzFeed, accuses Facebook of abetting violence in Kenosha by “empowering right-wing militias to inflict extreme violence” and depriving the plaintiffs of their civil rights.
The suit cites Reconstruction-era statutes that the Supreme Court applied unanimously in 1971 against white defendants who had harassed and beaten a group of Black plaintiffs in Mississippi after mistaking them for civil rights organizers.
In Kenosha, a 17-year-old gunman, Kyle Rittenhouse, killed two men and wounded another after traveling across state lines with a semi-automatic weapon to confront demonstrators affiliated with the Black Lives Matter movement. Rittenhouse has been charged with six criminal counts in Wisconsin, including first-degree intentional homicide.
The civil suit brought by, among others, the partner of one of those Rittenhouse killed, also accuses the self-described militia group Kenosha Guard of taking part in a conspiracy to violate plaintiffs’ constitutional rights. A Facebook event started by the Kenosha Guard, which had encouraged attendees to bring weapons, was flagged by users 455 times but was not taken down by Facebook.
In August, Facebook CEO Mark Zuckerberg labeled the company’s failure to take down the page “an operational mistake” during a companywide meeting, BuzzFeed reported.
“I was a conferee for the legislation that codified Section 230 into federal law in 1996, and I remain a steadfast supporter of the underlying principle of promoting speech online,” Congresswoman Eshoo said. “However, algorithmic promotion of content that radicalizes individuals is a new problem that necessitates Congress to update the law.”
“In other words, they feed us more fearful versions of what we fear, and more hateful versions of what we hate,” Congressman Malinowski said. “This legislation puts into place the first legal incentive these huge companies have ever felt to fix the underlying architecture of their services.”
Facebook did not respond to a request for comment.
Section 230 is one of the hottest topics in Washington right now. Passed in 1996 and known widely today as the “twenty-six words that created the internet,” the law is credited with fostering the rapid growth of internet technology in the early 2000s, most notably by extending certain legal protections to websites that host user-generated content.
More recently, lawmakers of both parties, motivated by a range of concerns and ideologies, have offered up numerous suggestions on ways to amend Section 230. The law was intended to ensure that companies could host third-party content without exposing themselves to liability for the speech in said content—giving them a shield—while also granting them the power to enforce community guidelines and remove harmful content without fear of legal reprisal—a sword.
Some have argued that Section 230 has been interpreted by courts far too broadly, granting companies such as Facebook legal immunity for content moderation decisions not explicitly covered by the law’s text. Others have tried using the law as a political bludgeon, claiming, falsely, that the legal immunity is preconditioned on websites remaining politically neutral. (There is no such requirement.)
Gizmodo reported exclusively last month that Facebook had been repeatedly warned about event pages advocating violence, and yet had taken no action.
Muslim Advocates, a national civil rights group involved in ongoing, years-long discussions with Facebook over its policies toward bigotry and hate speech, said it had warned Facebook about events encouraging violence no fewer than 10 times since 2015. The group’s director, Farhana Khera, personally warned Zuckerberg about the issue during a private dinner at his Palo Alto home, she said.
Facebook claimed it was banning white nationalist organizations from its platform in March 2019, but has failed to keep that promise. London’s Guardian newspaper found numerous white hate organizations had continued their operations on Facebook in November 2019, including VDARE, an anti-immigrant website affiliated with numerous known white supremacists and anti-Semites. BuzzFeed reported this summer that Facebook had run an ad on behalf of a white supremacist group called “White Wellbeing Australia,” which had railed against “white genocide.”
The company said in June it had removed nearly 200 accounts with white supremacist ties.
“Social media companies have been playing whack-a-mole trying to take down QAnon conspiracies and other extremist content, but they aren’t changing the designs of a social network that is built to amplify extremism,” Malinowski said. “Their algorithms are based on exploiting primal human emotions—fear, anger, and anxiety—to keep users glued to their screens, and thus regularly promote and recommend white supremacist, anti-Semitic, and other forms of conspiracy-oriented content.”
UC Berkeley professor Dr. Hany Farid, a senior advisor to the Counter Extremism Project, called the Eshoo-Malinowski bill “an important measure” that would “hold the technology sector accountable for irresponsibly deploying algorithms that amplify dangerous and extremist content.”
“The titans of tech have long relied on these algorithms to maximize engagement and profit at the expense of users,” he added, “and this must change.”
Hellfeed is your bimonthly resource for news on the current heading of the social media garbage barge.
It’s hard to believe that the 2020 election is just 18 days away, which may account for the inexorable sense of doom hanging over everything, or the sensation that time is warping like the event horizon of a black hole. Before we’re smashed into an accretion disk, you might as well catch up on the present:
Twitter fucked up big time
This week, the New York Post published a factually inaccurate, hole-riddled article—sourced to a stolen hard drive obtained by ambulatory wineskin Rudy Giuliani—supposedly offering evidence that Joe Biden colluded with his son, Hunter Biden, on corrupt deals in Ukraine. Facebook, where the story was going viral, announced it would take steps to limit its further spread. Twitter went one step further and banned sharing links to the Post’s story entirely.
The response by right-wingers was immediate and predictable: The bans were another example of liberal big tech companies abusing their power and more grounds to terminate their Communications Decency Act Section 230 protections, which protect sites like Twitter from most legal liability for user-generated content or their moderation decisions. That’s noise. The real issue here is that Twitter found itself in a no-win scenario and instead tried to cover its own ass.
Twitter could have ignored the Post link—which would have opened it to criticism by Democrats furious about election interference. It could have also attached warning labels to the links saying the information wasn’t verified under its policies on disinformation, or at least cited that policy in banning the link. That would have still infuriated Republican politicians and pundits, but it would have at least made sense. Instead, Twitter cited a policy supposedly imposing a blanket ban on the distribution of hacked materials.
That was, from a civil-liberties perspective, incredibly troubling. Whether or not Twitter has a First Amendment right to block whatever it wants, investigative journalists routinely republish information that was leaked, hacked, or otherwise obtained without consent of whoever held it in the first place to expose abuses of power and major crimes. Twitter’s hacked materials policy was also always selectively enforced, usually in a way that seemed to reflect outside pressure and with little rationale as to why it was in the public interest. It’s less of a coherent policy than a convenient loophole.
Twitter has since backed down and is no longer blocking the article, with CEO Jack Dorsey saying that “straight blocking of URLs was wrong” and Twitter exec Vijaya Gadde saying the policy would no longer block material unless it “is directly shared by hackers or those acting in concert with them.” That’s commendable, but as colleague Dell Cameron noted, Twitter has yet to actually move to reverse bans on sites like law enforcement data repository @DDoSecrets.
A few weeks ago, the Republican-controlled Senate Commerce Committee unanimously voted to subpoena three tech CEOs: Twitter’s Dorsey, Google’s Sundar Pichai, and Facebook’s Mark Zuckerberg. Democrats reportedly signed on to the subpoenas after committee chair Senator Roger Wicker assured them there’d be time to address antitrust and privacy issues as opposed to ranting about anti-conservative bias.
The date of the hearing is now set for October 28, when we will learn whether the GOP will honor the requests of their Democratic colleagues or whether it will just be five hours of yelling about how few pageviews the Daily Wire got that week or something.
Again with the 230 bullshit
Donald Trump’s ridiculous, almost certainly unconstitutional order tasking the Federal Communications Commission with revoking websites’ Section 230 protections if they “discriminate” against the links of @MAGAhotdog1488 refuses to die. The FCC’s spineless chair, Ajit Pai, announced the agency will seek to “clarify” Section 230 in accordance with the president’s order, which might not mean much unless Trump wins re-election.
Not to be outdone, the president simply called for Section 230 to be revoked wholesale, which makes his order meaningless.
WeChat still not banned
The U.S. government has been trying to ban wildly popular social media app TikTok (unless it’s sold to a U.S. company) and messaging app WeChat, which it insists are both security risks because they’re owned by Chinese firms. WeChat has 19 million users in the U.S., many of them Chinese Americans who use the app to keep in touch with relatives, friends, and business partners abroad.
Oracle’s shady offer to buy a slice of TikTok is still in limbo, but no ban has gone into effect. As for WeChat, a federal judge indicated an injunction against the ban will remain in place, as the Department of Justice has yet to offer a compelling rationale beyond something something Communism.
It’s not a good documentary
The Social Dilemma, Netflix’s hot-button, over-the-top panicked documentary on social media apps, has a lot of holes. Among them is that one of its examples of coronavirus disinformation might have actually been TikTok satire of conspiracy theories, according to the person who uploaded it.
YouTube is becoming a mall
Facebook and Instagram have long had “social shopping” features, which is a euphemism for injecting content with ads that try to lure viewers into adding an item into a cart without ever leaving the page. Now YouTube is reportedly doing exactly that, which may threaten to make it more annoying and ad-laden than ever. One can only pray no one gets the bright idea to load up Netflix shows with this crap.
Good news for once
Twitter confirmed to Gizmodo that Trump will no longer qualify for its definition of a “world leader” if he fails to remain president after Inauguration Day, which means he will no longer be above the site’s rules or be able to try literally governing by tweet. This in theory means the ex-president would be eligible for a Twitter ban, though don’t hold your breath.
The ban list
Here’s who and what got the shaft over the past few weeks:
Trump’s post claiming the coronavirus is no deadlier than the flu got axed from Facebook, which means you’ll have to wait 20 seconds to see him say the same thing somewhere else.
Republicans have been on this warpath before over an ongoing perception of social networks’ “conservative bias” (which typically involves fact-checking disinformation and limiting its spread). The latest slight is Facebook and Twitter’s decision to restrict the spread of the New York Post’s questionably-sourced, disinformation-ridden “bombshell” report on Joe Biden’s son, Hunter. In letters to Mark Zuckerberg and Jack Dorsey, Sen. Josh Hawley (R-Mo.) called on the CEOs to testify before the Senate Judiciary Crime and Terrorism Subcommittee on a supposed violation of FEC rules by contributing something “of value” to support presidential campaigns. This assumes that providing Donald Trump a platform to run campaign ads that would otherwise violate their own terms of service isn’t considered valuable.
Historically, Republicans have believed that they deserve to see Section 230 repealed on the misguided assumption that Section 230 protects platforms because they are not publishers. The go-to portion, Section 230(c)(1), reads:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Time and again, they seem to wrongly interpret this to mean that if a platform decides to check falsehoods and limit propaganda, the platform has lost its Section 230 privileges because it’s now in the business of editing, which makes it a publisher. But it is not. Facebook is a business, and businesses can refuse service to people for all kinds of reasons, especially if they’re harmful, just as brick-and-mortar shops can turn away a customer who refuses to wear a mask during a pandemic. This is why Facebook and Twitter have terms of service, even ones that they’ve bent considerably for the president.
Pai, too, invoked the idea that social media companies should follow the same rules as “other media outlets.”
“Social media companies have a First Amendment right to free speech,” Pai concluded in his statement. “But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”
But this is where Republicans typically discard the publisher comparison. Literally considering social media companies publishers, with the right to select whatever content they choose to run, and legal liability for libelous claims, is the last thing they want. (This, on the other hand, is closer to what Joe Biden would like to see: an amended Section 230 which would force Facebook to remove Trump’s falsehoods about his son.)
This has been reflected in recent attacks intended to limit another portion of Section 230 exemptions. Section 230(c)(2) protects platforms from civil liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
Those who claim censorship on the part of Twitter and Facebook have argued that Section 230’s immunity does not apply to content stricken from a site by its owner if it doesn’t fall into one of these categories: material that is overly gory, threatening, or pornographic. Others draw attention to the very end of Section 230(c)(2), the reference to “otherwise objectionable” material, hoping to portray this as a catch-all. But traditionally that’s not how it works.
When a law includes a list of specific things like “obscene, lewd, lascivious” content, it’s understood adding a vague term at the end doesn’t mean “and anything else under the sun.” A general term, such as “or otherwise objectionable,” applies only to the same class of things previously mentioned. (If a law reads, “apples, oranges, pears and other things,” you can’t interpret “other things” to mean “elephants.”)
Narrowing that catch-all was the aim of a bill introduced last month by Sen. Lindsey Graham (R-S.C.), Sen. Roger Wicker (R-Miss.), and Sen. Marsha Blackburn (R-Tenn.), which proposes to whittle the phrase “otherwise objectionable” down to “promoting self-harm, promoting terrorism, or unlawful.” It’s pretty clear that self-harm, terrorism, and illegal content already qualify as “objectionable”; rather than adding stipulations, the bill removes the leeway needed to cover the unforeseeable breadth of harmful content that comes with each fresh news cycle, like conspiracy theories and health misinformation.
We can guess that Pai’s rulemaking will similarly limit moderation powers, since his statement focuses tightly on concerns that Section 230 has been broadly interpreted to a fault. Specifically, he paraphrases Supreme Court Justice Clarence Thomas, who wrote in a denial of certiorari this week that lower courts have “long emphasized nontextual arguments when interpreting [Section 230], leaving questionable precedent in their wake.” In other words, Thomas believes that the lower courts have strayed too far from the statute’s literal meaning; as he puts it, “reading extra immunity into statutes where it does not belong.”
Thomas first takes issue with a 1997 Fourth Circuit case, writing that the appellate court concluded Section 230 “confers immunity even when a company distributes content that it knows is illegal.” The petition denied by the court this week involved a company that sought immunity under Section 230 after it was accused of intentionally reconfiguring its software to make it harder for consumers to access a second company’s product; Thomas wrote that he agreed with the ruling of the Ninth Circuit, which found the immunity “unavailable” against allegations of anticompetitive conduct.
Section 230 was written to shield website operators from liability for defamatory statements made by their users; however, Thomas argues that the definition of user-generated content—or, as the statute describes it, content “provided by another information content provider”—has been misconstrued by courts to include content website owners have had a hand in creating. He also makes clear that he believes Facebook and other websites can, and should, be held liable for any user-generated content they selectively promote (and he appears not to differentiate between a Facebook employee intentionally boosting a post and an algorithm that does so automatically).
Based on Pai’s statement chiding others for advancing “an overly broad interpretation” that, he claims, often wrongly shields social media companies from liability in particular, it’s likely that whatever rule he attempts to pass will focus mostly on emphasizing, like Thomas, a need to adhere more to the literal meaning of Section 230’s text, rather than the so-called “spirit of the law.”
Section 230 was passed in 1996, when gore, porn, and harassment were really the only types of content that needed taking down. It did not, for example, take into account the deluge of disinformation now plaguing social media sites, which did not yet exist. Regardless, even if it’s determined that Section 230 does not grant sites like Facebook immunity for certain moderation decisions, that doesn’t mean they’re automatically liable either.
Less than a week before the 2020 presidential election, three of the biggest names in tech—Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey—will testify before the Senate Committee on Commerce, Science, and Transportation about a longstanding law that protects websites from liability for user-generated content.
The committee unanimously voted to subpoena the men on Thursday. They’re scheduled to testify on Oct. 28, according to committee aides who spoke with Politico on Friday on the condition of anonymity. While the subpoenas are ready to go out, they will not be formally issued because the CEOs have voluntarily agreed to appear before the committee, one aide told the outlet.
Their testimony will address Section 230 of the Communications Decency Act, a key legal shield that protects tech companies both large and small from liability for most of the content their users post online. Codified more than 20 years ago, Section 230 has become a flashpoint over the last few years for both political parties, with Republicans, including President Donald Trump, contending without evidence that major tech companies quietly censor conservative content and Democrats arguing that websites should lose their Section 230 protections entirely for hosting misleading political ads, among other offenses. According to Politico, the hearing will also touch on “data privacy and media consolidation.”
The hearing date, which falls just six days before November’s contentious presidential election, was reached after lengthy deliberations, a committee aide said. The tech CEOs originally pushed for a more far-off date, but after Republican committee members refused, they agreed to testify voluntarily if the subpoena authorization vote passed.
“On the eve of a momentous and highly-charged election, it is imperative that this committee of jurisdiction and the American people receive a full accounting from the heads of these companies about their content moderation practices,” the committee’s chairman, Sen. Roger Wicker, said during Thursday’s session per the Wall Street Journal.
Both political parties are pushing for substantial changes to the legislation. Some proposals that have gained backing from both sides of the aisle include revisions to hold tech companies liable for user-generated content involving child exploitation or threats of violence, according to Bloomberg Businessweek. The PACT Act, a bill that would compel tech companies to be more transparent about their moderation policies and remove illegal content, has also garnered bipartisan support.
This will be the second time this year that top tech executives will testify before Congress. Over the summer, the heads of Amazon, Apple, Google, and Facebook appeared before the House Judiciary Committee to address federal antitrust concerns.