YouTube has deleted President Donald Trump’s latest video over concerns that it could inspire violence, the video sharing site announced late Tuesday. Comments are being blocked on all of the president’s remaining videos and his account has been suspended for at least the next seven days. Notably, the inauguration of president-elect Joe Biden is seven days away.
“After review, and in light of concerns about the ongoing potential for violence, we removed new content uploaded to Donald J. Trump’s channel for violating our policies,” YouTube announced on Twitter. “It now has its 1st strike & is temporarily prevented from uploading new content for a *minimum* of 7 days.”
“Given the ongoing concerns about violence, we will also be indefinitely disabling comments on President Trump’s channel, as we’ve done to other channels where there are safety concerns found in the comments section,” YouTube continued.
Trump’s suspension on YouTube follows other major social media platforms banning the president outright, including Twitter and Facebook. The tech companies booted Trump only after he incited a coup attempt at the nation’s Capitol on Jan. 6 that left five people dead, including a Capitol police officer.
The video that was removed from Trump’s account on Tuesday was a clip from C-SPAN showing the president talking to reporters while departing the White House on his way to Texas. The video was being actively dissected in the YouTube comments section by Trump followers who seemed to believe it was a message to commit more violence. The commenters were particularly fixated on Trump’s phrases, “there is always a countermove,” and “our journey is just beginning.”
President Trump refused to take responsibility for the insurrection he incited last week and said Tuesday that a move to impeach him is inspiring more “anger” among his supporters. At least five House Republicans, including Republican Rep. Liz Cheney of Wyoming, have now broken ranks and have said they’ll support impeachment.
Capitol Police warned House Democrats on Monday there are at least three worst-case scenarios they’re planning for in the coming days leading up to Joe Biden’s inauguration, including one plan to encircle the White House to “protect” Trump. Other plans reportedly include a plot to assassinate Democratic members of Congress as well as any Republicans who don’t support the president, according to HuffPost.
Disturbingly, it appears at least three Republican members of Congress were intimately involved in plans to descend on the Capitol, according to reporting from the Intercept, including Rep. Andrew Biggs of Arizona, Rep. Paul Gosar of Arizona, and Rep. Mo Brooks of Alabama.
Some members of Congress even refused to go through metal detectors on Tuesday, a concern not just for the immediate safety of the U.S. Capitol building but for the Jan. 20 inauguration, when armed insurrectionists are expected to arrive. Rep. Marjorie Taylor Greene, a QAnon supporter and Trump neo-fascist, refused to have her bag inspected after she set off the metal detectors and wore a mask on the House floor that read, “Molon Labe,” which means “come and take it.” The phrase is popular with gun-obsessed Republicans and is a clear call to violence.
Gizmodo reached out to YouTube about more details on Trump’s suspension from the platform but the company declined to explain anything more on the record.
Work on hacking and customizing the new Nintendo Game & Watch has progressed quite a bit since mid-November, but this morning ‘stacksmashing’ woke up to a notice from YouTube that Nintendo had made copyright claims on two of his G&W hacking videos and as a result, they were no longer viewable on YouTube:
According to ‘stacksmashing’ who spoke with Gizmodo this morning, one of the videos features only in-game footage of the version of Super Mario Bros. included with the new Nintendo Game & Watch—footage that countless YouTube reviewers have also included in their reviews and hands-ons of the device—while the other features the handheld modified to play the NES version of The Legend of Zelda. Prior to these claims, Nintendo hadn’t reached out to ‘stacksmashing’ in any way about their YouTube videos or the G&W hacking content they share via their Twitter account.
In response to the claims, ‘stacksmashing’ has edited both of the videos in question and is filing disputes in an attempt to have them restored to YouTube. Gizmodo has reached out to Nintendo for comment on why copyright claims were made against these two specific videos when the gameplay footage they both include has also been featured on countless other gaming-focused channels on the site. One of the videos taken down does include instructions on how users can back up the G&W’s included firmware (allowing them to revert back to it at any time), including guides to using a couple of scripts, but no ROM files are shared. The copyright claims made by Nintendo specifically refer to the use of the game footage featured in both videos.
Nintendo has long taken a strong stance against hacking its hardware and consoles to circumvent security features and facilitate game piracy (or accessing games that have been region-locked) but it’s not like the new Game & Watch has the processing power to allow gamers to enjoy the latest and greatest Switch titles. And in these videos ‘stacksmashing’ is in no way advocating that anyone interested in hacking the G&W and expanding its capabilities should also download ROM files for titles they don’t already own. Hacking the new Nintendo Game & Watch also doesn’t in any way hinder sales of the hardware. If anything, more people will be encouraged to buy it knowing it could potentially play more than a disappointing roster of just three included games.
Hellfeed is your bimonthly resource for news on the current heading of the social media garbage barge.
Last week, Donald Trump riled up a mob of his supporters to storm the U.S. Capitol while members of Congress were voting to certify the results of the 2020 elections—a de facto act of sedition that failed to bring about his fantasized coup and resulted in five deaths.
As a reward for his efforts, social media sites which theoretically could have punished the president or his wild mob at pretty much any time in the past few years if not for cowardice—perhaps when he called for Muslims to be banned from the country, or lied to the public about the novel coronavirus, or spread lies about voter fraud, or any other number of rule violations—have finally gotten irritated enough to do something about it. Trump is now permanently suspended from Twitter, his Facebook account is locked down, and he’s been banned from Snapchat, TikTok, and Pinterest.
Republicans of the extremely online variety have long swallowed the idea that the likes of Facebook and Twitter secretly have it out for them—hook, line, and sinker—despite the fact that those companies have coddled them for fear of backlash for years. (Facebook even reportedly throttled traffic to left-wing media in 2017 in favor of their conservative counterparts.) It probably won’t help their suspicions that tech companies are now scrambling to look like they didn’t have any role enabling last week’s insurrection.
For this week’s edition of Hellfeed, we’re gonna wildly speculate about where the president will satiate his need to post from here on out and tally the social casualties among his supporters so far.
Where’s our big banned president gonna land next?
The smart money would have been on conservative nightmare factory Parler, the chud-friendly, mostly unmoderated Twitter knockoff that surged earlier this year after being endorsed by a slew of Fox News talking heads and other right-wing media personalities. Parler’s whole raison d’être was to provide a safe space for MAGA fanatics banned from other sites for espousing their horrible beliefs, and it’s hardly a stretch to ponder whether the company was intentionally posturing itself to be Trump’s backup echo chamber.
Sadly, Parler’s ambitions have been kneecapped by Apple and Google, which booted its network from their respective app stores over the weekend, and Amazon, which decided to stop hosting the network on Monday. In a new lawsuit against Amazon, Parler accused the company of antitrust violations, arguing it was backstabbed at precisely the time it stood to benefit most from any exodus from other sites.
That leaves Trump with few options. He can’t just take over another Twitter account; he did that and it simply resulted in @WhiteHouse, @POTUS, @TeamTrump, and even an account belonging to one of his campaign’s digital directors either being subjected to new restrictions or suspended outright.
As of Monday night, Trump’s still up on YouTube, but he hasn’t posted anything since Jan. 7—and unless the president spontaneously decides to start livestreaming or uploading selfie videos, he’d have to rely on his aides to push anything out there. With Parler down, remaining alternatives quickly start getting pathetic or worse. Trump could humiliate himself by turning to fringe sites like Facebook clone MeWe, for example, penning missives on LinkedIn, or firing up an encrypted chat service like Telegram. Then there’s the worst-case scenario: Fox Business is uncritically talking up Gab, a white supremacist hub that pretends to bill itself as a free speech site. (Gab was the preferred social network of a neo-Nazi terrorist who killed 11 people and wounded a number of others at Tree of Life synagogue in Pittsburgh, Pennsylvania, in 2018.)
It’s theoretically possible, we suppose, that the president could join Fetlife.
The ban list
Here’s a partial list of the consequences for pro-Trump pundits, neo-fascist agitators, and various other right-wingers who are now finding themselves unwelcome guests. It’s true that in one sense this tally is emblematic of the alarming power of a relative handful of tech companies to dominate who says what online. It’s also true that until now, those tech companies used said power to let these folks run rampant, and are flexing only because they’re coming dangerously close to facing consequences themselves.
Please start playing “Mad World” by Gary Jules before reading.
Payment processor Stripe and e-commerce firm Shopify blacklisted our banned president’s “campaign” website to prevent him from commissioning new funds for… whatever the fuck he’s doing now.
Twitter said it has suspended some 70,000 accounts involved in the promotion of QAnon, the conspiracy theory that asserts Democratic politicians and celebrities are part of a child-raping cabal of Satanic overlords. This happens to explain why Rep. Matt Gaetz, Fox News host Brian Kilmeade, White House coronavirus adviser/truther Dr. Scott Atlas, former Trump press secretary Sarah Huckabee Sanders, and a bunch of other horrible people spent much of the weekend screaming about their falling follower counts.
Twitter’s ban on content promoting the pedophile-devil-antifa theory snagged former National Security Adviser Michael Flynn, Trump’s depraved lawyer Sidney Powell, and former 8chan admin/fake voter fraud expert Ron Watkins. Also gone are a number of prominent QAnon-loving weirdos going by monikers like “Praying Medic” and “Tracy Beanz.”
Chat app Discord purged a server for “The Donald,” a notorious community of Trump worshipers banned from Reddit that has since migrated to their own hellhole of a site.
GoFundMe has implemented a blanket ban on all fundraisers “for travel expenses to a future political event where there’s risk of violence by the attendees,” i.e. Trump rallies.
Airbnb says it will review all reservations placed in DC around the Jan. 20 inauguration and revoke any placed by known members of hate groups, as well as anyone arrested in relation to the Capitol riot.
You can no longer pay Gavin McInnes, the downwardly mobile Vice News co-founder who later started the Proud Boys, to wish you happy birthday on celebrity video greeting app Cameo.
Washed-up corpse Steve Bannon’s podcast War Room Pandemic was previously banned from a number of sites after the eponymous host called for the executions of Dr. Anthony Fauci and FBI director Chris Wray. It is now no longer on YouTube.
DLive, a livestreaming platform whose top users include a slew of white supremacists and fascists, is trying to lose some heat after those users inevitably took advantage of it to profit from livestreams of the Capitol riot. Recent bans include hatemongering talk show host Nick Fuentes, Nick Ochs of the street-brawling Proud Boys gang, and boring far-right troll Tim “Baked Alaska” Gionet. (Baked Alaska is perhaps best known for responding to his Twitter ban in 2017 by going to an In-N-Out parking lot and raving at random customers.)
The president’s attorney, Lin Wood, who is kind of like if the Confederates had their own Lionel Hutz, was banned from Twitter after calling for Trump supporters to join the Capitol riots to “fight for our freedom” and “pledge your lives, your fortunes, & your sacred honor.”
D-d-double kill: Parler subsequently took down one of Wood’s posts calling for Donald Trump to teamkill Mike Pence.
You may know cartoonist Ben Garrison from his prolific output of terrifying pro-Trump comics (a rough analogue of a mind capable of producing them would be the brains of Willy Wonka, David Cronenberg, and Mussolini smushed in a blender with two tabs of bad acid). He’s gone from Twitter for penning comics celebrating the Capitol riot.
Libertarian politician Ron Paul claims he has been (temporarily) blocked from managing one of his Facebook pages for “repeatedly violating community standards,” but has provided no further context on why exactly that is.
One of the editors of hysterical conservative site The Federalist went viral claiming that Twitter nefariously censored the hashtag “#1984.” Twitter does not support hashtags made entirely of numbers.
Conservative radio host Mark Levin claimed he would be suspending his own Twitter account in protest of Trump’s ban in order to move to Parler and Rumble. Levin never actually did that.
GOP Representative Devin Nunes complained to millions of Fox News viewers this weekend that Republicans have “no way to communicate” after the Parler crackdown and still has a Twitter account with 1.2 million followers.
YouTube decided to add to that list on Friday when it cracked down on the channel for War Room, the podcast of former White House chief strategist Steve Bannon. He is currently accused of defrauding hundreds of thousands of people who donated millions to a crowdfunding campaign to build Trump’s border wall. The ban came into effect hours after the president’s lawyer, Rudy Giuliani, appeared on Bannon’s podcast and denied that Trump incited an angry mob to go to the Capitol to interrupt the certification of President-elect Joe Biden’s victory by Congress, per Business Insider.
Giuliani pinned the blame for the riot on the Democratic Party. The incident left five people dead and several police officers injured.
“Believe me, Trump people were not scaling the wall,” Giuliani said. “So, there’s nothing to it, that he incited anything. Also, there’s equal if not more responsibility on the fascists who are now running the Democratic Party, who have imposed censorship on these people, who have been singling them out for unfair treatment since the IRS started going after conservative groups.”
YouTube told CNET that it used its community guidelines’ strike system when evaluating Bannon’s channel. Under that system, a channel first receives one warning and then up to three strikes. A warning is typically issued the first time YouTube detects a violation, when the platform assumes its policies weren’t broken intentionally; subsequent violations result in strikes.
Channels that receive strikes lose their ability to post or live stream, among other penalties. Three strikes in the same 90-day period will result in the channel’s permanent removal from YouTube. On Thursday, one day after the riot at the Capitol, YouTube said that any channel with content that violates its policies would receive a strike beginning that day.
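The warning-plus-three-strikes rule described above can be modeled as a small state machine. This is only an illustrative sketch of the policy as CNET describes it (the class, method names, and return strings are all hypothetical), not YouTube’s actual implementation:

```python
from datetime import date, timedelta

STRIKE_WINDOW = timedelta(days=90)  # strikes only count within a rolling 90-day window
MAX_STRIKES = 3                     # a third active strike means termination

class Channel:
    """Toy model of the warning-then-strikes moderation policy."""
    def __init__(self):
        self.warned = False
        self.strikes = []   # dates of currently active strikes
        self.terminated = False

    def record_violation(self, when: date) -> str:
        # Expire strikes that fall outside the rolling 90-day window.
        self.strikes = [d for d in self.strikes if when - d <= STRIKE_WINDOW]
        if not self.warned:
            # First detected violation gets a warning, not a strike.
            self.warned = True
            return "warning"
        self.strikes.append(when)
        if len(self.strikes) >= MAX_STRIKES:
            self.terminated = True
            return "terminated"
        return f"strike {len(self.strikes)}: uploads suspended"

ch = Channel()
print(ch.record_violation(date(2021, 1, 7)))   # warning
print(ch.record_violation(date(2021, 1, 8)))   # strike 1: uploads suspended
print(ch.record_violation(date(2021, 1, 9)))   # strike 2: uploads suspended
print(ch.record_violation(date(2021, 1, 10)))  # terminated
```

Because the window rolls, a channel that picks up strikes slowly enough never reaches three at once, which is why repeat offenders can linger for years before a burst of violations finally triggers termination.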
“In accordance with our strikes system, we have terminated Steve Bannon’s channel ‘War Room’ and one associated channel for repeatedly violating our Community Guidelines,” a YouTube spokesman told CNET in a statement.
The platform told CNET that War Room’s violations were related to its policy announcement on Thursday. YouTube said it had issued two strikes against Bannon’s channels for videos that violated its rules, although it didn’t specify which videos they were. War Room received another strike in November when Bannon called for Dr. Anthony Fauci, the nation’s leading infectious disease expert, and Christopher Wray, the FBI director, to be beheaded and for their heads to be put on spikes.
Gizmodo has reached out to YouTube to confirm why it took down Bannon’s channel. Searching for “War Room” on YouTube did not bring up the channel’s page. We’ll make sure to update this blog if we hear back.
On Saturday, Bannon addressed the ban in his audio podcast, which I found on Apple Podcasts. He indicated that there would be legal confrontations and told people where else they could find the show.
“If you have been watching us on YouTube, you’re not watching us today,” Bannon said.
Virtual reality is a cool medium for a lot of things. One of those is art—programs like Tilt Brush let artists like The Sabby Life create three-dimensional art pieces entirely in VR. It’s not an easy way to create—what an overwhelming canvas!—but when the results work out, it’s quite something. This piece from The Sabby Life, as shared by Disney’s YouTube channel, captures the wonder of music and the joy of New York City as communicated in Pixar’s latest. It’s extremely good.
This piece, which I do believe was created in Tilt Brush, shows both the process of making the art and the art itself. The result is something lovely: sketchy and vivid, and it uses its three-dimensional space very well. I’d love to actually check it out in VR.
Yesterday, a nauseated and tired public witnessed a clear, on-the-ground, real-time feed of Trump supporters committing countless potential felonies and misdemeanors. They saw it not through security footage or journalists’ reports but mostly from the culprits themselves, who gleefully livestreamed and tweeted from the Capitol building as if it was a field trip. As the high wore off, tweets and videos vanished—some deleted by the platforms themselves, others likely pulled by slack-jawed Trumpers covering their own asses.
Fortunately, archivists familiar with digital mass takedown events had the foresight to immediately crowdsource the evidence of rioting, and potential destruction of government property, weapons-related offenses, and unlawful entry, to name a few examples.
An extensive directory can be found on the New Zealand-based file hosting service MEGA; it’s the miraculously tidy result of a miles-long thread on the datahoarder subreddit, which amassed over 1,700 comments abounding with links to tweets and videos cross-posted all over the internet. A parallel archive mostly containing the same content can be found on the Prague-based search engine and data archive Intelligence X. (While redditors need to rely on MEGA, a third-party platform that can remove content if it likes, Intelligence X owns its own infrastructure. Intelligence X specifically preserves content that might be wiped elsewhere—which can mean Hunter Biden’s emails and private Bitcoin keys.) The combined dossiers include MAGA rioters’ posts on DLive, Facebook, YouTube, and Twitter, some of which are still live on those platforms at this writing.
While platforms generally look better without these posts stoking government overthrow, yesterday made abundantly clear why laypeople need to preserve this content before social media companies remove it. It’s useful to know the face and badge number of a law enforcement officer taking a selfie with a rioter, for example.
The relatively consequence-free siege feels similar to the infamous white supremacist rally in Charlottesville, Virginia in 2017, when organized street brawlers injured dozens and a neo-Nazi terrorist rammed his car into a crowd of counter-demonstrators, killing protester Heather Heyer and wounding over a dozen others. Donald Trump failed to denounce that violent mob, too. The far-right groups’ brazen, publicity-hungry tactics directly resulted in many of their members being doxxed in the aftermath. Numerous attendees, many of whom had previously attracted the attention of anti-fascist groups and/or had left extensive trails of digital evidence, were easily identified from footage by both activists and the professional media. Some lost their jobs, while a number were prosecuted. Others simply lost the anonymity that allowed them to comfortably espouse violent, bigoted beliefs without consequences.
In this case, maybe the most self-incriminating evidence originated on DLive—a gaming platform and known alt-right haven—which was quick to remove some of yesterday’s streams. Popular right-wing streamer BakedAlaska, who recently tested positive for covid-19 and is banned on virtually every other platform, offered a full display of himself and fellow rioters damaging government property, and breaking into an office and a conference room while cops milled around like they were on recess. Fellow traveler Zykotik documented himself and others outside, stomping a pile of camera equipment, and shouting “this is the real news media!” and “fuck fake news!” (This is still viewable on DLive, and you can see a Bloomberg reporter’s view of the destruction here.)
While we wait to see whether law enforcement plans to pursue charges, archivists have made sure to keep unmistakable photo and video evidence available for public scrutiny. Founder and CEO of Intelligence X, Peter Kleissner, told Gizmodo via email that the company “sprung into action at midnight local time” in Prague as they noticed Twitter and Facebook removing posts. He says his company has now gathered around 1,000 files.
“Shame on Facebook for deleting evidence related to yesterday’s riots while keeping up accounts and videos of violence and extremism (including ISIS propaganda and QAnon content) for years,” Kleissner wrote. “Didn’t Mark [Zuckerberg] say they ‘Won’t Be ‘Arbiter of Truth’? While censorship is a complicated topic, one thing is for sure: Mark is usually on the wrong side.” Kleissner believes that these self-incriminating acts should be preserved for historical purposes. “Thinking long-term, people in 2121 will hopefully benefit and appreciate these efforts that we take in this moment,” he said. “Looking back in history and the 1812 breach of the Capitol as well as other events such as the 1933 German Reichstag fire highlight the need for accurate and original data in historical context.”
In the immediate future, the act of group documentation can also backfire disastrously for far-right groups: all it takes is one security slip-up or a few too many revealed personal details for police, activists, and the media to compile enough information to identify the individual behind a username or expose their poorly laid plans.
For example, left-wing media collective Unicorn Riot has repeatedly leaked Discord chat logs detailing the inner workings of white supremacist groups such as Identity Evropa, Atomwaffen Division offshoot Feuerkrieg Division, the now-defunct Traditionalist Workers Party, and the National Socialist Legion, as well as a bevy of others based in the Pacific Northwest. In 2019, an unknown individual or individuals leaked the SQL database for Iron March, a message board that served as one of the major hubs of the white supremacist movement until its dissolution in 2017. That data exposed numerous individuals who had hitherto kept their offline identities hidden, including a Royal Canadian Navy sailor who had advertised arms deals to other users, a U.S. Navy sailor who had previously recruited members for Atomwaffen, and a prison guard captain at a Nevada detention center used to house federal immigration detainees who had attempted to create a white nationalist group.
While Twitter has treated Trump’s account as a national emergency and temporarily locked him out, the company seems to be using a lighter touch on people who’ve glorified rioters. Though many of the more incriminating first-person tweets have been removed, other viral tweets spreading conspiracies and cheering on the insurrection remain up.
After complaints yesterday, YouTube told Gizmodo via email that it has demonetized the YouTube channel for Elijah Schaffer—a right-wing Blaze TV reporter who tweeted an image of an open inbox on a computer inside Nancy Pelosi’s office—and suspended him from the Partner Program, as the channel doesn’t follow YouTube’s advertiser-friendly guidelines. YouTube told Gizmodo that it’s looking into other posts that are still live.
Facebook, which has blocked Trump until the end of the presidential transition, and DLive were not immediately available for comment.
After years of warnings that YouTube is fostering the political radicalization of a generation by allowing debunked conspiracy theory content to thrive on its platform, the biggest streaming video site on the web seems to be feeling a little jumpy. Following the Wednesday attack by rioters on Capitol Hill, YouTube says it will be taking a less forgiving approach to users who spread disinformation.
Donald Trump’s decision to fire up a crowd of thousands of supporters to descend on the congressional session to certify the presidential election has thrown social media companies into panic mode. YouTube first responded by removing a video the president posted in which he made false claims about the election and praised the people responsible for a riot that left four people dead and constituted the first breach of the Capitol since the War of 1812.
Now, YouTube says any user who violates its Presidential Election Integrity Policy will receive an automatic first strike on their channel. The offending video will be removed, and the user will be blocked from uploading new videos or live streaming for one week. If the user receives three strikes, they’ll be permanently banned from YouTube.
In December, YouTube implemented its new election policy in response to the growing cesspool of disinformation claiming the race was stolen from Trump. Originally, the policy included a grace period in which users would receive a warning before the strike process began. The grace period was planned to expire on January 21st, the day after President-elect Joe Biden’s inauguration. In a statement today, the company said that “due to the disturbing events that transpired yesterday, and given that the election results have now been certified” it’s ending the grace period early.
President Trump has been quiet today, though his temporary suspension from Twitter has been lifted. Twitter is his favored platform but the company warned if the president violates its policies again, he will be banned permanently. In the last 24 hours, we’ve also seen Trump banned from Facebook indefinitely, locked out of Snapchat, and suspended from Twitch. By all accounts, he can still launch a nuclear weapon.
When it comes to the rules of the internet, you can always count on two things to be true: crowdsourced projects will fly off the rails, and YouTube commenters are the worst. Despite these foundational principles, YouTuber Sean Hodgins has thrown caution to the wind and hacked together a little experiment that allows people like you to determine the look of the thumbnail on one of his clips.
In his latest video, Hodgins explains that his dissatisfaction with the thumbnail on his last clip inspired him to open up the choice to viewers. In order to do that, he’s integrated a Python script with the YouTube API that allows users to paint on a digital canvas pixel-by-pixel. He also made a cartoonishly over-the-top panic button that he can slap at any time if he needs to stop the posting because something’s inappropriate or malfunctioning.
All you need to know to start adding pixels to the thumbnail is the formula (x, y, r, g, b). X and y represent your x- and y-axis on the canvas. The top left is (0,0) and the bottom right is (838,563). R is red, g is green, b is blue, and determining the mix of each value creates the color of your pixel. You can go through the trouble of manually figuring all that out or you can just use Hodgins’s handy tile-maker tool that allows you to draw more naturally. After you’ve drawn your pixels in the tool, copy the code, and throw it in a comment.
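Hodgins hasn’t shared his script’s internals here, but a comment in that (x, y, r, g, b) format could be validated with a few lines of Python. Everything below—the function name, the regex, the rejection rules—is a hypothetical sketch, not code from his project:

```python
import re

# Canvas bounds from the video: top left is (0, 0), bottom right is (838, 563).
MAX_X, MAX_Y = 838, 563

# Matches five comma-separated integers inside parentheses, e.g. "(10, 20, 255, 0, 0)".
PIXEL_RE = re.compile(r"\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)")

def parse_pixel(comment: str):
    """Parse an '(x, y, r, g, b)' comment into (x, y, (r, g, b)), or None if invalid."""
    m = PIXEL_RE.search(comment)
    if not m:
        return None
    x, y, r, g, b = (int(v) for v in m.groups())
    if x > MAX_X or y > MAX_Y or max(r, g, b) > 255:
        return None  # outside the canvas, or not a valid 8-bit color channel
    return x, y, (r, g, b)

print(parse_pixel("(10, 20, 255, 0, 0)"))  # (10, 20, (255, 0, 0))
print(parse_pixel("nice video!"))          # None
```

A script like this would poll new comments via the YouTube API, run each one through a parser along these lines, and paint any valid pixels onto the thumbnail image before re-uploading it.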
It’s important to keep in mind that this process might change or fail entirely. Hodgins has already added an update removing the “$$” command requirement before comments in the hope of avoiding erroneous flags from YouTube’s spam filters. He also upped the time between a comment being posted and added to the thumbnail from five minutes to 10 minutes. It’s been a long time since I’ve tried to make anything with YouTube’s API, but I still have nightmares about it. All I know is my comment appeared on the thumbnail with no issues. Also, the video embed doesn’t seem to want to play with Kinja, so head to YouTube to check it out.
Below you can see what the progress on the thumbnail looks like at the time of writing. Do your thing, internet.
Lawmakers in China are considering new legislation that would impose fines against anyone who creates videos where people eat large quantities of food or binge-drink, according to a new report from Chinese state media outlet China News. The proposed media rule, part of broader legislation to discourage food waste, would also allow restaurants in China to charge extra for customers who don’t finish their meals.
The Chinese government under President Xi Jinping started a campaign called the Clean Plate Campaign earlier this year in an effort to waste less food domestically. Xi said over the summer that the goal was to create a social order where “waste is shameful and thriftiness is applaudable.” Fines for breaking the new law would range from 10,000 yuan to 100,000 yuan, or roughly $1,530 to $15,300 in U.S. currency.
Binge-eating and drinking videos, known by the word “mukbang,” became popular in China in recent years. Mukbang is a combination of the Korean words for “eating” and “broadcast,” with viral stars like Pangzai doing precisely that and enjoying internet fame with English-speaking audiences on platforms like Twitter and YouTube.
What exactly does Pangzai do? He eats a lot of food, smokes cigarettes, and drinks alcohol stylishly—all while sometimes telling stories about his life, his home, and his family. It’s all much cooler and more visually compelling than it sounds, but Pangzai announced in August that he would no longer be making videos due to a government crackdown. Amazingly, Pangzai reemerged this month on Twitter.
The draft legislation, first reported in English by Sixth Tone, was submitted to China’s Standing Committee of the National People’s Congress on Tuesday, and covers “radio stations, television stations, and online audio and video service providers.” An audio-only version of mukbang sounds kind of gross, but who are we to judge?
Traditionally, it’s very polite in China to serve guests large portions of food, a way to show generosity that would be familiar to many Americans. But that generosity is creating a culture of waste, where an estimated 17 million tons of food in China gets thrown out every year. That much food could feed an additional 30 million people each year at the very least—roughly the entire population of Texas—according to a recent study from the Chinese Academy of Sciences and the World Wildlife Fund.
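As a rough sanity check of those figures, assuming the waste is measured in metric tons and an illustrative intake of 1.5 kg of food per person per day (a number of our choosing, not one from the study):

```python
# Back-of-the-envelope check: can ~17 million metric tons of wasted food
# per year plausibly feed ~30 million people?
wasted_kg = 17_000_000 * 1000        # 17 million metric tons, in kilograms
per_person_per_year = 1.5 * 365      # assumed kg of food per person per year
people_fed = wasted_kg / per_person_per_year
print(round(people_fed / 1_000_000, 1))  # ~31.1 million people
```

Under that assumption the arithmetic lands right around the study’s 30-million-person figure, which is why the waste must be on the scale of tons rather than pounds.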
While we’re sympathetic to the idea of reducing food waste, this does seem to be another area where the Chinese government has overstepped. If you’ve ever seen a video by the 34-year-old Pangzai, you know there’s no food or drink going to waste there.
Pangzai is a champ who’s simply sharing his life with the internet, even if he’s swirling a “tornado beer” while he’s doing it.
In fact, this blogger initially assumed Pangzai’s videos might be officially approved by the Chinese government when they started going viral last year, given their popularity on western social media platforms. Pangzai is a middle class guy in China who arguably helps internet users outside of the country relate to a culture they often only hear about negatively in the mainstream American press. Alas, Pangzai wasn’t engaging in state-sanctioned Tornado Beer Diplomacy with the rest of the world. If this new law passes, he’s officially a renegade.
China’s cyber police reportedly shut down over 13,000 mukbang video accounts recently in a bid to stamp out food waste, arguably pushing the U.S. and China further apart during a contentious period in the New Cold War. We need more mukbang, not less, if we want average people on both sides of this stupid conflict to understand each other better and avoid the mistakes of the first Cold War.
It’s not clear what will happen to Pangzai’s Twitter account if the new anti-binge law passes, but he’s still doing his thing after his triumphant return on December 1.
We almost didn’t survive that bizarre period from 1945 to 1990, when the Soviet Union and the U.S. were always 20 minutes away from global destruction, given the nuclear near-misses we’d only learn about decades later. But today we have the internet and mukbang to help bridge the cultural divide. We should keep it that way.
In July, Harry “hbomberguy” Brewis shared a video on his popular YouTube channel called “RWBY Is Disappointing, And Here’s Why.” The two-and-a-half-hour video — a sharp, detailed critique of the cartoon RWBY — was the result of a lot of work by Brewis and his producer, Kat Lo. It also took an extra week and a half of editing and $1,000 in legal fees just to get and keep the video up on YouTube. All because of YouTube’s copyright filter. And thanks to a new proposed law by Sen. Thom Tillis, Brewis’ experience could become virtually everyone’s.
YouTube’s copyright filter is a labyrinthine nightmare called Content ID. Content ID works by scanning every video on YouTube and comparing it to a database of material submitted by copyright holders—often music labels and movie and TV studios—whom YouTube has given the ability to add material to the database. Once Content ID matches a few seconds of an uploaded video to something in the database—regardless of context—a number of automatic penalties can be imposed. According to Google, most of the time the rights-holder chooses to just take the money generated by the ads Google places on the video. If the original creator didn’t want any ads on their video, too bad. But in other cases, the rights-holder can make something much worse happen: They can make sure no one sees the video at all.
The problem with filters like Content ID is that their restrictions have nothing to do with the law. The ability to use copyrighted material without permission or payment—especially short clips for purposes such as criticism, commentary, education, and so on—is protected by something called “fair use.” It’s easy to get into the weeds of fair use, but the important thing to note is that whether or not a use is fair depends on a lot of context. Context that Content ID simply can’t determine. All it does is determine whether elements of a work match a source, not what is actually being done with the material. For example, a movie review using a 14-second bit of a film to illustrate what is good or bad will trigger a Content ID match to the whole movie. As far as Content ID is concerned, those 14 seconds are no different from a complete copy of the film being uploaded. So while algorithms like this might be useful in flagging potential infringement, the fact that Content ID automatically applies penalties, with no human review involved at all, is a problem.
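That context-blindness is easy to illustrate with a toy matcher. The sketch below is not YouTube’s actual system (real fingerprinting uses perceptual audio and video features, and the window size here is invented for illustration); it just shows the core logic: compare fixed-length windows of an upload against a reference database and flag any overlap, with no notion of whether the use is a 14-second review clip or a wholesale copy.

```python
# Toy illustration of a context-blind content matcher.
# A real system fingerprints audio/video perceptually; here we
# simply hash fixed-length windows of integer "frames".

WINDOW = 5  # frames per window; an illustrative value, not Content ID's

def fingerprints(frames, window=WINDOW):
    """Hash every consecutive window of frames."""
    return {hash(tuple(frames[i:i + window]))
            for i in range(len(frames) - window + 1)}

def flag_matches(upload, reference, window=WINDOW):
    """True if any window of the upload appears in the reference."""
    return bool(fingerprints(upload, window) & fingerprints(reference, window))

# The full movie, as a sequence of frames.
movie = list(range(1000))

# A short excerpt used in a review is flagged exactly like a
# complete re-upload of the film: the matcher sees overlap,
# never purpose.
review_clip = movie[100:114]   # a 14-frame snippet
full_copy = list(movie)        # the whole film

print(flag_matches(review_clip, movie))  # True
print(flag_matches(full_copy, movie))    # True
```

The point of the sketch is that both calls return the same answer: nothing in the matching step can distinguish fair use from infringement, which is why the automatic penalties that follow a match are the real problem.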
Brewis’ situation is not unique. And it’s possible that it’s about to be the best-case scenario for anyone trying to share videos, music, or art online. You may remember the overzealous EU copyright directive that passed last year. We are seeing a call for new, faster, harsher penalties in the United States, too. In the giant spending and covid-19 relief package, there are two new copyright bills: the CASE Act, which creates a weird quasi-court in the Copyright Office that can deal out $30,000 worth of “small claims” judgments with limited appeal options, and a bill to make certain streaming operations a felony. Earlier this year, the Copyright Office issued a report that argued that the problem with the internet is that not enough content is removed, and not enough people are losing their internet access because of unproven accusations of copyright infringement. Content ID and YouTube have existed for a long time. Why dig into filters now? Because we keep hearing calls to make them mandatory. Just last week, the Senate subcommittee on intellectual property had a hearing on such things, where it was claimed, over and over, that perfect filters do exist, it’s just that tech companies haven’t been forced to make them.
After a year of hearings in which the public interest was routinely not represented or mostly ignored (and constantly sparking the question of why there were monthly hearings about copyright in a year filled with other concerns), Sen. Tillis has produced a draft bill that does all sorts of dangerously bad things to the internet. Most relevant to this discussion is that it requires internet services to monitor uploads, requires what is called “notice and staydown,” and calls for the establishment of “standard technical measures.” Within the draft it’s clear: All of these things require filters. And it appears that Tillis has joined the chorus claiming that filters will solve all our ills.
They’re wrong. I spent the last year digging through YouTube’s own documentation, reports from others, and doing interviews as part of a research project I did into Content ID for the Electronic Frontier Foundation. You can see a year of work and almost 10,000 words in the whitepaper, “Unfiltered: How YouTube’s Content ID Discourages Fair Use and Dictates What We See Online.” But I’m not writing here as an EFF employee—I am writing as a private internet user who, through a love of media, has forced herself to become an expert in the arcane world of intellectual property. And it’s a mess. Content ID alone is so complicated that those who rely on YouTube for their livelihood are constantly trying to divine, through trial and error, how it works.
So let me tell you: There is no secret, better filter out there that is just hidden, waiting for tech companies to use it. YouTube’s Content ID is one of the best-funded and most-used filters online. It’s not just removing legal speech from the internet, it’s dictating it. It’s forcing every website, every ISP, every whatever internet service Congress decides new laws should apply to, to have a filter that will ruin lives. If Google’s filter doesn’t work, why does anyone think a cheaper, less-tested one will?
So when we talk about the problems with YouTube, we are talking about a possible future of the internet. We are talking about people trying to make use of their right to free expression being blocked by an algorithm. People trying to make a living as independent creators seeing their work shut down or their wages taken, with no reasonable way to appeal.
In Brewis and the RWBY video’s case, the penalty chosen by the studio was the extreme one—destruction. So Brewis first tried uploading 20-minute portions of his video to YouTube to see what Content ID matches it triggered so that he could edit them out. But the 20-minute videos came back clear of any matches. So he uploaded the full video. It came back with two matches. So he trimmed the claimed portions and reuploaded. It came back with two new matches. He edited again. And reuploaded again. And again. And again.
Brewis discovered that the studio behind RWBY, Roosterteeth, has set Content ID to automatically take down any video that has a Content ID match. Roosterteeth explained that it expects creators using its material to go through YouTube’s Content ID dispute process, which lets the studio know that someone is using it. Then, Roosterteeth will decide if it approves of the use, then get the YouTuber to agree to its terms, and then manually change Content ID to “just” put ads on a video and collect the money it generates.
To keep the company behind a series he was criticizing from determining the fate of his criticism, Brewis re-edited the whole video so that not a single clip of the show was over five seconds long. This added a week and a half to production time. He also paid a lawyer $1,000 to confirm that he was within his rights to use the clips.
While I have spent untold hours unwinding the tangled web of Content ID, none of the broad generalities of my research will be surprising to anyone who has experienced the capricious and ever-changing algorithm behind Content ID. There are stories of creators making videos to suit the algorithm, of simply handing over revenue to the people you are criticizing in your work to avoid the hassle, and of certain works or even whole types of art simply left uncriticized because getting through the copyright filter is just too difficult.
Challenging Content ID matches is fraught as hell for creators uploading videos. Trying to map out how things work inevitably turns you into that Charlie Day meme. It’s so confusing that literal experts in copyright law have been confused by this system. So it’s not a surprise that many YouTubers have decided to just submit to the almighty algorithm. While fair use does not have any concrete number of seconds that makes a use legal, Content ID triggers matches on, anecdotally, snippets of five to 10 seconds. So YouTubers choose clips under five seconds long. No, not the clip that matches their point best. Just the clip that will pass Content ID. Fair use does not require payment. But YouTubers will let the revenue generated by their videos go to the rights-holders rather than chance a fight. Hey, at least that way the video is visible, right?
This is a ridiculous outcome, by the way. A critic shouldn’t be handing over their wages to the major corporation behind the movie, show, game, or song they are reviewing. That’s never how that job has worked.
Content ID is far more sensitive to audio-only material, matching music much more often than full audiovisual material. You may have heard about the classical musicians who have been consistently blocked while trying to post videos of themselves playing music that no one currently owns, because those composers have been dead for hundreds of years. This is why.
What’s happening there is that while the compositions are in the public domain, there is still copyright in specific performances. Let’s say CBS is the label behind an album of classical music written by Beethoven but played in 1990 by Yo-Yo Ma. Ma’s performance is copyrighted, but Beethoven, dying as he did almost 200 years ago, no longer has a copyright in the composition. CBS puts the recording of Ma into Content ID. Content ID then flags anyone playing Beethoven because, unsurprisingly, two people playing the same compositions on the same instrument sound the same to a computer.
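A toy sketch makes the collision obvious. The score, performer names, and tempo values below are invented for illustration, and real audio fingerprints work on spectral features rather than note names; the point is only that a fingerprint keyed on the sound of the composition discards exactly the performance details that carry the copyright.

```python
# Toy sketch: why a public-domain composition triggers matches.
# A fingerprint built from pitch content can't tell two
# performers of the same score apart.

# A (hypothetical) public-domain score, as note names.
SCORE = ["G", "G", "G", "Eb", "F", "F", "F", "D"]

def perform(score, tempo):
    """A 'performance': the same notes, plus performer-specific tempo."""
    return [(note, tempo) for note in score]

def pitch_fingerprint(performance):
    """Fingerprint on the pitch sequence only; tempo is discarded,
    much as spectral fingerprints blur performance details."""
    return tuple(note for note, _tempo in performance)

reference_recording = perform(SCORE, tempo=108)  # the label's copy, in the database
your_recital = perform(SCORE, tempo=96)          # your upload

# Different performances, different copyrights -- but identical
# fingerprints, so the upload is automatically flagged.
print(pitch_fingerprint(your_recital) == pitch_fingerprint(reference_recording))  # True
```

The two performances are distinct objects with distinct rights attached, but the matcher only ever sees the shared composition, which is exactly the part no one owns.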
Music reviews are simply less common on YouTube for this reason. It’s so much harder to make a living at it since the videos get blocked or the money gets taken away.
Content ID is not the only filter out there, but we have to take the problems with it more seriously. First of all, Google has spent over $100 million on Content ID, and it’s still a pile of garbage. Second, YouTube’s basically cornered the market on user-uploaded video. The very existence of the word “YouTuber” proves that. People are far more often called YouTubers than they are vloggers, video essayists, or any other generic term. Everyone I talked to was clear that they were on YouTube not because it was the best option, but because it was the only option.
While I dearly love to bash Big Tech, technology isn’t magic. The algorithm is not going to save us. We should be very skeptical of anyone who says this, in any context. Google has spent $100 million trying to get Hollywood off its back, and it hasn’t worked. And if filters become a requirement, who is going to be able to afford that? Just YouTube and its parent company Google, probably.
The promise of the internet was lowering barriers to expression. Studies keep showing that the world of mainstream criticism is overwhelmingly white and male. Part of the solution has been bypassing traditional gatekeepers and going directly to audiences. But Content ID stands between creators and audiences by blocking videos. It makes independent criticism a difficult job since it unjustly redirects revenue from the critic to the criticized. If the content cartel has its way, filters will be everywhere, and again all sorts of voices will disappear.
Artists who complain about infringement have real concerns, but the idea that filters will save them is deeply misguided. Instead, a whole other set of artists will be all but wiped out. We should be asking ourselves, constantly, why it is that there are so many ways to fast-track copyright claims, but not anything else.
This isn’t to say that platforms should be taking down more speech, but that they spend a disproportionate amount of time and energy on policing intellectual property. Why are intellectual property claims the fastest way to get something vanished off the internet? Possibly because the groups making those claims are some of the largest companies in the world, with the resources to make Big Tech worry and get Congress to do their bidding.
Laws that increase penalties for intellectual property violations and make more and more hoops for people to jump through to share their thoughts are not wins for free expression or for any regular internet users. The only winners will be Big Tech and Big Content.
Any proposed law that prevents people from being heard, enriches monopolies already making obscene amounts of money, and shreds the ability of regular people to use the internet should be fought as hard as can be.
Since I talked to Brewis three months ago, Content ID has completely blocked one of his years-old videos and blocked a video about the BBC’s Sherlock in the UK. This is the future some people want for everyone. And we need to stop it.