Chances are you’re familiar with using a command like “OK Google,” “Hey Siri,” or “Alexa” to summon a voice assistant. But now the company that brought you pokes and likes (and who could forget fake news?) is trying to add “Hey Facebook” to your rotation.
Facebook’s new vocal command officially rolled out today with the company’s announcement that it’s introducing “Hey Facebook” as an opt-in wake phrase on the Oculus Quest 2 to help deliver a more seamless hands-free VR experience.
Now on a certain level, as we continue moving into this era of ambient computing where you don’t have to be sitting in front of a monitor and keyboard to actually use a computer, the addition of another wake phrase to everyday use shouldn’t come as a big surprise.
However, there’s something that just feels off about “Hey Facebook,” and after ruminating on it a bit, I think I’ve figured out why. The difference between “Hey Facebook” and other voice command triggers is that when you are directing a question to Siri or Alexa, you are directing your request to a specific entity, which in this case is an AI-powered digital assistant. And even though, “OK Google” would seem to be the same as “Hey Facebook,” Google has always made it clear that you are talking to the Google Assistant, not the company itself. (For what it’s worth, I still think the Google Assistant badly needs some kind of normal, human name.)
But with Facebook, there is no assistant or AI to speak of. Facebook killed its previous assistant, M, so with “Hey Facebook,” it feels like you are calling out to the faceless company that reminds your friends when your birthday is and keeps tabs on unspecified amounts of your personal data. In the same fashion, “Hey Facebook” is also different from saying “Hey Portal,” which inherently refers to a specific device in your home. “Hey Facebook” just doesn’t feel the same.
So while saying “Hey Facebook” is extremely weird, that won’t stop Facebook from pushing its new wake phrase instead of alternatives like “Hey Portal” or even “Hey Oculus.” Facebook usually gets what it wants, even if it’s something no one else is on board with.
Twitter is finally rolling out a way to get paid for tweeting that doesn’t involve putting a Venmo link in your bio, promoting a Patreon, or using the app to hunt for a rich spouse.
On Thursday, the company announced a new feature that could change the way the app functions entirely: Super Follows, essentially paid subscriptions to individual Twitter feeds. Super Follows will let users paywall certain types of content and charge for access to it. According to the Verge, that could include giving paid subscribers access to private tweet feeds, Twitter’s new newsletter feature, or profile badges. Another feature announced on Thursday, the ability for users to create and join groups called Communities, can also be paywalled. Neither addition will roll out for a few months, and according to the Verge, it’s not clear how big a cut Twitter will take of the revenue.
This is a big shift in the way Twitter operates: a long-running and pretty tired joke on the site has been that “this site is free,” referring to the fact that none of its content directly costs money. The flip side of that equation is that monetizing a Twitter presence has been impossible without referring fans somewhere else to pay. So this could reshape the incentives for users to participate in the site in the first place and allow Twitter to compete directly with crowdfunding app Patreon and similar payment tools on Facebook and YouTube.
It’s also easy to see how this could open a Pandora’s Box of sorts for Twitter. It’s long struggled to rein in toxic communities like white supremacists, conspiracy theorists, and far-right trolls, all of whom could now potentially use the app as a way to make money. The addition of private feeds for subscribers could also let those so inclined hide stuff like harassment campaigns behind paywalls, where such content will be accessible to a smaller pool of paying followers unlikely to report it to the site’s moderators. (It’s already possible to do this via direct messages, locked accounts, and off-site coordination, but still.)
Similarly, the Communities feature sounds pretty close to Facebook Groups. Facebook pivoted from the news feed to an emphasis on Groups in 2019, which had disastrous consequences after said Groups were infested with death threats, harassment, and calls to violence.
Another thing Twitter hasn’t clarified is whether it will allow Super Follows for sexual content, which is subject to only a handful of restrictions elsewhere on the site (like not appearing in banner images or profile pictures). Allowing it would put the site in direct competition with places like OnlyFans, though when Motherboard’s Samantha Cole asked Twitter whether it will allow users to pay for porn, the company responded with a non-answer, claiming that it was “examining and rethinking the incentives of our service.”
The announcement has also set off a wave of am-I-kidding-or-aren’t-I speculation from reporters and other media types about whether or not their employers will allow them to charge for tweets. It’s not any kind of secret that journalists are among the most Twitter-addicted people on the planet and comprise a large percentage of the power users that dominate the app’s feed… and thus easy to see why this is an appealing fantasy for them.
Suffice it to say that while anything that subsidizes, say, tech bloggers buying fancy aquariums is welcome, it remains to be seen how big the reader appetite for 280-character insights really is, and how willing news organizations will be to let staff run paid sidelines.
According to the Verge, Twitter billed paid subscriptions and the Communities feature as “what’s next” during a business presentation on Thursday, without putting forward a solid timeline for implementation. Per CNBC, Twitter told analysts and investors it hopes the new features will help it hit its goal of $7.5 billion in annual revenue by 2023, roughly double what it makes now.
Just two months after running a full-page ad decrying Apple’s impending updates, Facebook is rolling out another campaign meant to defend the targeted ads that make up about 98% of its multi-billion dollar revenue stream.
Per CNBC, the one-minute ad will air across digital platforms, radio, and television starting today. Facebook says the spot is meant to highlight “how personalized ads are an important way people discover small businesses on Facebook and Instagram,” and “how these ads help small businesses grow from an idea into a livelihood.”
You can give it a watch below:
The ad features a few stand-ins for the small businesses that seemingly rely on Facebook’s ad-targeting tech for their livelihoods. There’s a woman shown using Instagram ads to promote her goat farm to people who want to give goat yoga a try. There’s a pair of influencers shown advertising an indie bag brand that, as Facebook points out, doesn’t just pay homage to its designer’s West African background, but also supports “empowerment work” in the region. All the while, Grace Jones (yes, the Grace Jones) does some spoken word about how wonderful ad targeting is, and how it brings these sorts of interesting businesses to people’s attention. The end of the spot then directs viewers to a dedicated site that reminds us right at the top that “good ideas deserve to be found.”
This is Facebook’s latest attempt to butter up users ahead of Apple’s planned rollout of anti-tracking tools meant to give iOS users a bit more transparency and control over the data their apps are allowed to collect. Since this past summer, Facebook has argued at every possible opportunity that without the ability to freely track users and pelt them with ads, small businesses will suffer. In response, Apple fired back that it was simply standing up for iOS users tired of Facebook’s ongoing disregard for their privacy. Facebook shot back that this was all a bald-faced attempt on Apple’s part to monopolize that juicy user data for itself. Apple responded that these updates won’t eliminate targeted ads entirely, but simply give users the chance to opt out.
These iOS updates are still on track to roll out in early spring, which means Facebook needs to do all the damage control it can before then. Earlier this month, the company announced it would be testing some pop-up prompts of its own across iPhones and iPads asking users to allow Facebook to track them across apps and sites “for a better ads experience.”
What Facebook’s trying to do here is remind us all that while you probably don’t love targeted ads, you probably do love yoga studios, handbags, and the people behind them. Facebook’s webpage for the campaign does what it can to convince us that those two ideas are one and the same: if you click on the landing page’s definition of “personalized ads,” it doesn’t tell you anything about how Facebook’s ads are targeted or how they manage to track you across the web. Instead, Facebook says that “personalized ads (aka ‘targeted ads’) help small businesses grow by reaching customers that are more likely to be interested in their products or services.” That’s it.
Facebook goes on to say that these ads don’t only support the businesses you love, but actually preserve your privacy, regardless of what Apple tells you:
Personalized ads help us connect you with businesses that are most relevant to your interests, without sharing who you are with the advertiser. Individual data that could identify you, like your name, posts, or contact information is never shared with businesses using personalized ads.
On one hand, this is all technically true: the data these businesses use for ad targeting is so aggregated that they’re typically getting a bird’s-eye view of clicks from a few hundred or a few thousand people at a time, rather than from any one person. But the only reason those quasi-anonymous pools of data exist in the first place is that Facebook has spent more than a decade tracking us all.
It’s also worth noting that with the latest iOS update, that creepy cache of data won’t be going anywhere; the update will just keep Facebook from building up more data on all of us. Regardless, some analysts suspect that losing access to this ongoing data trickle could cost Facebook about 10% of its quarterly revenue, or about $8 billion by the end of this year.
But what about those small businesses? The ones that Facebook says rely on its ad platform for their survival?
Nobody can deny that the ongoing global pandemic has devastated countless small businesses across the country, many of which see no end to the current economic climate in their near future. However, the impact of Apple’s update is unlikely to be anywhere near as catastrophic as Facebook is suggesting here. Back in December, Dipayan Ghosh, an ex-Facebook executive turned public critic of the company, pointed out as much: small businesses don’t only advertise on Facebook, and they don’t only rely on Facebook’s massive reams of data to do that work. Over time, some small business owners on forums like Reddit have reached the same conclusion: advertising might be a little harder with Apple’s new update, but it won’t be impossible.
What would be truly egregious would be if a company were willfully misrepresenting the efficacy of its targeted ads to those same cash-strapped small businesses. But Facebook wouldn’t know anything about that, would it?
Finally, you can sort the music in your Liked Songs playlist on Spotify.
Hitting play on the Liked Songs playlist in Spotify has always been a bit of a crapshoot for me—I never know whether I’ll get Steely Dan or Phoebe Bridgers or Ginuwine’s “Pony.” If this sounds like you (maybe sans “Pony” but that’s your business), Spotify mercifully began rolling out mood and genre filters today for both free and premium accounts.
Spotify says that anyone with at least 30 songs in their Liked Songs playlist will be able to filter their music with up to 15 personalized genre and mood categories. These filters are populated based on the music in your playlist, so if you add or remove titles, your mood and genre filters can change too.
To enable the feature, head to Your Library and select Liked Songs. Below the “add songs” button but above the actual playlist, you should see additional bubbles that display your mood and genre filters. To filter by a specific category, select the bubble. To disable it, just click the “X” that’ll appear next to it. In Spotify’s demo of the feature, some of the filters included things like chill, indie, electronic, rap, and folk.
Don’t be alarmed if you don’t see the feature immediately. Spotify said that it’s coming to iOS and Android in the U.S., Canada, the UK, Ireland, South Africa, New Zealand, and Australia “over the coming weeks,” so keep an eye out.
Live Captions is one of the most useful features on Android phones, allowing your mobile device to automatically transcribe any audio it’s currently playing. And now it seems Google is bringing Live Captions to Chrome, with the feature already available as a hidden option in the browser.
First noticed by Chrome Story, Live Captions can actually be activated now in the Windows, macOS, and Chrome OS versions of Chrome 88. But if you want to try the feature for yourself, you’ll need to enable it manually, as it’s still listed as experimental. To do so, paste chrome://flags/#enable-accessibility-live-caption into Chrome’s address bar, then flip the Live Captions toggle that appears.
Once you have Live Captions turned on, you’ll be asked to relaunch Chrome. From there, to get it working, all you need to do is browse over to a video or something like a podcast in Chrome, and a small bar should automatically pop up along the bottom of the browser displaying live captions.
That said, Live Captions is still an experimental feature and there are a few bugs. The first is that it doesn’t seem to work with YouTube at all (unless you are running Chrome Canary), though that’s not necessarily a huge deal as YouTube already offers automatic closed captions for many videos.
Additionally, depending on the audio source, transcriptions may not appear automatically or might stop working if you pause a video, so you may have to restart Live Captions by toggling it off and on from Chrome’s Global Media Controls (the music note icon in the top right corner of Chrome). And on Chromebooks and other Chrome OS devices, Live Captions doesn’t seem to work for audio coming from Linux or Android apps either.
Still, some bugs are to be expected for something that hasn’t been officially released yet, and even though in my experience the accuracy of Google’s Live Captions can be somewhat hit or miss, the feature is still a valuable upgrade for general accessibility.
Facebook finally banned Myanmar’s military, known as the Tatmadaw, from the social media platform several weeks after it staged a coup that toppled the country’s democratically elected government. The ban also covers Instagram, which Facebook owns.
“Events since the February 1 coup, including deadly violence, have precipitated a need for this ban. We believe the risks of allowing the Tatmadaw on Facebook and Instagram are too great,” Rafael Frankel, director of policy for the Asia-Pacific region, said in a statement posted online late Wednesday.
“We’re also prohibiting Tatmadaw-linked commercial entities from advertising on the platform,” Frankel continued. “We are using the UN Fact-Finding Mission on Myanmar’s 2019 report, on the economic interests of the Tatmadaw, as the basis to guide these efforts, along with the UN Guiding Principles on Business and Human Rights. These bans will remain in effect indefinitely.”
Facebook has already taken down military-connected pages like Tatmadaw True News Information Team, MRTV, and MRTV Live since the coup earlier this month.
Facebook’s statement doesn’t mention the 20-year-old protester, Mya Thwate Thwate Khaing, who was shot in the head during an anti-coup protest in Myanmar and later died in the hospital, but that event has attracted condemnation from around the world.
The Myanmar government is currently being run by the military, but Facebook made sure to stress that certain parts of government that are vital to public health and wellbeing, such as the Ministry of Health and Sport and the Ministry of Education, will not be affected by the new ban.
Facebook is tremendously popular in Myanmar and one of the first things the military government did after taking power was to ban the social media platform. Service has been highly restricted ever since, with Netblocks reporting that Facebook, WhatsApp, and Instagram are all currently down.
Facebook came under heavy criticism after the platform was used to incite genocide in Myanmar in 2018 but the company insisted on Wednesday that it held the military to the same standards as everyone else. The new statement lists four factors that caused Facebook to make this decision:
The Tatmadaw’s history of exceptionally severe human rights abuses and the clear risk of future military-initiated violence in Myanmar, where the military is operating unchecked and with wide-ranging powers.
The Tatmadaw’s history of on-platform content and behavior violations that led to us repeatedly enforcing our policies to protect our community.
Ongoing violations by the military and military-linked accounts and Pages since the February 1 coup, including efforts to reconstitute networks of Coordinated Inauthentic Behavior that we previously removed, and content that violates our violence and incitement and coordinating harm policies, which we removed.
The coup greatly increases the danger posed by the behaviors above, and the likelihood that online threats could lead to offline harm.
The difficult part to understand, of course, is why points one, two, and four in the list weren’t enough for a ban on February 1 or earlier. The word “history” is used in points one and two, an implicit acknowledgement that none of this is new.
Optimists are fond of saying “better late than never,” but that’s a tough pill to swallow when you’re talking about things like genocide and military coups. But, better late than never, Facebook.
Attention, haters: you have officially been put on notice by Facebook’s VP of Global Affairs Nick Clegg.
This morning, Clegg unleashed a very salty, very strongly worded rebuke to sordid charges propagated by publishers supposedly looking for a cash grab. It’s titled “The Real Story of What Happened With News on Facebook in Australia,” and it reads like a closing argument in a courtroom drama—in this case, essentially accusing Australian lawmakers of allowing the media industry to pick Facebook’s pocket through a proposed law that would compel the company to pay for journalism. While I reject every assertion in this blog post, it’s nice to finally get a human on the line—rather than the unbroken chain of prerecorded denialism and we’ll-get-back-to-yous from Facebook, which rarely relate in any way to the criticisms at hand.
Here’s the reported version of the Story of What Happened With News on Facebook in Australia: Australian lawmakers have been wrapping up some new legislation (the News Media Bargaining Code). Specifically, it gives Australian news businesses the power to bargain over a rate which Facebook and Google would have to pay in exchange for hosting news articles in full, as excerpts, or in link form. Facebook did not care for this plan and retaliated by pulling all news links shared by publishers and users from its site in Australia. (Clegg said that Facebook had to do so to protect itself from liability.)
But many also noted that this kind of proved the point that Facebook wields way too much power over news access online. Yesterday, the company flipped the news switch back on after lawmakers agreed to a handful of amendments, which would give Facebook and Google a month’s notice before enforcement and potentially exempt Facebook entirely if it proves it already pays Australian media companies through alternative deals. (Google, on the other hand, struck a deal with NewsCorp to share some ad revenue and create a subscription service. Google already pays some participating publishers to give readers free access to paywalled articles in its News Showcase product. Facebook reportedly pays a select number of outlets to present their full stories in its News Tab.)
In a blog post yesterday, Facebook said it was “pleased” with the agreement, but Clegg saved a few choice words for (presumably) legislators and journalists. Claiming that the Australian lawmakers were deluded by “a fundamental misunderstanding” of how news on Facebook works, Clegg argued that Facebook actually provides news outlets a free marketing service. More to the point, what you’ve heard are lies [emphasis theirs]:
The assertions — repeated widely in recent days — that Facebook steals or takes original journalism for its own benefit always were and remain false.
Okay, depends on your vantage point. Moreover, that wasn’t really the lesson from the past week. We just learned that Australians like getting their news from Facebook.
Clegg could have left it there, but he decided to let it rip:
Of course, the internet has been disruptive for the news industry. Anyone with a connection can start a website or write a blog post; not everyone can start a newspaper. When ads started moving from print to digital, the economics of news changed, and the industry was forced to adapt. Some have made this transition to the online world successfully, while others have struggled to adapt. It is understandable that some media conglomerates see Facebook as a potential source of money to make up for their losses, but does that mean they should be able to demand a blank check?
I’m guessing the money-grubbing failures to which Clegg refers include the dying local papers that have struggled to adapt in part specifically because they’re losing out on locally-targeted advertising revenue which is now almost entirely pocketed by Google and Facebook. Anyway, okay, we get it! Not done yet [emphasis, again, Clegg’s]:
It’s like forcing car makers to fund radio stations because people might listen to them in the car — and letting the stations set the price. It is ironic that some of the biggest publishers that have long advocated for free markets and voluntary commercial undertakings now appear to be in favor of state sponsored price setting. The events in Australia show the danger of camouflaging a bid for cash subsidies behind distortions about how the internet works.
This is a wildly skewed metaphor; Facebook is less like the car and more like one of two radio stations that get to decide which record labels to promote. That kind of broadcast dominance has directly led to newsroom layoffs through an (allegedly knowingly misleading) emphasis on video. It has also algorithmically suppressed outlets now competing for attention with fake and inflammatory sources. For a sense of how much a level playing field matters, the Pew Research Center recently found that 36% of Americans regularly get their news from Facebook. Its influence over the flow of information is so patently obvious that every few years we circle back to insisting that Zuckerberg just admit he’s running a media organization.
Maybe Australian politicians, in needling Facebook to pay its fair share, finally struck a nerve. Or maybe the thrill of winning a pissing match against a sovereign nation has the company’s executives willing to gloat. Whatever the case may be, I sincerely hope that Facebook keeps the line of honest dialogue open.
YouTube is attempting to bridge the gap between its dedicated Kids app and regular YouTube for parents with tweens and teens.
YouTube announced Wednesday that it will launch a new “supervised” experience in beta that will introduce additional features and settings for regulating the types of content older children can access on the platform. Content will be restricted based on one of three categories: “Explore” will introduce videos suitable for kids 9 and older, “Explore More” will bump them into a category with videos for kids 13 and older, and “Most of YouTube” will show them nearly everything except age-restricted content and topics that might be sensitive for non-adults.
YouTube says it will use a blend of machine learning, human review, and user input to vet content—a system that has worked spectacularly for YouTube in the past. Seemingly trying to get out ahead of whatever issues will arise from its busted moderation system, the announcement blog stated that YouTube knows “that our systems will make mistakes and will continue to evolve over time.”
Clearly, any tool that attempts to filter inappropriate content on YouTube is welcome and necessary. But guardians cannot rely on YouTube alone to take the wheel and guide the experience of their kids. We’ve seen how well that’s worked in the past over on YouTube’s dedicated Kids app—which is to say, not great.
Part of the problem is that YouTube’s platform, like those of other social media giants, is just too big to adequately moderate. One wrong turn can send your kid down a rabbit hole of conspiracies whether they were looking for them or not. Plus, if we’re being honest, teens and tweens are probably going to find a way to watch whatever content they want to watch regardless of how kid-proofed the home computer is anyway.
All that said, creating a middle ground between YouTube Kids and the chaos of normal YouTube is something. Just don’t bank on a perfect moderation system. Even YouTube says so.
Moving into 2021 and beyond, conservatives angry about cancel culture, censorship, shadowbans, or the attention of the FBI have a rich array of social destinations to choose from. We’ve prepped a travel guide for the unwitting observer who might be thinking of checking out any of these conspicuous and lesser-known internet hellholes, whether to keep an eye on what the far right is up to or to learn exactly why you shouldn’t be going to these places.
Donald Trump and the Republican media ecosystem spent the last few years building an elaborate fantasy world for his supporters. They insisted, at every turn, that any unflattering portrayal of his unpopular administration was the product of a liberal media establishment staffed by socialist journalists and amplified by Silicon Valley tech companies angling to take him down.
A wide array of alternative social media sites cropped up to cater to right-wingers convinced that Facebook and Twitter were censoring them, despite all evidence indicating otherwise. They also cater to far-right groups ranging from fascists and white supremacists to QAnon truthers whom mainstream sites actually had been, with varying levels of commitment or success, trying to rid themselves of.
The riots in D.C. on Jan. 6, when a mob of pro-Trump rioters charged into Congress trying to overturn the results of the election, resulted in a wave of platform bans targeting the perpetrators and Trump himself. This fueled a sense of urgency among conservatives that their days on Facebook, Twitter, YouTube, and other sites were numbered. So here are some of the sites, platforms, and apps where they might set up shop in 2021, whether as a forever home or just a pit stop on a never-ending ride out to the fringe.
Trump dedicated what counted, for him, as considerable time, effort, and energy into indoctrinating supporters with the idea that tech companies are hunting down and eliminating conservative accounts like it’s The Most Dangerous Game. Parler, which is sort of like if Facebook and Twitter were around in 1939 and allied with the Axis, was the primary beneficiary of this conspiracy theory—at least until its role in the Capitol fiasco saw it stabbed in the back by Amazon, Google, and Apple, which collectively trashed the app by killing its hosting contract and app store access in January.
Parler launched in 2018, but in the days after the November 2020 elections, it leapt to top spots on the App Store and Play Store, surging past 10 million users in a very short period of time. That’s in large part because conservative media personalities with huge audiences, including pundit Dan Bongino, numerous Fox News hosts such as Maria Bartiromo, former Trump campaign official Brad Parscale, former Turning Point USA comms director and Hitler endorser Candace Owens, radio host Mark Levin, and a number of GOP members of Congress, had been urging their followers to #WalkAway and set up shop there.
Parler managed to maintain the outward appearance of being one of the most mainstream of the alternative sites on this list (an extremely low standard), as it was flooded with conservative celebrities and hadn’t yet been implicated in any horrifying acts of violence. Rank-and-file Republicans may have been attracted to Parler by its promise of a moderation-free environment beyond the influence of effete tech titans. But so were neo-fascist street-brawling groups like the Proud Boys, racists and anti-Semites, grifters, people posing as senators to sell CBD oil, porn spammers, campaigns begging for money, and disinformation purveyors (some from Macedonia), all of whom, thanks to those same policies, were able to rub shoulders with the normies in the endless feedback loop they’d always dreamed of. Now-former CEO John Matze said in an interview that “community jury” groups handled most moderation, which helps explain why the moderation sucked.
If this sounds like absolute hell, that is probably a positive reflection of your mental health. Well before the Jan. 6 riot at the Capitol, where a large portion of the crowd were Parler users live-streaming their crimes, it was clear that was exactly where the site was headed.
“Parler is a mix of hard-right extremists, right-wing influencers, and mainstream conservatives who feel they’ve been personally abused by Silicon Valley,” Cassie Miller, a Southern Poverty Law Center senior research analyst, told Gizmodo in December. “It acts largely as a pro-Trump echo chamber and amplifier for misinformation. It will likely contribute to an even greater fracturing of our information system, which we know has immense consequences for elections and the larger political process. For example, the notion that the country is inevitably heading toward civil war is pretty pervasive on the platform.”
Miller told Gizmodo that the Proud Boys, which had been staging brawls in the streets of D.C. for months, used Facebook for recruitment until they were pushed off in 2018. She added Parler had “largely solved that problem for them, and it now acts as their main platform for propaganda and recruitment.” A half-dozen Proud Boys have since been arrested for their alleged role in instigating and carrying out the riot.
There were a number of reasons to be skeptical that Parler’s success would last through 2021. Few, if any, of its celebrity proponents actually deleted their accounts on Facebook, Instagram, Twitter, or YouTube, because they were not actually being censored there. Parler’s target demographic included droves of trolls, assholes, racists, and other unpleasant people whose online activities centered around trying to piss off liberals, leftists, and minority groups, almost none of whom were actually on Parler to hold their attention. And the site hadn’t demonstrated that it was anything more than a fad driven by feverish rhetoric from conservative media, one that would drop off as soon as pundits moved on to some other bogeyman.
For a blessed few weeks, Parler’s blacklisting by Amazon, Apple, and Google seemed like it might mean the app wouldn’t come back anytime soon, or possibly ever. The social media service spent most of its time helplessly petitioning the courts to intervene and restore its service, and for weeks the only sign of actual business operations was a “Technical Difficulties” page that listed letters of support from such luminaries as Sean Hannity. Its CEO, John Matze, got fired in some kind of power struggle over moderation policies.
Unfortunately, Parler is back, baby, with a new web host that seems to believe something will turn out different this time. New safety measures the company announced on Feb. 15 included a “privacy-preserving” algorithm to identify threats or incitement to violence, a “trolling filter” to hide potentially bigoted posts, and a ban on attempts to use the site to commit a crime. Seeing as that’s pretty much the bulk of Parler, one wonders how studiously the new restrictions can possibly be enforced.
“The fact that Parler’s interruption in service was only temporary tells us something about where tech is going,” Miller told Gizmodo this week. “We are going to continue to see a growing number of platforms that are looking to cater specifically to right-wing and extremist users, as well as infrastructure to support them. This is going to have a major impact on the information landscape and is something we’ll increasingly have to take into consideration as we try to tackle problems like disinformation and political polarization.”
Parler was so desperate to have Trump sign up that it reportedly tried to negotiate an equity deal with the Trump Organization while he was still in office, something that could be viewed as an, uh, bribe. Trump had reportedly been toying with joining the site, possibly under the moniker—we shit you not—“Person X.” He’s also reportedly had so little idea what to do without his Facebook and Twitter access that he’s spending a lot of his time suggesting tweets to those aides around him who remain unbanned.
This leaves open the possibility that Trump could still decide to make Parler his own little post-presidential posting palace. Suffice it to say that would be nice for him.
MeWe was created by Mark Weinstein, a tech entrepreneur whose previous greatest hits include the short-lived SuperFriends.com and SuperFamily.com, early social networks that spanned just a few years from 1998 to the early 2000s. It bills itself as a privacy-focused, subscription-based “anti-Facebook.” Its primary selling point to conservatives, however, is that it promises it has “absolutely no political agenda and no one can pay us to target you with theirs.”
MeWe has millions of users, who are subject to a fairly long list of rules. But in practice, a 2019 Rolling Stone report found, its primary draw appears to be users fleeing either Facebook bans or just the paranoia that one is forthcoming. Its policy of not intervening against dishonest, hoax, or factually incorrect content has made it a landing spot for anti-Semites, mass shooting deniers, and other conspiracy theorists, who are apparently largely free to run wild because of the site’s narrow definition of hateful speech.
Other groups that have migrated to MeWe include anti-vaxxers who feel suppressed by Facebook. In 2020, according to Business Insider, it became one of the staging areas for right-wingers organizing anti-lockdown protests during the novel coronavirus pandemic, who created numerous groups and flooded feeds with recruitment messages.
Weinstein suggested to Rolling Stone that because MeWe does not allow advertisers to promote or boost content, that effectively eliminates any concern about groups boosting hoaxes and propaganda because “I have to go find those groups and I have to join them. They can’t find me.” He later penned a Medium post demanding the retraction of the Rolling Stone article, stating the site’s terms of service clearly state “haters, bullies, lawbreakers, and people promoting threats and violence are not welcome.”
As Mashable noted, MeWe also appears to be inflating the perception of how busy it is by creating dummy profiles for everyone from Donald Trump to the New York Times and then auto-populating them with content posted by those individuals or organizations on other sites.
(The site was originally named Sgrouples, like “scruples,” Weinstein said in an October interview, but like Parler, the original name didn’t stick due to users mispronouncing it.)
“MeWe — ugh,” Elon University professor and online extremism expert Megan Squire told Gizmodo. “MeWe reminds me of what would happen if MySpace and the ‘blink’ HTML tag had a baby. Users who try MeWe after being on Facebook complain that it is horribly designed, very ugly, hard to use, and feels frantic with chat messages popping up everywhere. Probably the most notable groups that moved to MeWe in 2020 were the Boogaloo-style groups that had been removed from Facebook and other platforms.” (Boogaloo refers to loosely affiliated groups of internet denizens who figure the country is probably headed towards a second civil war, such as far-right militia orgs that are particularly wishful it would hurry up and start already.) Squire added that those groups and others had “struggled” to build audiences on MeWe.
“Their exodus looked very similar to [when] the Proud Boys did the same thing back in 2018 when they were first banned from Facebook,” Squire added. “Once on MeWe, both groups struggled to re-build the numbers they’d seen on Facebook, and many members of these groups left for other platforms.”
Jared Holt, visiting research fellow at The Atlantic Council’s Digital Forensic Research Lab, told Gizmodo he didn’t think MeWe had what it takes to compete for the hearts and minds of right-wingers.
“I use MeWe for research because it currently homes the remnants of a fair amount of banned Facebook groups and pages that belonged to militia, QAnon, and ‘Boogaloo’ movement figures,” Holt wrote. “The site gives its users a lot of control over privacy, which likely contributes to its appeal for some of those groups. Each MeWe group has a wall that users can post to—like Facebook—but MeWe groups also have a simultaneous group chat function. Those group chats are often chaotic and can be steered in some very strange directions depending on who is active in the conversation at any given moment in time.”
“Though some extremist groups are camping out on MeWe, I don’t see this platform capturing the attention of broader right-wing internet users in a way like Parler has,” Holt added. “Because of its privacy design, the platform can be a bit hard to grasp for users who don’t already know of specific people or types of groups they want to find. It has some territory carved out among awfully specific parts of the right-wing internet, but it’s hard for me to imagine this will become the next big conservative stomping ground.”
To give MeWe some credit, however, its default avatars—smiling cartoons of bread—are pretty cute.
Gab was founded in 2016 by the thoroughly unpleasant pro-Trump figure Andrew Torba, who was banned from seed money accelerator Y Combinator that same year “for speaking in a threatening, harassing way toward other YC founders,” according to YC via BuzzFeed. (Torba’s outbursts allegedly include telling YC founders to “fuck off” and “take your morally superior, elitist, virtue signaling bullshit and shove it.”) Since then, it’s become one of the primary dumping grounds for explicitly fascist and white supremacist posters who got tired of creating yet another Twitter alt.
The site likes to market itself, unconvincingly, as one of the last refuges of free speech on the internet in the face of Big Tech censorship, rather than a congregation of various sociopaths. Following a series of neo-Nazi terror attacks in Charlottesville, Virginia, and Pittsburgh, Pennsylvania—the latter of which was committed by a Gab user—the site was forced off the App Store, Play Store, cloud host Joyent, payment processors PayPal and Stripe, domain registrar GoDaddy, and various other services. In 2020, its alternative registrar, Epik, was banned by PayPal for running a suspicious “alternative currency.”
Suffice it to say that Gab has a far more toxic reputation than, say, Parler. Mashable reported this year that analysts at a Florida police fusion center had warned participating agencies that its new encrypted chat service, Gab Chat, was likely to become a “viable alternative” for “White Racially Motivated Violent Extremists” leaving Discord, a gaming-focused chat app that had a reputation for being overrun with Nazis during its years of explosive growth.
Gab remains a “prominent organizing space for far-right extremists,” Michael Hayden, a senior investigative reporter at the Southern Poverty Law Center, told Gizmodo. “While interest in Gab has declined since the site became so closely associated with the terror attack at Tree of Life synagogue in Pittsburgh in 2018, [Torba] has made a big push to bring in QAnon adherents who have been suspended elsewhere.”
The site provides “the type of infrastructure hateful, terroristic people need to organize mayhem,” Hayden added.
Torba has been telling anyone who will listen that Gab usership has surged as aggrieved right-wingers look for a post-Parler home, specifically claiming that as of early January, it had 3.4 million signed up. None of these figures are to be trusted, Hayden said, noting that an engineer for web host Sibyl System Ltd. had told the SPLC in 2019 that Gab’s quoted figure of 800,000 users at the time was not backed up by its usage statistics. Instead, the engineer said Gab’s usership was “a few thousand or a few tens of thousands.”
“It’s extremely difficult to get an accurate accounting of Gab’s real user numbers due to the degree to which the site is inflated with what look very much like inactive if not openly fake accounts,” Hayden told Gizmodo.
8kun originally launched in November 2019 as a rebrand of 8chan, an image board that was itself founded as a “free-speech” alternative to internet troll-hub 4chan. 8chan was knocked off the web after it was deplatformed by numerous internet companies and hit with DDOS attacks after its /pol/ board, a hub for right-wing extremists flooded with hate speech, was implicated in several mass shootings by white supremacist terrorists in Christchurch, New Zealand; Poway, California; and El Paso, Texas. The perpetrators of those attacks, where a cumulative 75 people died and 66 others were injured, had all posted manifestos to 8chan before the attacks.
8kun is also where “Q,” the unknown individual or individuals who started the QAnon movement, has continued the hoax after 8chan went offline. Site owner Jim Watkins and his son, (ostensibly) former 8kun admin Ron Watkins, heavily promoted QAnon and are widely suspected to either be Q or know their identity.
Q hasn’t posted since Dec. 8, 2020—though 8kun also served as one of the several venues where Trump supporters rallied each other ahead of the Jan. 6 riots. Trump’s loss, subsequent humiliation in the courts, and failure to stop the Biden inauguration haven’t exactly been great for the conspiracy theory’s brand. The younger Watkins has tried to rebrand himself as an election security expert just in time to score interviews with pro-Trump media boosting ridiculous theories of voter fraud.
8kun is completely delisted from Google, making it somewhat harder to find for the kind of normies with limited navigational understanding of the internet flocking to sites like Parler, and it’s been sporadically knocked offline by attackers. While Q posted there, most QAnon aficionados actually followed them through a labyrinth of QAnon promoters, aggregation sites, and screenshots on other social media. That all means its gravitational draw has been somewhat blunted (a “rogue administrator” deleted its entire /qresearch board with no backups available last month, though it was later restored).
“The 8kun imageboard continues to be driven mostly by Q followers hoping for the anonymous poster’s return,” Julian Feeld, a researcher on conspiracy theories and co-host of the QAnon Anonymous podcast, told Gizmodo. “On the ‘Q Research’ board the usual cauldron of conspiracy theories stirs—‘anons’ are tracking media reports of famous illnesses, deaths, and suicides to see if ‘the storm’ might still secretly be on track. It feels like they’re trying to stay positive as the days tick on, which is nothing new for them.”
Feeld added that 8kun’s replacement for /pol/, /pnd/, was just as openly extreme but appeared to be slowly fizzling out.
“Meanwhile the ‘Politics, News, Debate’ board is increasingly less active and currently serves as a hub for Neo-nazi propaganda,” Feeld wrote. “So far Jim Watkins has managed to keep the site functioning despite the many public outcries and activists’ efforts to keep it offline.”
Both Watkinses have been suggested as potential targets in the lawsuits being brought by Dominion Voting Systems, a company that is currently suing Trump campaign lawyers Sidney Powell and Rudy Giuliani as well as MyPillow CEO Mike Lindell for billions after they spread hoaxes claiming the company fraudulently flipped the 2020 elections. While that might be too much to hope for, 8kun doesn’t exactly seem to be on the rebound.
DLive is a video site that found an audience with right-wingers banned or demonetized on other sites like YouTube and who weren’t keen on the prospect of moving to places like Bitchute that explicitly cater to the far-right, but offer a limited audience and unwelcome associations. Unlike Bitchute, DLive briefly attracted some mainstream talent—video game personality Felix “PewDiePie” Kjellberg, one of the most-viewed streamers on the planet, signed a live-streaming exclusivity deal in April 2019 with the site before going back to YouTube exclusively in May 2020.
DLive, like the other sites on this list, has very lax rules. But it also has distinguishing features: an internal economy of tokens called “lemons,” each worth a fraction of a cent, that runs on a blockchain, the decentralized ledger technology that powers bitcoin and other cryptocurrencies. Lemons can be bought or sold for cash and accrued by engaging in activities on the site, effectively gamifying it. DLive is also popular with gamers as a Twitch alternative, giving it access to a more youthful audience.
These factors made DLive an attractive option for extremists to continue making money. Elon University’s Squire recently published research with the SPLC showing some 56 extremist accounts had made a total of $465,572.43 between April 16 and late October of last year.
“I don’t think there is any real advantage that DLive has compared to any other niche live-streaming site that facilitates donations,” Squire told Gizmodo. “There is nothing particularly ‘fashy’ about the site other than an apparently hands-off management style and a tolerance for hate speech and proximity to younger demographic game streamers. … The biggest advantage DLive has going for it is traditional network effects: like other social media platforms, the more people who use the service, the more valuable it gets.”
“Contrast this to Telegram’s file sharing/encryption/stickers or 8kun’s anonymity or Keybase’s file-sharing/encryption, for instance—these are technical features that drive adoption by extremists,” Squire added. “DLive is just a seemingly-normal platform that is also friendly to white supremacist streamers; it allows them to appear normal as they make money after they’ve been removed from the more mainstream sites.”
Squire’s research showed that over the time period in question, DLive generated $62,250 for Owen Benjamin, a comedian known for racist and anti-Semitic “jokes”; $61,650 for white nationalist “Groyper” chud Nick Fuentes; and $51,500 for Patrick Casey, who used to be a leader of the now-defunct white supremacist group Identity Evropa and its similarly disbanded offshoot, the American Identity Movement. Others making thousands on the site included a prominent Gamergater, a white supremacist media brand, and a pseudonymous contributor to far-right publications. According to an August 2020 Time article, data from Social Blade showed eight out of the 10 highest-earning accounts on DLive were “far-right commentators, white-nationalist extremists or conspiracy theorists.”
But DLive had its own recent day of reckoning after it was highlighted in numerous news reports as playing a role in the Capitol riots—Fuentes, for example, used the site to float the idea of murdering members of Congress and later streamed on DLive from outside the building. Fuentes and Tim “Baked Alaska” Gionet, another far-righter to find a soapbox on DLive, were subsequently banned. Some alt-right streamers on DLive, such as Casey, have taken to telling their audiences that their days using it are numbered.
However, a report by Wired early this month indicated that Casey and other streamers on DLive continued to monetize with Streamlabs and StreamElements, third-party integrations that allow viewers to donate directly to creators (and allow streamers to bypass bans on major payment processors like PayPal). StreamElements told the magazine that it had removed Casey’s account after it reached out for comment, but Wired found that “dozens of Streamlabs and StreamElements accounts attached to white supremacist, far-right, or conspiracy theorist content are still live.”
The “only real actions” DLive has taken, Squire told Gizmodo, were the bans in January, a prohibition on streaming from D.C. implemented late last month, and demonetizing accounts carrying an “X” tag, which is required for political streamers.
“Different streamers have been trying to game the system, for example by taking the X down so they can make money during the stream and then putting it back up and removing their videos,” Squire added. “It’s very tedious. Others are trying to pretend that they are just video game streamers.”
Conservatives are convinced that YouTube, despite playing host to a sprawling network of right-wing commentators and pundits and possibly doing the least of any major social network to fight GOP-friendly misinformation, is secretly conspiring against them. Enter Rumble, which is like YouTube if it was designed by me using WordPress.
Rumble has been around since 2013 and managed to rake together a number of partnerships with companies including MTV, Xbox, Yahoo, and MSN. Per Tech Times, it has a rather confusing number of monetization options, two of which rely on signing over ownership rights to Rumble and a non-exclusive option where each video can make a max of $500. Rumble appears to generate a significant amount of its revenue by licensing viral videos, as well as its video player technology. In other words, this is sort of a weird place for conservatives to end up.
Still, Rumble intentionally courted right-wingers as a growth strategy that seems to have paid off—it told the New York Times it had exploded from 60.5 million video views in October to a projected 75 million to 90 million in November. Rumble particularly benefited from the Capitol riots; Axios reported that downloads of its app doubled by the next week.
As of Tuesday afternoon, its “battle leaderboard” was headed by content from conservative commentator Dan Bongino, Donald Trump Jr., far-right filmmaker Dinesh D’Souza, pro-Trump web personalities Diamond & Silk, and radio host Mark Levin. The most-viewed video from the previous week was a video of Trump Jr. arguing the left was “trying to cancel” Senator Ted Cruz for fleeing Texas while freezing weather knocked out electricity statewide, lying that Cruz had no ability to do anything about the situation.
Of the 50 most-viewed videos of the last week, all but five (four videos in French from a Quebec-focused site and an aggregated news roundup) were viral fodder for right-wingers. Much of it was either reuploads of videos that could be found elsewhere, such as clips of Bongino’s show and videos from Trump Jr., or just clips taken from networks like CNN or C-SPAN coupled with angry or exaggerated captions.
Slate noted that in addition to a slew of content spreading conspiracy theories that the “deep state” had stolen the election from Trump, QAnon content and videos lying about the nonexistent link between vaccines and autism were gaining a large audience through Rumble. A search of the site shows that while many conservatives on Rumble were criticizing QAnon, videos promoting or covering the conspiracy theory were still widely posted.
CEO Chris Pavlovski told the Washington Post that while the site has rules against obscene content and certain categories of content like videos showing how to make weapons, he views his approach to moderation as akin to that of bigger tech companies a decade ago.
“We don’t get involved in political debates or opinions. We’re an open platform,” Pavlovski said. “We don’t get involved in scientific opinions; we don’t have the expertise to do that and we don’t want to do that.”
The Post reported that Rumble was heavily reliant on traffic from Parler, with Pavlovski telling the paper more of its traffic clicked over from there than Facebook or Twitter. That may leave Rumble in a tough spot, though according to BuzzFeed, Bongino took an equity deal with Rumble to promote it to his followers on Facebook.
Encrypted messaging service Telegram had long been a safe space for various fascists, racists, and quacks, and it served as one of their last havens after they were squeezed out of competitors like chat server app Discord. Telegram has a far more laissez-faire approach to content moderation and was host to hundreds of white supremacist groups with thousands of members by mid-2020; it also serves as a central hub for fascist groups like the Proud Boys, as well as a remaining outlet for far-right activists like failed congressional candidate Laura Loomer and distant memory Milo Yiannopoulos to reliably stay in contact with supporters.
Of course, Telegram isn’t just used by extremists. It and Signal, another encrypted chat app, have become wildly popular and are used by everyone from random suburbanites to political dissidents. The governments of Russia and Iran took use of Telegram by protest movements seriously enough to warrant attempting to shut them down (Russia’s attempt backfired big time with major collateral damage on unrelated web apps, while Iranians simply dodged restrictions with VPNs). A Belarusian news organization based out of Poland, Nexta, has been using Telegram to coordinate protests against dictator Alexander Lukashenko.
Moderation is inherently more complicated on Telegram: it’s privacy-focused, mixes public and private messaging functions, uses various encryption types, and content flows by in real time. Telegram has shown limited interest in moderating its social networking dimension, and it’s based out of London, insulating it somewhat from the political debates raging around U.S.-based sites. All of these factors have contributed to its popularity with extremists.
“Telegram is the largest safe haven for the most extreme parts of the far-right,” Miller told Gizmodo. “While white power accelerationists were, until relatively recently, largely confined to small, highly vetted forums that had a limited reach, they can now reach far larger audiences on Telegram. There is a large network on Telegram that exists solely to encourage members of the white power movement to commit acts of violence.”
“We’re seeing the white power movement as a whole shift away from formalized groups in favor of small, clandestine terror cells, and Telegram is playing a major role in facilitating that reorganization,” Miller added.
In 2020, however, Telegram began banning some of the most extreme groups on the site, including a neo-Nazi hub called Terrorwave Refined with thousands of followers, a militant group tied to foreign recruiting for a white supremacist movement fighting in eastern Ukraine, and a Satanist group obsessed with rape. But it’s not clear that Telegram is putting up much more than a token effort in response to media pressure. Terrorwave easily slipped back onto the service under another name. In November 2020, Vice News reported that Telegram didn’t delete a dual English/Russian language channel dedicated to the “scientific purposes” of distributing bomb-making instructions until after it published an article on the topic. While it banned dozens of far-right channels following the Capitol riot, many others continue to operate.
“Telegram’s attempts to ban white supremacist content had little effect on the extremist communities already established on the platform,” Miller told Gizmodo. “Most banned channels simply created backups, and had already used the platform’s export feature to preserve their content. The bans forced extremists to become slightly more agile but, beyond that, had little impact. Telegram continues to be a safe haven for extremists, allowing users to participate in the radical right without ever joining a defined group. More than any other platform, it’s helping to facilitate a shift toward a leaderless resistance model of far-right organizing.”
Thinkspot, the site founded by Canadian psychologist and surrogate dad to a cult-like fanbase of disaffected libertarians and anti-feminists Jordan Peterson, barely registers a mention on this list. While Peterson founded the site in 2019 in response to a series of Patreon bans on fringe conservatives and commentators sympathetic to the “alt-right,” it’s not a hub of extremism, just pseudointellectual conservative drivel. It is more or less a vanity site designed to facilitate giving Peterson money under the cover story of enabling intellectual discourse banned elsewhere on the web, and it appears to have been largely abandoned after he dropped out of the public eye in 2019 amid a months-long medical crisis.
Peterson announced his return in October but has only mentioned the site on his Twitter feed five times since February 2020. His posts in the past few months have largely been reposts of podcast episodes or YouTube videos with only a few dozen “likes” and the same captions that appear on other sites. On Monday, only a handful of the featured posts seen upon logging into Thinkspot were listed as having more than a hundred views, with the one highest on the “Top Posts” leaderboard having 850 views and eight comments.
“What’s that?” you might ask. “I thought all of these conservatives were fleeing Facebook?”
Just this weekend, BuzzFeed reported that executives including Mark Zuckerberg and the policy team headed by former GOP lobbyist Joel Kaplan had intervened to safeguard conservative pundits from Facebook’s own mod team and shut down news feed changes that might anger the likes of Ben Shapiro. Facebook is built on juicing engagement on emotionally stimulating content, which aligns naturally with the rhetorical style of the right, the business incentives of reactionary pundits like Shapiro, and the explosive growth of conspiracy movements like QAnon and “Stop the Steal.”
Facebook is now trying to rid itself of certain kinds of content that have proven particularly PR-hostile, like hate groups, Boogaloo, and QAnon, and right-wing extremists have indeed sped up their pattern of migrating to platforms where they are more easily ignored or shielded from scrutiny. It’s also trying to fix messes like its pivot to boosting private groups, sparking a wave of toxic “civic” groups.
Nothing about the basic pattern has changed, though. Facebook amplifies some type of reactionary mind gruel, ignores that specific strain until its exponential growth blows up in the company’s face, and then promises a quick fix while ignoring some other looming disaster. There’s no reason to expect that will change in the near future, or that conservatives won’t take advantage of it again, and again, and again. Welcome home.
In the absence of broadly appealing new films and TV series (no Oscar hopefuls this month, I’m afraid), Netflix appears to be looking to the podcast market to figure out how to keep its massive subscriber base happy. Its new offerings in March include a host of documentary series and specials that I would totally listen to, were they podcasts. Will I watch? Well, I will not. (I haven’t even seen Ted Lasso yet.) But you might.
Operation: Varsity Blues (March 17) is sure to draw eyeballs, fascinated as we all were by the college admissions scandal that toppled such titans of culture as Felicity Huffman, Aunt Becky from Full House, and the fashion mogul who once designed a paper towel holder I bought at Target. Everyone is still pissed at the way these already-haves manipulated a system already weighted in their favor to get their kids into “good” colleges, and with good cause. The documentary feature comes from some of the same team that produced early pandemic sensation Tiger King.
Murder Among the Mormons (March 3) is the kind of lightly exploitative true-crime story that seems like it already was a podcast you subscribed to last year but forgot to listen to. It delves into a rash of bombings that terrorized Salt Lake City in the mid-1980s.
And an inspiring story of perseverance in the Hoop Dreams mold, Last Chance U: Basketball (March 10) is a spinoff from Netflix’s long-running series Last Chance U. It shifts the focus from football to collegiate basketball players who have struggled in their lives and studies and must play at the junior college level if they hope to get back into Division play.
If you prefer some more fiction in your TV viewing diet, I’m personally excited to see how well the Pacific Rim film series translates to anime in Pacific Rim: The Black, launching March 4 (giant robots in anime? It just might work!). The Irregulars (March 26) has great Buffy/Sabrina potential: a series about young paranormal crime fighters based on Sherlock Holmes’ famed “Baker Street Irregulars.” And then there’s Moxie (March 3), a dramedy film about a girl who launches a ‘zine to expose the sexism at her high school, which sounds pretty culturally relevant and is also the directorial debut of one Amy Poehler.
Here’s everything else coming to and leaving Netflix in March 2021.