Facebook’s Long-Stalled Digital Currency Could be Tested This Year: Report

Facebook CEO Mark Zuckerberg testifies before the House Financial Services Committee on Capitol Hill in Washington on Oct. 23, 2019.
Photo: Susan Walsh (AP)

Facebook hopes to launch a trial of its long-stalled digital currency project by the end of this year, according to a new report from CNBC. The currency, first announced in 2019 as Libra and then renamed Diem after some bad publicity, will now be pegged to the U.S. dollar, provided the tech giant can actually get it off the ground this time.

Facebook first announced plans for the digital currency in June of 2019 and was hit with immediate backlash from governments and consumer groups around the world that worried what would happen if a huge tech monopoly like Facebook competed with the world’s largest currencies. Facebook has roughly 2.8 billion active users on a planet of 7.9 billion people.

Facebook’s plan in 2019 was to launch the “blockchain” currency by early 2020, something that obviously didn’t happen after partner organizations like PayPal and eBay pulled out amid the wave of negative press.

But CNBC, and whoever leaked this Facebook news to the financial outlet, seem to hint that Facebook is taking a much more cautious approach this time, even if details are still extremely scarce.

From CNBC:

The Diem Association, the Switzerland-based nonprofit which oversees diem’s development, is aiming to launch a pilot with a single stablecoin pegged to the U.S. dollar in 2021, according to a person familiar with the matter.

The person, who preferred to remain anonymous as the details haven’t yet been made public, said this pilot will be small in scale, focusing largely on transactions between individual consumers. There may also be an option for users to buy goods and purchases, the person added. However, there is no confirmed date for the launch and timing could therefore change.

What the hell is the Diem Association? It appears to be the next iteration of the Libra Association, the supposedly independent organization Facebook set up to oversee the currency back when it was called Libra.

When reached for comment about the CNBC story, Facebook’s Head of Communications for Australia, Antonia Sanda, told Gizmodo by email, “looks like this could be a leak as there are no official announcements from the Diem site, but I’ll leave that for the Diem team to confirm.”

Sanda provided Diem’s email address and wrote, “We now send all media queries direct to the Diem organisation, as it is separate from FB […] if you’d like to contact their team direct.” Gizmodo has not yet heard back from Diem but will update this post if we do.

Governments around the world are setting up committees and task forces to examine the pros and cons of creating their own digital currencies, with China, Japan, and the UK announcing their own explorations in recent months. And it’s no secret that cryptocurrencies like bitcoin and ether have gained traction in recent years, with large companies like PayPal starting to get in on the action. PayPal announced last month it was launching a way for consumers to pay using cryptocurrencies at millions of retailers, handing the merchant fiat during the transaction.

But will Facebook’s digital currency flourish after already experiencing one very embarrassing false start? Only time will tell. But you can bet that government regulators will be keeping a close eye on Facebook’s plans for the future of money, especially since most world leaders think CEO Mark Zuckerberg already has too much power.

Congressman Brad Sherman even told Zuck in a July 2019 hearing that his new digital currency—which Sherman mockingly called “Zuck Bucks”—could cause the next 9/11, apparently referring to the possibility that criminals would use Facebook’s new currency for illegal activities. And when that’s your starting point of conversation with politicians who could help decide the fate of your new business idea, it’s tough to see it getting very far.

How to Check if Your Phone Number Is in the Huge Facebook Data Leak

Photo: Chip Somodevilla (Getty Images)

Hacked data on over 553 million Facebook users was leaked online over the weekend, including names, birthdates, relationship statuses, home cities, workplaces, and sometimes even email addresses. But the most sensitive data included in the leak is arguably the phone numbers, which are often used for two-factor authentication. And now there’s a way to easily check if your phone number is in the leak—at least if you live in the U.S.

The website The News Each Day has a simple tool where you can input your phone number and see if it’s in the leak. Gizmodo tested the tool against some data from the actual Facebook leak and found it to be accurate. For example, we tested Mark Zuckerberg’s phone number, which is included in the leak. It worked. (We assume Zuck has changed his phone number by now.)

If you’re willing to risk handing over your phone number, all you need to do to check is input your phone number without any hyphens or periods. You also need to include the international country code at the beginning. For example, if you’re used to seeing your phone number in this form, 555-212-0000, you should get rid of the hyphens and add the digit “one” in front.

Using the same fake number above, the number you input should look like this: 15552120000. If you include a variation that’s anything but the string of numbers, the tool will falsely tell you that your number is not included in the leak. In reality, it very well could be.
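
If you want to sanity-check the formatting before typing anything into the site, here’s a minimal sketch of that normalization in Python. It’s purely illustrative: the sample number is the article’s fake one, and the tool itself only expects the plain digit string, not this script.

    def normalize_us_number(raw: str) -> str:
        """Strip hyphens/periods and prepend the U.S. country code ("1")."""
        digits = "".join(ch for ch in raw if ch.isdigit())
        if len(digits) == 10:      # a bare U.S. number like 555-212-0000
            digits = "1" + digits  # add the international country code
        return digits

    print(normalize_us_number("555-212-0000"))  # prints 15552120000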

Screenshot: The News Each Day

Of course, it’s totally understandable if you don’t want to enter your phone number into some website you’ve never heard of. (There’s very little available information about The News Each Day, and its privacy policy says that all it tracks is clicks through Google Analytics.) So, if you want to check if the Facebook data dump affected you using a better-known tool, HaveIBeenPwned has updated its database to include the Facebook leak. Just head to the site and enter the email address associated with your Facebook account, and you can see if your personal information was compromised. You’ll also be able to see if your email address was included in any other breaches included in HaveIBeenPwned’s database.

Facebook hasn’t said much about the leak, except that the info was hacked in 2019. The data was offered in hacking forums for a price two years ago, but the thing that makes this weekend’s leak different is that the data has now been leaked for free. Anyone can find the 16GB of data with just a simple Google search.

Update 12pm ET, April 5: HaveIBeenPwned has updated its database to include the Facebook leak. Use that tool if you’re sketched out by The News Each Day tool.

The ‘Voice of President Trump’ Gets Double-Banned by Facebook

Photo: Tasos Katopodis (Getty Images)

It must be hard, getting 86’ed from a venue, watching as 2.7 billion people stroll past a skinny, pale bouncer. Maybe you mill around the dumpster outside, in a cloud of cigarette smoke with the Nazis and Laura Loomer, talking about how much that party sucks, actually, how this alley area is way better, and how we should start our own thing. Or maybe grab the nearest blonde woman and scream, I’M WITH HER!

Donald Trump tried the latter, and Facebook has booted him again.

Late last night Miami time, Donald Trump’s daughter-in-law, Fox News commentator, and presumptive senatorial candidate Lara Trump posted screenshots of emails from Facebook. The first warns that she is not to post a planned interview with Donald Trump on her Facebook and Instagram pages. It reads:

Hi folks,

I saw on Lara Trump’s Instagram account that she will be hosting an interview with President Trump tonight. Just a reminder that content posted on Facebook and Instagram in the voice of President Trump is not currently allowed on our platforms (including new posts with President Trump speaking) and will be removed if posted, resulting in additional limitations on accounts that posted it.

This guidance applies to all campaign accounts and Pages, including Team Trump, other campaign messaging vehicles on our platforms, and former surrogates.

Hours later, Mrs. Trump shared an excerpt from the following email, which she apparently received after the first message:

We are reaching out to let you know that we removed content from Lara Trump’s Facebook Page that featured Donald Trump speaking.

In a caption, Lara Trump referenced 1984, a novel about a regime that squashes all non-sanctioned alternative modes of communication, and then posted a still-live link to the interview on Rumble.

Facebook confirmed to The Verge and NBC News that it removed the interview.

Donald Trump has attempted numerous times to wedge back into conversation with updates on what he’s been up to. There were the early failed takeovers of the POTUS and TeamTrump accounts, the chaotic imitation tweet on official letterhead, the reported handwritten Liz Cheney burns for aides to tweet, the new official website, and the allies’ buzz about his own social media platform.

Facebook banned Trump on January 7th after he incited a violent attack on the U.S. Capitol that resulted in five deaths. The company’s “Supreme Court,” whose judgments can overrule Mark Zuckerberg, is currently reviewing whether to let the former president make a comeback. That arrangement takes some heat off the company after four long years spent appeasing people screaming over smoke alarms.

The banishment of content “in the voice of President Trump” seems to be new, and it isn’t mentioned in Facebook’s previous newsroom statements on the ban. Since January 7th, various news outlets have posted videos of him speaking, such as a statement from the Oval Office and his speech at CPAC.

We’ve asked Facebook to clarify which types of videos of Donald Trump speaking are banned from the platform and we’ll update this post when we hear back.

Donald Trump and Mike Lindell Should Cage Fight Over Whose New Social Media Site Is Actually Real

Mike Lindell, CEO and founder of MyPillow, and a possibly entirely fictional social media site named Frank.
Photo: Stephen Maturen (Getty Images)

Hellfeed is your biweekly resource for news on the current heading of the social media garbage barge.

We’re jumping the gun on Hellfeed’s normally bi-weekly schedule—because dear lord, the last five days were something else. This week’s social media hellscape kicked off with news that Donald Trump is investigating opening his own, presumably even more racist social media platform (uh-huh) before drunkenly veering everywhere, from a beleaguered mega-ship clogging the bowels of global shipping to Mark Zuckerberg’s vaccination to Amazon tweeting about peeing in bottles to the Shrimp Guy getting Milkshake Ducked (something that I swear will make more sense if you scroll down the page).

This is Hellfeed: Emergency Edition.

Hearing commentary

The CEOs of Facebook, Google, and Twitter went before the House Energy and Commerce Committee for precisely the kind of bipartisan struggle session they’ve faced at multiple prior hearings. While Jack Dorsey, Sundar Pichai, and Mark Zuckerberg absolutely deserve to be dragged by whatever means possible, the hearings are quickly becoming a ritualistic washing of hands in which the assembled members of Congress yell at unpopular tech CEOs instead of actually passing any legislation to address their pet concerns (misinformation and hate speech for Dems, why a grainy .bmp file of Donald Trump giving a thumbs up doesn’t appear at the top of every webpage for Republicans).

Some highlights:

  • Members of Congress still cannot pronounce the surname “Pichai,” which is two syllables and not all that complicated.
  • Republicans finally added another issue to their playbook than screaming about censorship of conservatives: social media’s impact on children.
  • Zuckerberg explained that misinformation about the climate isn’t as harmful as misinformation about the coronavirus, which conveniently explains why Facebook doesn’t do anything about it.
  • Representative Peter Welch asked the three CEOs whether they would support the creation of a Federal Trade Commission-like agency to regulate social media sites; Zuckerberg, who has been a major beneficiary of the FTC’s half-hearted approach to regulation, enthusiastically responded that could be “very effective and positive.”
  • More generally, the CEOs agreed that there needs to be some type of regulation of social media—though possibly just to placate Congress into summoning them to fewer hearings, and they were generally vague on what kind of regulations they would actually support beyond mandating greater transparency and accepting more liability for user-generated content.
  • Confronted on the issue of whether they would ban a dozen anti-vaxxers who bear wildly disproportionate responsibility for hoaxes, misinformation, and conspiracy theories circulating about vaccines on their sites, all three CEOs waffled.
  • In an extremely uncomfortable moment starting at 2:35:15 in this YouTube stream, Representative Billy Long asked each of the CEOs whether or not they understood the difference between “yes” and “no” before asking them if they had been vaccinated against the coronavirus yet. Pichai was the only one who said yes.
  • The assembled CEOs generally evaded addressing or defending their actual business models, which prioritize user growth and engagement, and thus revenue, over just about anything else.

Amazon is now tweeting about whether or not its employees piss in bottles

Everyone’s favorite robber-baron empire has had a lot of fun online this week trying to “own” critics and failing miserably in the process. This all started when Dave Clark, the CEO of Amazon Worldwide Consumer, practiced his tight five for the Comedy Store by tweeting a fun little jab: he often says “we are the Bernie Sanders of employers, but that’s not quite right because we actually deliver a progressive workplace.”

This could be charitably described as misreading the room. The heckling escalated rapidly when Representative Mark Pocan pointed out the well-documented trend of Amazon warehouse workers being pressed so hard they have to urinate (and sometimes poop) in bottles, which the official Amazon News account condescendingly responded to with “You don’t really believe the peeing in bottles thing, do you? If that were true, nobody would work for us.”

This is more than a little like some cartoon banker dressed like Mr. Monopoly yelling, “You don’t really believe the locking the shirtwaist factory stairwells thing, do you? If that were true, nobody would work for us,” over the sound of a fire alarm.

Amazon workers and drivers on numerous occasions have confirmed they sometimes have to pee and poop in things that are not toilets to hit company quotas, something the company is quite aware of. As a result of their pathetic little attempt at a clapback, the Google News results for “Amazon pee in bottles” now look like this (and go on and on like this):

Screenshot: Google News

Just absolutely phenomenal work here, boys.

In what is presumably entirely unrelated news, Amazon is hiring a new social media manager.

Elon finally regrets a tweet

Tesla and SpaceX CEO Elon Musk, the only person in history to be fined $20 million by the Securities and Exchange Commission, sent a tweet at 4:18 a.m. on Friday stating “I think there is a >0% chance Tesla could become the biggest company.” He perhaps had that settlement on his mind when he deleted a subsequent tweet saying that could happen “Probably within a few months.”

Per the Washington Post, Musk mashing that delete button caused a minor panic among Tesla stockholders:

Musk boasted early Friday to his nearly 50 million Twitter followers that his company could be “the biggest” in a few months. It came less than a day after the National Labor Review Board upheld a 2019 ruling that determined Tesla engaged in unfair labor practices and called on the company to have Musk delete a tweet from 2018.

Tesla shares were hovering near $608 shortly before 2 p.m. EDT, after an otherwise uneventful morning session. The company’s market cap tumbled to $586.7 billion, losing more than $26 billion over the span of four hours.

As the Post noted, this is just one day after the National Labor Relations Board ordered Tesla to have Musk delete this 2018 tweet threatening labor organizers, which Musk has not done.

You live by the post, you die by the post.

Shrimp guy gets milkshake ducked in record time

Social media was briefly delighted by the tale of a man named Jensen Karp, who tweeted a complaint to the Cinnamon Toast Crunch account asking it to explain why cinnamon-encrusted shrimp tails had ended up in his bag of cereal. After his initial tweet went viral, Karp spent days tweeting many, many more times about the incident.

Unfortunately for Karp, the attention also brought intense scrutiny to his backstory. That began with fun revelations, such as that he was once an unsuccessful rapper named “Hot Karl” and is married to Danielle Fishel, who played Topanga in Boy Meets World. It ended with considerably more disturbing ones, as several women on Twitter accused Karp of being a serial manipulator and emotional abuser, and of being disrespectful to Black colleagues. (Podcaster Melissa Stutten wrote he was a “manipulative gaslighting narcissistic ex-boyfriend who once told me he was surprised I hadn’t killed myself because my life was so worthless,” while writer and former Karp colleague Brittani Nichols wrote he had inserted racist lines into the scripts of TBS rap battle show Drop the Mic.)

In other words, he got Milkshake Ducked in record time:

One could call this a cautionary tale, but the moral isn’t ‘never tweet’ so much as don’t be like this guy.

Ship. Ship. Ship.

Everyone is living vicariously through the big ship that’s blocking the Suez Canal (and a massive percentage of world shipping) and has exhibited no signs it intends to get moving anytime soon. It’s possibly the first relatable news event in years! Anyhow, here’s a bunch of tweets about it.

We regret to inform you the “Cash Me Outside” girl is selling NFTs

The entire Earth is now being converted into a giant block of computronium that will be worth approximately $42.50 after a “market correction,” as evidenced by the fact that the “Cash Me Outside” meme girl Danielle Bregoli—who is somehow now the rapper Bhad Bhabie—is getting in on non-fungible tokens (NFTs). NFTs are essentially a complicated, blockchain-powered way of turning massive amounts of electricity into digital trading cards that in some cases are selling for millions of dollars, despite the fact they will likely be worth absolutely nothing in just a few months or years.

Anyhow, Bhad Bhabie is selling 20 NFTs, per HypeBeast, which writes that the sale includes “original works based on the biggest meme of 2017 and focusing on its dominance, her rise to fame, the success of her music and meme culture.” That includes the chance to own the “Cash Me Outside” meme:

The first group of NFTs will be released on March 26, Bregoli’s 18th birthday, via Opensea, then on March 29 via Rarible and March 31 via Zora. The collab between Bhad Bhabie and Flue Block Arts will also include a mega package on Opensea that includes ownership of the “Cash Me Outside” meme transferred from the artist to the buyer, one NFT of each of the visual works, a personalized video of the transfer sale from Bregoli to the buyer that will be posted on both her Instagram and YouTube and a 16-bar verse feature from Bhad Bhabie.

Also, recording artist Ja Rule, who previously managed to jettison himself mostly clear of the explosive radius of the Fyre Fest debacle, is selling an NFT of the Fyre Fest logo for $122,000. OK.

If nothing else, you have to respect Ja Rule’s deep commitment to scams.

Frank. It’s just called Frank

MyPillow goblin Mike Lindell, who is currently being sued for $1.3 billion by Dominion Voting Systems for promoting hoaxes and conspiracy theories claiming it helped steal the 2020 elections for Joe Biden, is launching a social media site. Allegedly. No one really knows whether it exists or is just another Lindell fantasy. It’s possible there is a small army of coders locked in the basement of the MyPillow factory, who knows.

But this week we learned two critical pieces of info: Mike Lindell’s new social media site is named Frank, and it’s billed as a platform for Americans who want to defend life, liberty, and all the freedoms that have marked America as the longest-running Constitutional Republic in the history of the world.

This poses a dilemma, though, because as we previously noted, the former president also has ephemeral plans for a censorship-free social media site for people who think their guns whisper to them at night.

There’s only one solution: Donald Trump and Mike Lindell must fight to the death. Possibly in a gladiatorial format, maybe jousting, could also be a cage match, or perhaps an old-timey duel? What’s important is that two old men of dubious lucidity enter, one old man leaves—as the tech bro CEO of a startup social media firm that possibly exists entirely within their heads. But watch out, Mr. Trump. Lindell looks like a biter.

Big Tech CEOs Waffle on Banning the 12 Major Anti-Vaxxers at Congressional Hearing

Anti-vax conspiracist Robert F. Kennedy Jr. in Berlin in August 2020.
Photo: Sean Gallup (Getty Images)

After a report from the Center for Countering Digital Hate (CCDH) and Anti-Vax Watch found that a huge percentage of misinformation and conspiracy theories about vaccines can be traced back to just a dozen people, the CEOs of Facebook, Google, and Twitter told Congress they weren’t sure they would ban them.

The CCDH/Anti-Vax Watch report found that some 73 percent of misinformation on Facebook, and 17 percent on Twitter, is linked to a group of 12 accounts including prominent anti-vaxxers Joseph Mercola, Robert F. Kennedy Jr., Ty & Charlene Bollinger, Sherri Tenpenny, and Rizza Islam. The report also identified what it concluded were clear violations of platform policies on the spread of disinformation about the novel coronavirus pandemic and vaccines in general. The report was prominently cited in a letter by 12 state attorneys general to Twitter CEO Jack Dorsey and Facebook CEO Mark Zuckerberg demanding they do more to fight coronavirus-related misinformation; according to the Washington Post, this mirrors internal Facebook research showing relatively tiny groups of users are primarily responsible for flooding the site with anti-vaccine content.

“Analysis of a sample of anti-vaccine content that was shared or posted on Facebook and Twitter a total of 812,000 times between 1 February and 16 March 2021 shows that 65 percent of anti-vaccine content is attributable to the Disinformation Dozen,” the report states. “Despite repeatedly violating Facebook, Instagram, and Twitter’s terms of service agreements, nine of the Disinformation Dozen remain on all three platforms, while just three have been comprehensively removed from just one platform.”

“Research conducted by CCDH last year has shown that platforms fail to act on 95 percent of the Covid and vaccine misinformation reported to them, and we have uncovered evidence that Instagram’s algorithm actively recommends similar misinformation,” they added. “Tracking of 425 anti-vaccine accounts by CCDH shows that their total following across platforms now stands at 59.2 million as a result of these failures.”

At Thursday’s hearing on disinformation before the House Committee on Energy and Commerce featuring Dorsey, Zuckerberg, and Alphabet/Google CEO Sundar Pichai, all three either hedged or simply reiterated that their companies have rules when pressed by Representative Mike Doyle on the dozen anti-vaxxers, per TechCrunch.

Zuckerberg said Facebook would need to see specific examples of the content in question, after which Doyle interrupted him. Pichai pivoted to a statistic about how many YouTube videos with misleading information about the pandemic the company had deleted, but added that “some” otherwise infringing content was allowed “if it’s people’s personal experiences.” (Doyle also cut him off after re-asking the question.) Dorsey simply stated Twitter moderators “remove everything against our policy.”

Facebook has previously raised a defense similar to Pichai’s, with Facebook’s head of health Kang-Xing Jin writing in an editorial last month in the San Francisco Chronicle that “Vaccine conversations are nuanced, so content can’t always be clearly divided into helpful and harmful. It’s hard to draw the line on posts that contain people’s personal experiences with vaccines… We are working with experts to identify and remove widely debunked hoaxes that could put people at risk for harm, while also providing the facts from trusted sources that can help us combat vaccine misinformation during this critical time.”

Anti-vax misinformation on social media and video sites like YouTube is believed to be a major factor, though by no means the only one, in what medical researchers call vaccine hesitancy across the country. Research has shown a strong link between anti-vax content on social media sites and the propagation of skepticism about the effectiveness and safety of vaccines. Polling has shown that double-digit percentages of Americans say they won’t take a coronavirus vaccine, with a clear partisan bent: Republicans and Donald Trump supporters are more opposed. However, the number of people who say they are unwilling to get vaccinated has dropped rapidly as the vaccine becomes more widely available and health officials work to reassure the public that the shots on the market are safe and effective, CNN reported earlier this month.

“Coronavirus vaccines only work if people actually get them. Pseudoscience coronavirus conspiracy theories peddled by a small number of uninformed anti-vaxxers have reached tens of millions of social media followers,” Connecticut Attorney General William Tong, one of the state attorneys general who signed the letter to Facebook and Twitter, told the Washington Post in a statement. “These posts are in flagrant violation of Facebook and Twitter policies. Facebook and Twitter must fully and immediately enforce their own policies, or risk prolonging this pandemic.”

Facebook told Engadget in a statement that actions it takes to limit the reach of accounts spreading misinformation include “reducing their distribution or removing them from our platform” and that it has “already taken action against some of the groups in this report.” A Twitter spokesperson told the site it does not act against “every instance of misinformation” but removes tweets that pose a risk of “serious harm.”

Here’s the Weird ‘Clock’ on Jack Dorsey’s Kitchen Counter

Screenshot: Gizmodo/C-SPAN

The House Energy and Commerce Committee hearing with the CEOs of Google, Facebook, and Twitter is already raising questions. Namely, what the heck is that thing on Jack Dorsey’s kitchen counter?

The freshly trimmed Twitter CEO appeared before the congressional committee to answer questions about social media’s role in spreading disinformation, particularly in the lead-up to the Jan. 6 attack on the U.S. Capitol. He appeared alongside (to the extent that anything is “alongside” in a Zoom-based hearing) Google’s Sundar Pichai and Facebook overlord Mark Zuckerberg.

As Dorsey promised to “make our internal processes both more robust and more accountable to the people we serve,” focus quickly turned to his suspiciously tidy background. In the lower-left corner of the screen, viewers spotted a bizarre-looking… thing. It looked like a clock at first, reading “1952”—which seemed at first glance like the current time, or some approximation of it. But then the numbers switched to this:

Screenshot: Gizmodo/C-SPAN

“676274” is not a time I’m aware of. And, in fact, it’s not the time at all. The gadget is called a BlockClock, and it’s exactly the kind of thing you’d expect to find in Jack’s kitchen.

The BlockClock, created by bitcoin hardware company Coinkite, allows users to automatically display the prices of a variety of cryptocurrencies like bitcoin or ether. The wifi-connected device also lets users “see blocks as they are published by miners and connect Opendime to display balance, fiat value, and deposit QR codes,” as Coinkite describes its functionality.

Dorsey, who is also founder and CEO of fintech company Square, is famously a cryptocurrency evangelist. In early February, he set up his own bitcoin node, which means he’s tapped into the system that makes bitcoin work on a technical level. Later that month, Dorsey announced that he and Jay Z had invested 500 BTC in an endowment to develop bitcoin in Africa and India. At current exchange rates, that translates to nearly $26 million—a conversion you’d know at a glance if you, like Jack, owned a BlockClock Mini.

For those of you who see this insufferable device and think, “hell yeah, I need that,” you can order the BlockClock Mini at Coinkite for $400 (0.0078 BTC).
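
For the curious, the at-a-glance conversion a BlockClock-style display performs is just multiplication by the going exchange rate. Here’s a minimal sketch in Python, assuming the roughly $52,000-per-bitcoin rate implied by the article’s own figures (real rates move constantly, so the numbers are illustrative):

    def btc_to_usd(amount_btc: float, usd_per_btc: float) -> float:
        """Convert a bitcoin amount to U.S. dollars at a given exchange rate."""
        return amount_btc * usd_per_btc

    USD_PER_BTC = 52_000  # rate implied by 500 BTC ~= $26 million; an assumption, not a quoted price
    print(btc_to_usd(500, USD_PER_BTC))     # ~26,000,000 -- the Dorsey/Jay Z endowment
    print(btc_to_usd(0.0078, USD_PER_BTC))  # ~406 -- roughly the BlockClock Mini's $400 price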

Senate Considering Hauling in Facebook, Twitter CEOs so They Can All Ramble for Hours About Whatever BS Gripe They Have This Time

Twitter CEO Jack Dorsey at a Congressional hearing in November 2020.
Photo: Hannah McKay (Getty Images)

Hellfeed is your biweekly resource for news on the current heading of the social media garbage barge.

If you thought last year’s clusterf*ck of a Senate hearing on social media was a good use of everyone’s time, congrats! The Senate is considering calling Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and the rest of the gang back together for another hearing, this time before the Judiciary Committee.

Per Politico, Senator Chris Coons told the site on Thursday that he “[thinks] there’s reason for us to ask them to come before us again.” While the plans aren’t final and Coons said he was still negotiating with his Republican counterparts, he added his expectation is that “we’ll look at the dynamics of social media and I think we’ll look at the intersection between privacy, civil liberties and civil rights in the digital context.”

Last year’s hearing was before the Commerce Committee. At the time, it was still controlled by Republicans, but Democrats joined their colleagues across the aisle in a unanimous vote to subpoena Zuckerberg, Dorsey, and Alphabet-Google CEO Sundar Pichai. Democrats’ rationale at the time was that the committee chair, GOP Senator Roger Wicker, had promised the hearing would reserve time for Dems’ preferred issues like antitrust and not solely serve as a vehicle for conservatives to scream at the assembled CEOs about liberal bias. Of course, the latter thing is exactly what happened.

With Democrats in control, perhaps this hearing will go a little smoother. Anything’s possible, right? ¯\_(ツ)_/¯

It’s been a while since our last edition of Hellfeed, so here are some of the biggest developments in the social media world over the last few weeks.

Facebook is building a version of Instagram for, uh, kids

It’s long been the case—based both on safety concerns like bullying and pedophiles and, more cynically, laws surrounding the collection of user data on children—that Facebook and its subsidiary Instagram have been age-gated to those 13 and older. Of course, this has been completely unenforceable without solutions nobody likes, such as requiring new users to provide photos of their IDs. Children have slipped onto the site in droves, and like their teenage counterparts, sometimes face extreme amounts of bullying and harassment, not to mention the occasional message from pedophiles.

As originally reported by BuzzFeed, Facebook has a jaw-dropping solution to this: A post on an internal company message board by Instagram vice president of product Vishal Shah said the company is working on “a version of Instagram that allows people under the age of 13 to safely use Instagram for the first time.” What could go wrong? Well, YouTube Kids—which unlike an Instagram for children, doesn’t even involve kids uploading videos of themselves—resulted in claims of illegal data collection and the site being flooded with disturbing videos uploaded by bots or horrible trolls. YouTube was eventually forced to overhaul the whole product. Facebook is mulling a product for children based around one that lets adults upload everything from drug cartel glamour posts to pro-eating disorder content, so… yeah.

As Gizmodo colleague Matt Novak pointed out, pretty much everything about this product and how it will function is an unknown at this point. But it does reek of an effort to get ever-younger users signed up for the Facebook data machine, thinly veiled with the excuse that it’s trying to make kids already on Instagram safer. Yeesh.

Facebook Groups: Now with slightly more oversight!

Facebook also announced this week that it’s taking steps to clean up Groups, the interest-based communities that it tried to juice in recent years before many of said groups inevitably became hives full of QAnon conspiracists, election truthers, anti-vaxxers, far-right propagandists, and the people who organized the Capitol riots. Changes include prohibiting users who break rules from posting or commenting in Groups for a period of time, putting warning labels on groups that have broken rules, and requiring tighter moderation of rules-violating communities. Surely they’ll whack that mole this time!

Parler is somehow getting worse, actually

A few fun updates from our friends at Parler, the far-right Facebook/Twitter clone for people who love issuing death threats and would marry a gun if they could just choose one:

  • While the site has managed to crawl back onto the web after losing its web hosting and app store placements over its role in the Jan. 6 riots at the Capitol, it hasn’t convinced any of the tech companies that ditched it—Amazon, Apple, and Google—to do business with it again.
  • Parler claims to now have algorithms to detect content calling for violence, but there’s no reason to believe anything will change. Apple rejected the company’s appeal to get back on the App Store, after which Parler reportedly fired its whole iOS team.
  • Republican megadonor and Parler investor Rebekah Mercer, a hardliner on the whole giving-racists-and-conspiracy-theorists-a-giant-megaphone-to-spew-hate-online issue, is reportedly personally bankrolling the site with “big checks” at this point and flexing her muscles to preserve that vision. The new CEO, apparently a Mercer pick, is a Tea Party activist.

Definitely not a ticking time bomb waiting to go off for a second time or anything.

Posting on Gab was maybe not the smartest idea

Gab, Parler’s neo-Nazi uncle, has been hacked—big time. Whistleblower site DDoSecrets announced it had given a group of reporters some 70 gigabytes of data lifted from the company’s servers, including profile and user data, posts, private messages, and more.

A similar situation played out on a far smaller scale with white supremacist forum Iron March, which had its SQL database dumped on the Internet Archive by an unknown hacker in 2019. The result was that numerous white nationalists/supremacists, fascists, and current/former members of violent groups like the terroristic Atomwaffen Division had their identities publicly revealed, which is sort of inconvenient when you’re trying to anonymously spark a race war.

The Gab leak is already providing a similar look at what’s going on behind closed doors there, and the sheer size of the leak is likely to keep researchers and reporters busy for a while.

I shall simply open my own failing internet hellhole

You may remember MyPillow founder Mike Lindell from his previous greatest hits, such as months of increasingly depraved promotion of voter fraud hoaxes (TL;DR: Donald Trump won, apparently!) and the $1.3 billion lawsuit he is facing from an election tech manufacturer over that. He’s definitely not mad that he got banned from Twitter, which is why he’s announced he is launching his own free speech site, Vocl. Per Business Insider:

In an interview with Insider, [Mike] Lindell said he plans to call the site “Vocl” and he described it as a cross between Twitter and YouTube.

“It’s not like anything you’ve ever seen,” he said to Insider in a Wednesday interview. “It’s all about being able to be vocal again and not to be walking on egg shells.”

Vocl, he said, isn’t like Gab or Parler, two far-right social-media sites. It’s a cross between Twitter and YouTube meant “for print, radio, and TV,” he said.

Sure thing, Mike.

ISIS is trying to hit its crowdfund goal

Facebook, Telegram, PayPal, and other big tech firms are continuing to serve as a vehicle for crowdfunding the Islamic State terror group, often via accounts that are fake or run by sympathizers and middlemen posing as humanitarian interests, according to an in-depth feature on Rest of World:

Vera Mironova, a visiting fellow at Harvard University who has extensively monitored online terrorist fundraising campaigns, notes that posts follow the mores of their host platform. “So secretive campaigns would not be posted on Facebook, or if they were, they would sound more humanitarian and not use words like ‘ISIS.’ But the ones on Telegram go full hurrah,” she explained. This same dynamic plays out on a country-by-country level, Mironova added, and is especially apparent on payment platforms. “Some countries — let’s say Russia or parts of Eastern Europe, Uzbekistan, Tajikistan — they just do not care,” she said. “ISIS-linked campaigns coming from those places absolutely won’t hide anything. … They could use any platform; they even transfer money between bank cards.”

The full piece is worth a read, because this kind of fundraising is now a permanent fixture of the internet and will only become more relevant going forward.

You are not going to get rich tweeting. You are not going to get rich tweeting

Twitter, which has been introducing new features at a rate of approximately 10 per minute, has announced that it is working on Super Follows, a tool for users to launch paid subscriptions with access to private feeds or posts. While feed-addicted journalism and media types might be salivating at the prospect of being paid to waste time, Twitter has yet to clarify whether it will allow the most obvious application that will actually make money: porn.

Accessibility on social media apps continues to be a challenge

The Washington Post has an interesting feature on how apps like TikTok have tried to implement accessibility features but still lag far behind on implementing or improving tools like speech-to-text transcription—making them harder to use for those with deafness, hearing loss, or visual impairments. It’s a good roundup of the technical challenges behind implementing such features on the one hand, and of how tech firms have sometimes failed to prioritize working on them on the other.

Hoo boy, Substack sure made a mess

Newsletter platform Substack isn’t really a social media site. But it essentially wouldn’t exist without Facebook and Twitter, where the various journalists, commentators, and web personalities who actually write those newsletters generated and cultivated their followings in the first place. Besides, what we will euphemistically refer to as “Substack discourse” is now approximately three hundred percent of Twitter.

In the past week Substack has come under fire for its practice of luring high-profile writers to set up shop on the site by writing huge “advance payment” checks. That might be less controversial were it not for the fact that many of its most prominent power users regularly write raving diatribes about supposedly out-of-control leftism, “cancel culture,” “identity politics,” and stuff like that. Glenn Greenwald, one of the site’s biggest success stories (and who says he did not accept an advance check from Substack), uses his account to further vitriolic feuds such as one with a specific New York Times reporter. Another, Irish TV writer Graham Linehan, aggressively promotes anti-trans rhetoric.

Annalee Newitz, founder of our sister blog io9, penned a Medium post arguing that Substack’s habit of paying writers, sometimes without disclosure, and seemingly allowing others with huge followings to violate its rules essentially makes it less of a platform than an editorial publication—except one with none of the editorial standards followed by reputable ones:

So Substack has an editorial policy, but no accountability. And they have terms of service, but no enforcement. If you listen to [co-founder Hamish McKenzie], they don’t even hire writers! They just give money to people who write things that happen to be on Substack. It’s the usual Silicon Valley sleight-of-hand move, very similar to Uber reps claiming drivers aren’t “core” to their business. I’m sure Substack is paying a writer right now to come up with a catchy way of saying that Substack doesn’t pay writers.

(No, no one means “publication” in the way Josh Hawley does, stop asking.)

Substack wrote in a blog post that misunderstandings about the actual makeup of the advance payments program have resulted in a “distorted perception of the overall makeup of the group, leading to incorrect inferences about Substack’s business strategy.” But because there’s no transparency into who Substack is paying beyond those writers who have chosen to disclose they cashed a check, you’re just gonna have to take their word for it.

And the Q of QAnon is…

An HBO documentary series airing this weekend claims to have discovered the identity of QAnon’s Q, the individual or individuals behind a sprawling pro-Trump conspiracy theory that infected the Republican Party (primarily via Facebook) and provided much of the manpower at the Capitol riots. It’s not exactly a huge surprise that the culprit named here is Ron Watkins, the administrator of imageboard sites 8chan/8kun, where Q posted for years after leaving 4chan.

That doesn’t necessarily solve the mystery of who came up with Q in the first place, as Watkins may have simply taken over the Q account from its original creator, and whatever case Q: Into the Storm believes it has to prove Watkins is Q has yet to be vetted. Either way, don’t think we’re done with this whole mess anytime soon.

The ban list

Ladies and gentlemen, drum roll please…

  • QAnon cheerleader and (unfortunately) Representative Marjorie Taylor Greene was temporarily suspended from Twitter for 12 hours thanks to an “error,” though one could argue one wasn’t actually made.
  • YouTube took down a video from bigoted talk show host Steven Crowder, not for mocking Black speech and culture in an explicitly racist way or suggesting Chinese restaurants spread the novel coronavirus, but for violating anti-misinformation policies by conflating the pandemic death toll with that of the common flu. That’s because they’re cowards afraid of backlash from conservatives.
  • Facebook banned the military of Myanmar, which might have been more effective had it done so before the military used the site to incite genocide.
  • Also, Facebook briefly banned news links across the entire country of Australia in an inspiring corporate protest against a law forcing them to pay out a share of revenue to news sites.
  • Twitter accidentally auto-banned a lot of people, including Gizmodo weekend editor Alyse Stanley, for posting the word “Memphis.”
  • TikTok banned the use of the “super straight” hashtag, which claimed that being transphobic is a gender identity, and its creator Kyle Royce.
  • World’s worst lawyer Rudy Giuliani was banned from YouTube for two weeks for refusing to stop insisting his ex-boss, who hates him, won the 2020 election.

Honorable mention: Neera Tanden, Joe Biden’s nominee to run the Office of Management and Budget, didn’t get banned from Twitter. But her tweets attacking numerous members of Congress did get her “banned,” in a sense, from further consideration for the job.

Zuck Slowly Shrinks and Transforms Into a Corncob Ahead of Apple’s Looming Privacy Updates

Photo: Drew Angerer (Getty Images)

Facebook has pushed back against Apple’s planned rollout of anti-tracking tools at every possible opportunity, but now the social media giant seems to be changing its tune in a last-ditch effort to save face. On Thursday, CEO Mark Zuckerberg said Facebook may actually be in a “stronger position” after the privacy updates to iOS and is optimistic about how the company will weather this change, according to CNBC and CNET.

“The reality is is that I’m confident that we’re gonna be able to manage through that situation well and we’ll be in a good position,” he said in a Clubhouse room Thursday per the outlets.

With Apple’s planned privacy updates for iOS 14, which are scheduled to roll out sometime this spring, the company aims to give iOS users more transparency and control over their data by requesting permission before apps can track their activity across other apps and the web.

Facebook hasn’t been too keen on that idea given that roughly 98% of its revenue stream depends on targeted ads, which are built around monitoring a person’s browsing habits. The company launched a campaign to convince folks that personalized ads are good, actually, which has so far involved taking out full-page ads in several leading newspapers to condemn Apple and running a video ad claiming that Apple’s privacy updates are killing small businesses by not giving Facebook and other apps free rein to hoover up your data.

(As you might already suspect, Facebook’s claims have been found to be misleading at best and self-serving propaganda at worst. While advertising might become slightly more difficult for small businesses and developers with Apple’s new updates, Facebook stands to take the biggest revenue hit, not the little guys.)

Now, though, with Apple’s updates looming close on the horizon, Facebook is apparently adopting a new strategy: corncobbing, a.k.a. continuing to embarrass oneself rather than admit to being brutally owned.

On Thursday, Zuckerberg reiterated concerns that Apple’s decision could still hurt small businesses and developers, but also expressed hope that Facebook might benefit from the situation, CNBC and CNET report.

“It’s possible that we may even be in a stronger position if Apple’s changes encourage more businesses to conduct more commerce on our platforms by making it harder for them to use their data in order to find the customers that would want to use their products outside of our platforms,” he said.

That’s a far cry from the bleak picture Facebook painted before. In August 2020, the company warned that Apple’s updates could lead to a more than 50% drop in its Audience Network advertising business, which lets mobile software developers personalize ads based on Facebook’s data. Facebook’s chief financial officer David Wehner also expressed concern it could hurt the social network’s ability to effectively target ads to users.

Apple and Facebook did not immediately respond to Gizmodo’s request for comment. Apple has repeatedly defended its planned privacy updates against Facebook’s accusations, arguing that these new features aren’t getting rid of targeted ads entirely but instead giving users the chance to opt out if they wish.

Social Media Execs Fuel Extremist Violence Globally, Report Finds

CEO of Facebook Mark Zuckerberg appears on a monitor as he testifies remotely during a hearing to discuss reforming Section 230 of the Communications Decency Act with big tech companies on October 28, 2020 in Washington, DC.
Photo: Michael Reynolds (Getty Images)

Under the pretext of “newsworthiness,” U.S. tech leaders have, for years, aided political leaders in spreading extremist views underpinned by racial animus and all other forms of prejudice, in turn giving rise to an explosion of violence and persecution targeting vulnerable communities worldwide.

A new report this week by the Global Project Against Hate and Extremism (GPAHE) lays bare the consequences of Silicon Valley’s complicity and neglect. As high-profile executives ignored and frequently profited off the work of prominent extremists—some of whom they also directly collaborated with in shady business arrangements—they helped arm those extremists with powerful constituencies, often by casting immigrants and minorities as existential threats in their respective countries.

The report comes as the United States is in the midst of grappling with a wave of anti-Asian sentiment online, which preceded thousands of documented accounts of harassment, physical assault, and civil rights violations against Asian Americans. The Pew Research Center last summer reported that 31% of Asian Americans had experienced slurs or jokes about their race or ethnicity, more than any other group since the start of the pandemic.

Many users and news commentators aghast at the uptick in violence against Asians and Asian Americans have rightly drawn a straight line to racist rhetoric depicting the covid-19 outbreak as a “Chinese virus,” sentiments that platforms such as Twitter helped to normalize throughout 2020. A cursory review of the Trump Twitter Archive reveals that former President Donald Trump’s tweets containing the phrase “China virus” or “Chinese virus” received more than 2.1 million retweets before his suspension in January over an unrelated offense.

“For years, Trump violated the community standards of several platforms with relative impunity,” GPAHE’s report notes. “Tech leaders had made the affirmative decision to allow exceptions for the politically powerful, usually with the excuse of ‘newsworthiness’ or under the guise of ‘political commentary’ that the public supposedly needed to see.”

Only after instigating a mass-violence event at the U.S. Capitol was Trump permanently expelled from Twitter; Facebook, Instagram, and YouTube, meanwhile, chose only to “indefinitely suspend” the president while they internally deliberated the decision. Facebook has passed the buck to its newly formed Oversight Board (its so-called “Supreme Court”), which is currently composed of 20 members hand-picked by the company.

“[T]he fact that it took an actual insurrection, planned and encouraged on the companies’ own services, to get Facebook, Twitter, et al., to move is unbelievably discouraging,” GPAHE says, while noting that Facebook COO Sheryl Sandberg’s response was to push the blame onto her competitors. “I think these events were largely organized on platforms that don’t have our abilities to stop hate and don’t have our standards and don’t have our transparency,” Sandberg said during a Jan. 11 interview.

Citing a Media Matters for America (MMFA) investigation last month, GPAHE notes that more than 6,000 of Trump’s Facebook posts in the year preceding the Capitol violence—nearly a quarter of his 2020 posts—contained extremist rhetoric and disinformation about the pandemic and the 2020 election. In the year leading up to Jan. 6, Trump used Facebook to push false or misleading information about the election specifically at least 363 times, MMFA found.

Like Twitter, Facebook openly and intentionally authorized Trump to violate its community standards in exchange for user engagement, influence, and profit.

In one noteworthy incident, amid last summer’s protests and adjacent property destruction incited by police brutality against Black people, Trump declared via Twitter that “when the looting starts, the shooting starts”—a remark widely viewed as an endorsement of violence by his own supporters, some of whom traveled in heavily armed caravans to major protest sites.

While Twitter “hid” Trump’s post behind a written warning (one likely to be ignored by his own followers, who have been conditioned to view social media companies as hostile to their politics), Facebook CEO Mark Zuckerberg decided to do nothing, despite the outcry by thousands of his own employees. Zuckerberg instead sought to frame Facebook’s inaction as a public service, claiming that “accountability for those in positions of power can only happen when their speech is scrutinized out in the open.”

With its CEO having acknowledged the post publicly, Facebook was itself arguably encouraging violence at that stage by way of its decision to continue circulating the message to at least hundreds of thousands of users. “Within days,” GPAHE notes, “it had been shared over 71,000 times and reacted to over 253,000 times. The message was also overlaid onto a photo shared on Trump’s Instagram account, which quickly received over half a million likes.”

“The company insists the use of incendiary populist language predates social media, so its spread is unrelated to Facebook,” the report says. “This position completely ignores how Facebook has manipulated the online space in favor of extremism and how political abuse of social media has altered the American political landscape.”

Facebook’s inaction in the face of warnings by experts about the growing calls for violence by extremists across its platform is well documented.

The group Muslim Advocates, whose leaders have for years sat in meetings with top Facebook officials—including Zuckerberg and Sandberg, personally—published a timeline last year showing its efforts to warn company officials about armed militias and white supremacist groups organizing events that targeted communities based on their race and religion. The group said it was forced to release the timeline after Zuckerberg claimed an “operational mistake” was responsible for Facebook hosting a militia event page telling members to bring weapons to a protest in Kenosha, Wisconsin. (The event page had been flagged by users at least 455 times.)

The GPAHE report further highlights international extremists fueled by far-right ideologies who have risen to power with the help of U.S.-based social media platforms. The Alternative für Deutschland (AfD), for instance, which is “rabidly anti-Muslim, anti-refugee, and anti-LGBTQ,” has relied heavily on Facebook to spread messages of hate. As GPAHE notes, in 2017 the AfD rose to become Germany’s third-largest political party, the first to promote far-right views in the country in over a half-century.

“AfD’s rise from a fringe party in 2013 to an increasingly formidable extremist force is deeply tied to the party’s harnessing of social media, researchers found. Starting in 2016, the AfD built up a large following on both Facebook and Twitter by sharing a high volume of sensationalist tweets and posts,” GPAHE says, adding that by 2019, “AfD maintained 1,663 Facebook pages, more active pages than all the other German political parties combined.”

The report further highlights the rise of President Rodrigo Duterte of the Philippines, a serial human rights abuser who, upon taking power, oversaw the unlawful killings of thousands of his own citizens and jailed his opponents on baseless charges. Facebook, a platform in use by an estimated 97% of Filipinos, worked closely with the Duterte campaign and even co-sponsored a forum that was broadcast, according to an exhaustive 2017 Bloomberg report, on 200 television and radio stations. Once Duterte was elected, the company doubled down on the relationship. Facebook celebrated Duterte’s fame, which it helped to create, dubbing him at one point the “undisputed king of Facebook conversations.”

Human Rights Watch last year documented the Duterte government’s extreme punishments for citizens accused of violating the country’s covid-19 restrictions, often broadcast on Facebook, including videos of people placed in “dog cages” where police and other officials “forced them to sit in the midday sun as punishment,” among other abuses. In one incident broadcast on Facebook Live by a police official, three members of the LGBTQ community were forced to kiss each other and “do a sexy dance” in front of onlookers, including a minor.

Similar accounts included in the report detail Facebook’s participation in helping to elect violent, bigoted officials in other nations, including the rise of Prime Minister Narendra Modi of India, who has sanctioned bloodshed against Muslims by rioters.

“Social media, and Facebook in particular, has had a horrifying damaging effect on democracies, societies and vulnerable populations around the world,” GPAHE says. “Bigoted populist leaders and far-right political parties across the globe have harnessed the power of social media to achieve political heights likely previously unattainable.”

GPAHE has issued specific recommendations in response to its findings, calling on social media companies to end “newsworthiness” exemptions globally, apply fact-checking to political advertisements, and implement “preventative genocidal protocols,” among others.

You can read the full GPAHE report, Democracies Under Threat, on the organization’s website.

Facebook Launches Tool to Help Users Get Covid-19 Vaccine

People receive the Pfizer covid-19 vaccine during opening day of the Community Vaccination Site, a collaboration between the City of Seattle, First & Goal Inc., and Swedish Health Services at the Lumen Field Event Center in Seattle, Washington on March 13, 2021.
Photo: Jason Redmond (Getty Images)

Facebook has launched a tool in partnership with Boston Children’s Hospital to help people find covid-19 vaccine appointments, according to a press release from the social media company early Monday. The tool only works in the U.S. right now, but Facebook says it hopes to roll out the vaccine finder in more regions as the coronavirus inoculations become more widely available.

Users can visit Facebook’s Covid-19 Vaccine Info Center, where they’ll be able to see hours of operation for locations with the vaccine, along with contact info and links to make an appointment online to get the jab. The Facebook tool is available in 71 different languages so far, according to the company.

“Today we’re launching a global campaign to help bring 50 million people a step closer to getting Covid-19 vaccines,” Mark Zuckerberg wrote in a Facebook post overnight.

“We’ve already connected over 2 billion people to authoritative Covid-19 information,” Zuckerberg continued. “Now that many countries are moving towards vaccinations for all adults, we’re working on tools to make it easier for everyone to get vaccinated as well.”

And while Zuck is correct that Facebook has started providing more authoritative covid-19 information in recent months, conspiracy theories about vaccines, among plenty of other things, found a safe home at Facebook for far too long—especially in private groups.

Image: Facebook

Over 100 million vaccine doses have been distributed in the U.S. as of this past weekend, a remarkable achievement that should give Americans some sense of relief, however reserved. There is light at the end of the tunnel.

What happens if you don’t use Facebook but still want to find an appointment to get the vaccine? The good news is that Facebook’s tool is simply built on top of an existing tool, VaccineFinder, which is available online at VaccineFinder.org. The tool was developed by the CDC, Harvard Medical School, and Boston Children’s Hospital, along with other health care partners.

To be clear, pointing out that the tool already exists is not to discount or belittle Facebook’s actions today. For some people, Facebook is the entire internet, and anything that helps get more people vaccinated is certainly a good thing.

But it’s also helpful that this isn’t a tool that can only be accessed through Facebook. Plenty of people have jumped ship from Facebook in recent years following a multitude of privacy concerns with the platform. And the last thing we need is for Facebook to build proprietary public health tools that can only be accessed if you have a Facebook account.