Facebook and Instagram Are Rolling Out (More) Financial Incentives for Influencers


Photo: AFP / Stringer (Getty Images)

If you thought the deal couldn’t possibly get sweeter for the influencers who flock to Facebook and Instagram to simultaneously bolster their social media followings and line their pockets, think again: On Tuesday, both platforms announced that they’ll be stepping up their respective games in the coming weeks by rolling out a suite of additional financial incentives aimed at keeping the creator class logged on and streaming.


During Instagram’s first Creator Week event, Mark Zuckerberg — the CEO of Facebook, which owns Instagram — debuted new features that will help influencers rack up “extra cash” in exchange for hitting certain milestones. According to Engadget, goals that will translate to extra cash include selling badges within streams and going live with other accounts on Instagram, as well as participating in Facebook’s “Stars Challenges,” which will reward creators for meeting certain streaming milestones and completing other predetermined tasks.

“We believe that you should be rewarded for the value that you bring to your fans and to the overall community,” Zuckerberg told creators during the event.

In addition to the new milestones, Instagram will also be rolling out an option for creators who sell their own products to link to them in-app, with additional options to earn commission directly from shopping posts.

The cash incentives seem explicitly designed to keep influencers, well, influencing, which, in addition to lining creators’ pockets, serves the dual purpose of attracting more users to Instagram and Facebook.

Notably, the bonuses also seem to be explicitly targeted at mid-range creators, rather than content behemoths with massive online followings. This seems to be in line with Zuckerberg’s recently stated goal of establishing a sort of “creator middle class” — the subset of influencers who, despite having substantial platforms, are not yet big or influence-y enough to merit sponsorship offers from big-name brands.

Does Tim Cook Know About Mark Zuckerberg’s Skills With a Spear?


Photo: Kenzo Tribouillard (Getty Images)

Mark Zuckerberg’s Facebook page is having an extremely normal one; read: Mark’s latest gladiatorial displays, set to the mighty riff of Audioslave. As you can see, he is capable of firing a bow and arrow and throwing a javelin, or a spear of some sort. Also, yesterday he dropped whatever still-intact public pretense might have remained and told Apple that Facebook is coming for it.

After posting two videos of paramilitary exercises on Sunday and Monday, Zuckerberg announced that Facebook will keep engagement and subscription tools free for creators until 2023, and that it won’t be robbing you like Apple (and “others,” which means “Google”) with notorious app store commissions. He wrote:

To help more creators make a living on our platforms, we’re going to keep paid online events, fan subscriptions, badges, and our upcoming independent news products free for creators until 2023. And when we do introduce a revenue share, it will be less than the 30% that Apple and others take.


That 30% is a reference to Apple’s commission on purchases made through apps downloaded from the App Store. In an announcement about Facebook’s free online events service, Fidji Simo, VP of the Facebook app, wrote that creators can blame Apple for taking their earnings. “We asked Apple to reduce its 30% App Store tax or allow us to offer Facebook Pay so we could absorb all costs for businesses struggling during COVID-19,” Simo wrote. “Unfortunately, they dismissed both our requests and [small and medium businesses] will only be paid 70% of their hard-earned revenue.”

Facebook fully intends to weaponize that fact in order to muster the influencers and turn them against certain Facebook competitors. Zuckerberg added that Facebook will be rolling out a tool that lets influencers compare their earnings by breaking down “how different companies’ fees and taxes are impacting their earnings.”

“More to come soon,” he warned.

Protect your neck.

Facebook has spent the better part of a year protesting Apple’s privacy update, which compels iPhone users to opt in to or out of data collection by apps. Not only is the change expected to hurt Facebook’s ad network, but it draws uncomfortable attention to the fact that apps like Facebook, Instagram, and Messenger (and apps using Facebook ads) track our movements while we browse other apps and the internet.

A personal Tim Cook-Mark Zuckerberg rivalry flourished into a spectacular New York Times piece detailing attempts by a Facebook-backed political firm to anonymously smear Cook, even inventing a fake presidential campaign to turn Donald Trump against him. Meanwhile, speaking to the media and governments, Cook has gotten to cast himself as a savior, raising the question: what will it take to psych out this guy?

Zuckerberg named an unrelated nemesis as the target audience for Saturday, June 5th’s war games, claiming that an unnamed “trail that I like to hike” ran out of hiking permits and now only offers hunting permits. The message to Tim Cook was clear: Mark didn’t want to become a hunter, but his hand was forced.


Trump Suspended From Facebook for at Least 2 Years


Photo: John Gurzinski/AFP via Getty Images (Getty Images)

Facebook announced on Friday that Donald Trump’s accounts on the company’s platforms—which include the former president’s Instagram account—are going to be suspended for a full two years. At the end of that two-year period, the company will “look to experts to assess whether the risk to public safety has receded” before reinstating his account.


This news comes about a month after the company’s Oversight Board voted to uphold the suspensions that struck Trump after he used the platform to praise people involved with the Capitol riots on January 6th. Because his account was officially suspended the following day, Facebook says that the two-year block will stay in place until January 7th, 2023.

According to a blog post by Facebook’s VP of global affairs, Nick Clegg, the two-year period is meant to be “long enough to allow a safe period of time after the acts of incitement, to be significant enough to be a deterrent to Mr. Trump and others from committing such severe violations in future, and to be proportionate to the gravity of the violation itself.”

At the time of the original Oversight Board vote, some members criticized the originally open-ended nature of the ban, demanding that Facebook re-review its decision and “justify a proportionate response” for the then-outgoing president. They gave Facebook six months to respond with a verdict that “should be consistent with the rules that are applied to other users of its platform.” This two-year stretch is meant to respond to those criticisms.

In the blog post, the company noted that “any penalty” that gets applied (or not applied) to Trump will inevitably be controversial.

“There are many people who believe it was not appropriate for a private company like Facebook to suspend an outgoing President from its platform, and many others who believe Mr. Trump should have immediately been banned for life,” Facebook wrote. “We know today’s decision will be criticized by many people on opposing sides of the political divide—but our job is to make a decision in as proportionate, fair and transparent a way as possible.”

In addition to its decision on Trump, Facebook also updated its policies regarding posts by all politicians, as first reported by the Verge. This includes publishing its “strike system” to let users “know what actions our systems will take if they violate our policies,” and limiting its “newsworthiness” standard, which the company used as the basis for letting Trump say shit like this. Facebook says it will publish any instance in which it applies its “newsworthiness allowance” and, importantly, it will no longer “treat content posted by politicians any differently from content posted by anyone else.”


“We allow certain content that is newsworthy or important to the public interest to remain on our platform even if it might otherwise violate our Community Standards,” Clegg wrote. “We may also limit other enforcement consequences, such as demotions, when it is in the public interest to do so. When making these determinations, however, we will remove content if the risk of harm outweighs the public interest.”

Facebook Wants Your Thoughts and Prayers


Photo: Justin Sullivan (Getty Images)

Facebook’s found a new way to capitalize on the thoughts, prayers, and data of the religious side of its user base. On Thursday, the company confirmed that it’s begun expanding a new feature called “prayer posts” that will let members of particular Facebook groups literally ask for (and offer up) prayers for other folks on the platform.


A Facebook spokesperson confirmed that the feature had been in testing for “over a year” before quietly rolling out to the masses over the past few months. Back in April, Robert Jones—who runs the Public Religion Research Institute in Washington, DC—was one of the first public faces to ask the company what the hell these posts actually were.

His question wasn’t picked up by mainstream outlets at the time, but more than a few religion-focused newswires jumped on the story and got Facebook to confirm that prayer posts were indeed being tested in a select few groups, though the company wouldn’t elaborate on which groups they were (hint: probably ones full of religious users).

At the time, Nona Jones—who has the baffling role of leading “Global Faith Partnerships” for the company—told one of these religious outlets that the idea for prayer posts stemmed from the need to “build community” with users over the course of the pandemic. It’s no coincidence that Jones was making this pitch in the lead-up to Easter, when churches were expecting attendance to be a fraction of what they’d see in the pre-covid era.

“During the COVID-19 pandemic we’ve seen many faith and spirituality communities using our services to connect, so we’re starting to explore new tools to support them,” a Facebook spokesperson told Gizmodo. He added that the feature first debuted in select groups in the US in order to “give people the option of requesting prayer from their Facebook Group” if they choose. The company did not answer questions on whether any of the data from these posts would be used to deliver targeted ads to users based on their group-praying habits.

When a group administrator opts into using the feature, members just drop prayer requests into the group, and then others can flock over and hit the “pray” button to notify the poster that their request has been prayed for. Sure, it’s… something, but like the majority of Facebook’s design, it’s a concept that feels cold and clinical and more than a bit bizarre. This is a company that puts profits before all else, always, and has found ways to turn even the smallest blip on the platform into data to be monetized. And if prayers are something deeply personal to the users posting them, at the end of the day, they’re still just points of data on a major social platform that has absolutely no scruples about turning over that data for as much cash as possible.


Facebook Moderator Says That ‘Wellness Coaches’ Advise Karaoke and Painting for Traumatized Workers


Photo: Drew Angerer (Getty Images)

The Irish Parliament today held a hearing on Facebook’s treatment of subcontracted content moderators—the thousands of people up to their eyeballs in toxic waste in the company basement. Moderators have repeatedly reported over the years that their contract companies hurl them into traumatizing work with little coaching or mental health support, in a system designed to keep them quiet.


During the hearing, 26-year-old content moderator Isabella Plunkett said that Facebook’s (or the outsourcing firm Covalen’s) mental health infrastructure is practically non-existent. “To help us cope, they offer ‘wellness coaches,’” Plunkett said. “These people mean well, but they’re not doctors. They suggest karaoke or painting – but you don’t always feel like singing, frankly, after you’ve seen someone battered to bits.” Plunkett added that she’d gotten a referral to the company doctor and never heard back about a follow-up. She also reported that moderators are told to limit exposure to child abuse and self-harm to two hours per day, “but that isn’t happening.”

Content moderation requires that workers internalize a torrent of horror. In 2017, a moderator told the Guardian:

There was literally nothing enjoyable about the job. You’d go into work at 9am every morning, turn on your computer and watch someone have their head cut off. Every day, every minute, that’s what you see. Heads being cut off.

Last year, Facebook paid an inconsequential $52 million to settle a class-action lawsuit filed by a group of moderators suffering from PTSD after being exposed to child sexual abuse material, bestiality, beheadings, suicide, rape, torture, and murder. According to a 2019 Verge report on Phoenix-based moderators, self-medicating drug use at work was common at the outsourcing firm Cognizant.

Anecdotally, moderators have repeatedly reported a steep turnover rate; a dozen moderators told the Wall Street Journal that their colleagues typically quit after a few months to a year.

Plunkett said that she was afraid to speak publicly, a common feeling among moderators. Foxglove, a non-profit advocacy group currently working to improve conditions for content moderators, said in a statement shared with Gizmodo that workers must sign NDAs they aren’t given copies of. In 2019, The Intercept reported that the outsourcing company Accenture pressured “wellness coaches” in Austin, Texas, to share details of their “trauma sessions” with moderators. The Verge also reported that Phoenix-based moderators constantly fear retribution by way of an Amazonian “point” system representing accuracy; employees can appeal demerits with Facebook, but their managers reportedly discouraged them from talking to the company, which sometimes reviewed their cases only after they’d lost their jobs.

Foxglove told Gizmodo that Irish moderators say the starting salary at Covalen is about €26,000-27,000, a little over $30,000, per year. Meanwhile, Facebook software engineers report on LinkedIn that their base salaries average $160,000 per year.


Facebook denied almost all of the above accounts in an email to Gizmodo. “Everyone who reviews content for Facebook goes through an in-depth training programme on our Community Standards and has access to psychological support to ensure their wellbeing,” a Facebook spokesperson said. “In Ireland, this includes 24/7 on-site support with trained practitioners, an on-call service, and access to private healthcare from the first day of employment.”

They also said that NDAs are necessary to protect users’ data, but it’s unclear why that would apply to speaking out about workplace conditions.


Covalen also denied Foxglove’s assertion that employees don’t receive copies of NDAs, saying that the confidentiality agreements are archived and that HR “is more than happy to provide them with a copy.” They also said that they’re promoting a “speaking up policy,” encouraging employees to “raise [concerns] through identified channels.” So they can “speak out,” but internally, in designated places. They didn’t identify what happens when a moderator speaks out, only that they’ve “actively listened.” Technically, a wellness coach telling you to go to karaoke is listening, but it’s not providing any practical aid for post-traumatic stress.

Covalen also said that their “wellness coaches” are “highly qualified professionals” with at minimum master’s degrees in psychology, counseling, or psychotherapy. But it added that employees get access to six free psychotherapy sessions, implying that the 24/7 on-site “wellness coach” sessions are not actually psychotherapy sessions. Gizmodo has asked Facebook and Covalen for more specificity and will update the post if we hear back.


Given the unfortunate reality that Facebook needs moderators, the company could most obviously improve wellness by easing the pounding exposure to PTSD-inducing imagery. A 2020 report from NYU Stern pointed out that 15,000 people moderate content for Facebook and Instagram, which is woefully inadequate to keep track of three million posts flagged by users and AI per day. (When asked, Facebook did not confirm its current moderator count to Gizmodo.) The report also cites Mark Zuckerberg’s 2018 statement on moderation, which put the number at two million flagged posts per day; even at that lower figure, splitting two million posts among 15,000 moderators means at least 133 pieces of content flash before each moderator’s eyes daily. According to The Verge, one moderator would review up to 400 pieces of content per day.

In her testimony, Foxglove co-founder and attorney Cori Crider pointed out that Facebook leans on moderators to keep the business running, yet treats them as “second-class citizens.” Crider urged Ireland’s Joint Committee on Enterprise, Trade, and Employment to regulate Facebook in order to end the culture of fear, bring contractors in-house, allow moderators to opt out of reviewing harmful content, enforce independent oversight of exposure limits, and offer actual psychiatric resources.


The committee offered their sympathies and well-placed disgust.

“I would never want my son or daughter to do this work,” Senator Paul Gavan said. “I can’t imagine how horrific it must be. I want to state for the record that what’s happening here is absolutely appalling. This is the dark underbelly of our shiny multi-national social media companies.”


“It’s incredibly tough to hear,” Senator Garret Ahearn said of Plunkett’s account. “I think, chair, it’s important that we do bring Facebook and these people in to be accountable for decisions that they make.”

We complain constantly that Facebook needs to do a better job of moderating. It also needs to do a better job of averting foreseeable calamity as it’s coming, rather than paying the lawyers and releasing the hounds later.


You can watch the full hearing here and Plunkett speak at a press conference here.

Oversight Board Finds Facebook Took the Coward’s Way Out With Trump Ban, Also Takes Coward’s Way Out


Photo: Jacquelyn Martin (AP)

America may have to wait another six months to find out whether Facebook has the capacity to make a good decision.


On Wednesday, Facebook’s Oversight Board announced that it would uphold the social media network’s decision to suspend Donald Trump from its main site and Instagram after he rallied his supporters to attack the Capitol on Jan. 6 in a failed effort to stop the certification of Joe Biden’s victory in the presidential race. It also ruled that when Facebook announced Trump would be banned “indefinitely” and punted the final decision on whether it would become permanent to the board, it was just arbitrarily making shit up.

“The Board has upheld Facebook’s decision on January 7, 2021, to restrict then-President Donald Trump’s access to posting content on his Facebook page and Instagram account,” the board wrote in its decision. “However, it was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension. Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account.”

“The Board found that the two posts by Mr. Trump on January 6 severely violated Facebook’s Community Standards and Instagram’s Community Guidelines,” the board continued. “‘We love you. You’re very special’ in the first post and ‘great patriots’ and ‘remember this day forever’ in the second post violated Facebook’s rules prohibiting praise or support of people engaged in violence. The Board found that, in maintaining an unfounded narrative of electoral fraud and persistent calls to action, Mr. Trump created an environment where a serious risk of violence was possible.”

“Given the seriousness of the violations and the ongoing risk of violence, Facebook was justified in suspending Mr. Trump’s accounts on January 6 and extending that suspension on January 7,” they added. “…[But] it is not permissible for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored… In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities.”

The board then told Facebook that within six months, it must revisit the matter and either make Trump’s ban permanent or release his account. The board wrote that the final determination “must be based on the gravity of the violation and the prospect of future harm” and “be consistent with Facebook’s rules for severe violations, which must, in turn, be clear, necessary and proportionate.”



The Oversight Board, depending on who one asks, is either an independent body made up of academics, lawyers, politicians, and free speech activists with the ability to review and overrule virtually any of Facebook’s moderation decisions, including when Facebook rules in favor of the person who posted the contested content—or it’s an exercise in thinly veiled corporate obfuscation designed to add a patina of legitimacy to the company’s decisions. (Don’t blame us; blame the Oversight Board!) The board issued its first set of rulings in January, but it took months to reach a conclusion as to whether our former carbuncle-in-chief’s use of Facebook to try and incite a coup, albeit a really shitty one, violated the site’s rules, despite it clearly having done so.

Cori Crider, a director at Foxglove, a non-profit that works with Facebook content moderators around the world, told Gizmodo via email that the Oversight Board served to distract from Facebook’s broken moderation system and the grueling conditions that the army of contractors who man it labors under. Crider said absorbing the real cost of providing adequate “staff, pay, and mental health support” to the people who keep Facebook running would be transformative for the company, which is why it has done everything in its power to avoid doing so.


“Facebook is desperately hoping we’ll all pay attention to its shiny Oversight Board and ignore the real issue–content moderation on Facebook is totally broken,” Crider wrote. “It’s mostly done not by this Board but in digital sweatshops, and they don’t want to spend the money to fix it.”

“Today’s decision about Donald Trump is just one of thousands of similar decisions that get made every day with far less fanfare by underpaid, outsourced content moderators,” Crider wrote. “But instead of a plummy title and a six-figure stipend, the real content moderators are kept in working conditions that give lots of them PTSD. Facebook refuses to hire them, even though they’re the very heart of the business. And this lackadaisical approach to industrial-scale content moderation hasn’t been remotely enough to stop Facebook being a river of hate, lies, and violence.”


“… Moderators have real insight into the spread of lies and violence on Facebook, but when they try to suggest changes or report issues up, it’s like talking to a brick wall. Zuck ought to listen to their views,” Crider added. “I’d also invite all the Oversight Board members, if they’re really concerned about the health of the global public square, to sit in a Facebook moderator’s shoes for a week and grapple with the violence and hate and child abuse themselves. It would open their eyes to what Facebook really is – and lead to them calling for their colleagues to be given a fairer shake.”

Trump was always more concerned with the ban on @realDonaldTrump, his now-defunct Twitter account where he could more directly influence or at least try to piss off the droves of media and political elites on the platform. His account peaked at nearly 89 million followers before the kill date in January. Trump went so far as to continue posting via a series of alts including his campaign account, which was banned too. He’s seemed less eager about Facebook, where he has just shy of 33 million followers and an additional 24 million on Instagram, despite the site fueling a vast, servile right-wing digital media ecosystem that relentlessly promoted his presidency, filled his coffers, and ginned up the manpower for the Jan. 6 riot. He did briefly attempt to evade the ban by livestreaming via daughter-in-law Lara Trump’s account, though that attempt was aborted after Facebook warned workarounds wouldn’t pass scrutiny.


While Wednesday’s decision does uphold Facebook’s original decision to suspend access to Trump’s account, it could also be read as giving Facebook up to another six months to make up its mind after seeing which way the winds are blowing. Conservatives went into virtual apoplexy when Trump got banned from both sites, neutering his social media presence overnight. They’re bound to be just as displeased about the Oversight Board’s decision, which puts a capstone on four years of Facebook and its CEO Mark Zuckerberg bending over backwards to please Republican politicians and pundits who went on to vitriolically criticize the site anyhow.

Now will start another era of Facebook doing exactly that, just in a slightly different manner with vaguely different rhetoric and with or without Trump. The system works!


This is a breaking news story and will be updated as more information becomes available.

Facebook Is Officially Beta Testing Hotline, a Clubhouse-Inspired Audio Q&A Feature


Photo: LOIC VENANCE / Contributor (Getty Images)

Facebook on Wednesday ran its first public beta test of Hotline — a web-based Q&A platform that seems like it was dreamed up as Facebook’s answer to the current voice chat app craze.


More specifically, Hotline is designed to function as a sort of love child between Instagram Live and Clubhouse, TechCrunch reports: Creators will address an audience of users, who will then be able to respond by asking questions with either text or audio. Unlike Clubhouse — which is strictly an audio-only platform — Hotline users will have the option to turn their cameras on during events, adding a visual element to an otherwise voice-dominated experience.

Hotline is currently being developed by Facebook’s NPE Team, which handles experimental app development within the company, and is led by Eric Hazzard, who created tbh, the positivity-focused Q&A app that Facebook acquired in 2017.

A public livestream of the app’s functionality on Wednesday was led by real estate investor Nick Huber, who spoke about industrial real estate as a second income stream — which should give you a pretty good idea about exactly what type of creators Hotline will be attempting to net once it’s live. Close observers of the stream will have noticed that Hotline’s interface closely resembles Clubhouse’s, in that the speaker’s icon is situated atop or astride an “audience,” which is populated by listeners whose profiles appear below the livestream (on the desktop version, the audience is off to the side).

Where the app differs from Clubhouse is in its functionality for “audience” members, who will see the questions they ask appear in a list at the top of the stream which other users can then choose to upvote or downvote. The creator will also have the option to pull listeners onto the “stage” area to join them in a back and forth, which will be something closer to Zoom in nature than its audio-only forebears.

In a statement on Wednesday, Facebook declined to offer specific details about a launch date for Hotline, but said that developers have been encouraged to see how new multimedia features and formats “continue to help people connect and build community.”

“With Hotline, we’re hoping to understand how interactive, live multimedia Q&As can help people learn from experts in areas like professional skills, just as it helps those experts build their businesses,” a Facebook spokesperson said.


How to Check if Your Phone Number Is in the Huge Facebook Data Leak


Photo: Chip Somodevilla (Getty Images)

Hacked data on over 533 million Facebook users was leaked online over the weekend, including names, birthdates, relationship statuses, locations, workplaces, and sometimes even email addresses. But the most sensitive data included in the leak is arguably the phone numbers, which are often used for two-factor authentication. And now there’s a way to easily check if your phone number is in the leak—at least if you live in the U.S.


The website The News Each Day has a simple tool where you can input your phone number and see if it’s in the leak. Gizmodo tested the tool against some data from the actual Facebook leak and found it to be accurate. For example, we tested Mark Zuckerberg’s phone number, which is included in the leak. It worked. (We assume Zuck has changed his phone number by now.)

If you’re willing to risk handing over your phone number, all you need to do is input it without any hyphens or periods. You also need to include the international country code at the beginning. For example, if you’re used to seeing your phone number in this form, 555-212-0000, you should get rid of the hyphens and add the digit “1” in front.

Using the same fake number above, the number you input should look like this: 15552120000. If you enter anything other than that bare string of digits, the tool will falsely tell you that your number is not included in the leak. In reality, it very well could be.
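If it helps to see the rule spelled out, here’s a minimal Python sketch of that normalization for US numbers. The function name and the sample formats are our own for illustration; the tool itself just wants the finished string of digits.

```python
import re

def normalize_us_number(raw: str) -> str:
    """Reduce a US phone number to the digits-only form the lookup
    tool expects: the country code, then the number, with no hyphens,
    periods, spaces, or parentheses."""
    digits = re.sub(r"\D", "", raw)  # drop everything that isn't a digit
    if len(digits) == 10:            # bare US number: prepend country code "1"
        digits = "1" + digits
    return digits

# The same fake number from above, written a few common ways:
for raw in ("555-212-0000", "(555) 212-0000", "1.555.212.0000"):
    print(normalize_us_number(raw))  # prints 15552120000 every time
```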

Screenshot: The News Each Day

Of course, it’s totally understandable if you don’t want to enter your phone number into some website you’ve never heard of. (There’s very little available information about The News Each Day, and its privacy policy says that all it tracks is clicks through Google Analytics.) So, if you want to check if the Facebook data dump affected you using a better-known tool, HaveIBeenPwned has updated its database to include the Facebook leak. Just head to the site and enter the email address associated with your Facebook account, and you can see if your personal information was compromised. You’ll also be able to see if your email address was included in any other breaches included in HaveIBeenPwned’s database.
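If you’d rather script the check than paste your address into a website, HaveIBeenPwned also exposes a REST API. Here’s a minimal sketch using Python’s requests library; note that breach-by-account lookups require an API key from the service, and the key and email below are placeholders.

```python
import requests

API_KEY = "YOUR_HIBP_API_KEY"  # placeholder: get a key from haveibeenpwned.com
EMAIL = "you@example.com"      # placeholder: the address tied to your Facebook account

resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{EMAIL}",
    headers={
        "hibp-api-key": API_KEY,
        "user-agent": "breach-check-script",  # HIBP rejects requests without a user agent
    },
    params={"truncateResponse": "false"},  # return full breach details, not just names
)

if resp.status_code == 404:
    print("No breaches found for this address.")
elif resp.ok:
    for breach in resp.json():  # one entry per breach the address appears in
        print(breach["Name"], breach["BreachDate"])
else:
    resp.raise_for_status()
```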

Facebook hasn’t said much about the leak, except that the info was hacked in 2019. The data was offered in hacking forums for a price two years ago, but the thing that makes this weekend’s leak different is that the data has now been leaked for free. Anyone can find the 16GB of data with just a simple Google search.


Update 12pm ET, April 5: HaveIBeenPwned has updated its database to include the Facebook leak. Use that tool if you’re sketched out by The News Each Day tool.

This Vintage Bread Slicer Wouldn’t Meet Today’s Safety Standards But It’s Fun to Watch Being Restored

Gif: YouTube

Bread slicers weren’t exactly the safest things around in the middle of the 20th century. That open blade is just begging for someone’s hand to slip. But OSHA issues aside, it’s fascinating to watch a simple, beautiful machine like this get restored.


The 16-minute video, first spotted by Digg, was produced for YouTube by LADB Restoration, which is dedicated to restoring old machines.

LADB has videos restoring everything from vintage tobacco cutters to antique grindstones. They’ve even restored an old blowtorch. But this latest video might be one of the best yet, if only because it’s so relaxing.

The video is so calming—wordless and focused on nothing more than the task at hand—that it’s almost therapeutic.

Gif: YouTube

LADB Restoration has a Patreon with longer videos, though we admit we’re not yet paying subscribers. But given how addictive simply watching machines get restored can be, that might have to change in the near future.

Health Official in Papua New Guinea Explains How Facebook Spreads Misinformation

Papua New Guinea’s Prime Minister James Marape (R) preparing to receive a dose of the AstraZeneca Covid-19 vaccine in Port Moresby on March 30, 2021.
Photo: Gorethy Kenneth/AFP (Getty Images)

Papua New Guinea’s health minister, Jelta Wong, spoke with an Australian think tank over videochat on Thursday about challenges PNG faces in combatting the covid-19 pandemic. PNG is seeing a resurgence of the disease along with a growing wave of misinformation being spread online about vaccines, and the talk is an interesting look at how Facebook has harmed public health in a relatively small country of just 9 million people.


“When Facebook hit Papua New Guinea, everybody became an expert. Everybody became a PhD,” Wong explained sarcastically to a member of the Lowy Institute in a video that’s available on YouTube.

Wong, who says he uses Facebook to connect with family, went on to explain that while some people might be skeptical when they encounter misinformation on Facebook, there’s always a handful who will believe the weirdest conspiracy theories.

“There’s always one person or two or three or four people that will believe what they say. And that is our biggest challenge when people tell us that Bill Gates is behind all this. How could we say Bill Gates is behind it?” Wong said, referring to conspiracy theories that the Microsoft cofounder is implanting microchips through coronavirus vaccinations, among other things.

“One of the biggest philanthropists in the world…” Wong continued about Gates. “And then some nutcase turns around and puts it on Facebook that he’s the guy that started the collecting [illicit information] and then it just generates through Facebook. I think Facebook is our biggest conspiracy theorist platform.”

“This is dangerous, very dangerous,” Wong said. “And this is the type of thing… like, we have a million more people in our country that just sit on Facebook because it’s cheap, it’s easy, and they can get their opinion out. That’s all it is for.”

Papua New Guinea struggled with Facebook long before the pandemic hit, with government officials even floating the idea in 2018 of banning the site and launching a state-run alternative. And while only a minority of people in Papua New Guinea are on the platform, it’s still clearly contributing to headaches for health officials who want to see people vaccinated.

Facebook’s moderation policies receive a lot of attention in the U.S. and Europe, but many parts of the globe don’t get the same resources, especially in places that don’t speak English. PNG has a large English-speaking population, thanks in large part to colonialism, but it also has over 830 languages currently in use, a high number for a relatively small population. Facebook has moderators working in roughly 50 languages, according to its own statistics.


“I think Facebook has a lot of influence here and they need to be held responsible for some of the information that they [distribute],” Wong said.

“Most of it, if I take you through Facebook now… some of the stuff that is unbelievably not true. And they still push it out and they’re […] supposed to have a program to stop these type of things.”
