A Pared-Down Version of Facebook’s Libra Project Could Launch as Soon as January

Photo: Josh Edelson (Getty Images)

Facebook’s long-anticipated cryptocurrency venture, Libra, could go live as early as January, three people involved with the matter told the Financial Times this week. But don’t get too excited just yet: Thanks to a torrent of regulatory scrutiny and many of its chief investors bailing on the project, its scope has been significantly scaled back.

The Facebook-led Libra Association now intends to launch just “a single coin backed one-for-one by the dollar,” with plans to roll out additional currencies and a “digital composite” of all its coins at an unspecified future date, one source told the outlet. It’s a far cry from Libra’s original sales pitch back in June 2019 to “reinvent money” and “transform the global economy” by leveraging Facebook’s billions of users to scale its global, blockchain-based payment network.

But that was in the before times. Before the covid-19 pandemic wreaked havoc on the global economy, for one, and before a slew of troubles plagued the project. Seven of the Libra Association’s high-profile members, including PayPal, Stripe, Visa, eBay, and Mastercard, have pulled out of the project since then. Their exodus came after financial regulators in the U.S., India, China, the European Union, and elsewhere publicly opposed Libra and the so-called “crypto mafia” behind it and voiced concerns that the cryptocurrency network could threaten monetary stability or be flooded with money laundering schemes. Officials also worried that Facebook, a company that’s no stranger to screw-ups for the history books, was attempting to circumvent their control.

Facebook also rebranded the project’s digital wallet, which it owns outright, from Calibra to Novi after the former’s logo became the subject of a trademark infringement lawsuit.

Libra’s official launch date remains in limbo for now, but could come as early as January pending approval to operate as a payment service from the Swiss Financial Market Supervisory Authority, sources told the Financial Times on the condition of anonymity. As for Novi, one worker familiar with the matter said that the wallet was “ready from a product perspective,” but Facebook is holding back on its launch and instead focusing on “half a dozen high-volume remittance corridors,” including ones between the U.S. and several Latin American countries.

[Financial Times]

Facebook Experiments With Being Less Awful, Says Not to Get Used to It or Anything

Photo: Bill Clark-Pool (Getty Images)

Has Facebook learned jack shit from the past few nightmare years? Not really, per a report in the New York Times on Tuesday. Facebook only started giving more weight to reputable publishers in the News Feed days after the 2020 election and doesn’t plan on making that a long-term thing. Executives on its policy team also blocked or sought to water down changes that would limit content the company defined as “bad for the world” or “hate bait,” and they shot down a feature that would warn users if they fell for hoaxes.

According to the Times, CEO Mark Zuckerberg agreed days after the election to tweak the Facebook news feed to emphasize “news ecosystem quality” (NEQ), a “secret internal ranking it assigns to news publishers based on signals about the quality of their journalism,” because of rampant misinformation spread by Trump and his conservative allies over the election’s results. The Times wrote:

Typically, N.E.Q. scores play a minor role in determining what appears on users’ feeds. But several days after the election, Mr. Zuckerberg agreed to increase the weight that Facebook’s algorithm gave to N.E.Q. scores to make sure authoritative news appeared more prominently, said three people with knowledge of the decision, who were not authorized to discuss internal deliberations.

The change was part of the “break glass” plans Facebook had spent months developing for the aftermath of a contested election. It resulted in a spike in visibility for big, mainstream publishers like CNN, The New York Times and NPR, while posts from highly engaged hyperpartisan pages, such as Breitbart and Occupy Democrats, became less visible, the employees said.
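As described, the intervention amounts to adding a publisher-quality term to the feed’s scoring function and temporarily cranking up its coefficient. Here’s a minimal sketch of that idea in Python; every field name, weight, and number is hypothetical, since Facebook’s actual ranking system isn’t public:

```python
# Hypothetical sketch of weighting a publisher-quality signal (NEQ)
# in a feed-ranking score. Names and numbers are invented for
# illustration; Facebook's real system is not public.

def rank_posts(posts, neq_weight=0.1):
    """Order posts by engagement plus a weighted publisher-quality score."""
    def score(post):
        return post["engagement"] + neq_weight * post["publisher_neq"]
    return sorted(posts, key=score, reverse=True)

posts = [
    {"name": "hyperpartisan page", "engagement": 9.0, "publisher_neq": 2.0},
    {"name": "mainstream outlet",  "engagement": 6.0, "publisher_neq": 9.0},
]

# Normal conditions: engagement dominates the ordering.
print([p["name"] for p in rank_posts(posts, neq_weight=0.1)])
# "Break glass" conditions: the quality signal dominates instead.
print([p["name"] for p in rank_posts(posts, neq_weight=1.0)])
```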

Facebook had allegedly been weighing similar options to slow down the flow of misinformation in the event of a contested election—such as a pilot program to test something resembling a “virality circuit breaker,” which automatically stops promoting posts that go explosively viral until fact-checkers can review them.
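Conceptually, a circuit breaker like that just compares a post’s current share velocity to its baseline and pauses amplification past a threshold until a human weighs in. A rough sketch, with made-up thresholds and field names:

```python
# Hypothetical "virality circuit breaker": stop amplifying (without
# deleting) any unchecked post whose sharing runs far ahead of its
# baseline. Thresholds and field names are invented for illustration.

def breaker_tripped(shares_last_hour, baseline_per_hour,
                    spike_factor=10, min_shares=1_000):
    """Trip on a large absolute volume that's also a sharp spike."""
    return (shares_last_hour >= min_shares
            and shares_last_hour > spike_factor * baseline_per_hour)

def distribution_decision(post):
    if post["fact_checked"]:
        return "promote"
    if breaker_tripped(post["shares_last_hour"], post["baseline_per_hour"]):
        return "hold for fact-check"  # still visible, no longer amplified
    return "promote"

print(distribution_decision({
    "fact_checked": False,
    "shares_last_hour": 50_000,
    "baseline_per_hour": 400,
}))  # -> hold for fact-check
```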

Report after report emphasized that Facebook remained a massive vector for the spread of right-wing disinformation efforts going into the elections, in part because it was fearful of upsetting Republicans convinced social media firms are secretly censoring them. Pro-Trump conspiracy theories alleging Democrats were preparing to win the election by fraud flourished with little intervention. So it’s rather convenient that Facebook only decided to weight NEQ more heavily in the news feed when it became clear Trump had lost.

The break-the-glass strategy wasn’t activated in the weeks or months prior to Nov. 3, when conservative media was promoting wild predictions of a rigged election. The platform’s useless warning labels failed to prevent post-election claims of mass voter fraud from the president and GOP-aligned media personalities from going viral. Nor did Facebook ever have a “plan to make these [NEQ changes] permanent,” Facebook integrity division chief Guy Rosen told the Times. That’s despite employees reportedly asking at company meetings whether the company could just leave the NEQ weights in place to improve the news feed somewhat.

According to the Times, Facebook internally released the results of a test this month called “P(Bad for the World),” in which it tried reducing the reach of posts users dubbed “bad for the world.” After it found a stricter approach decreased total user sessions as well as time spent on the site, it rolled out a less aggressive version that didn’t impact those metrics as much. To put it another way: Facebook knows being “bad for the world” in moderation is good for business.
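In other words, the test reads like a classifier score being used to demote posts, with the demotion strength dialed down until the engagement metrics stopped suffering. A toy version of that trade-off, with invented numbers:

```python
# Toy model of demoting posts by a predicted probability of being
# "bad for the world." The function, weights, and cutoff are invented;
# only the shape of the trade-off comes from the Times' reporting.

def demotion_multiplier(p_bad, aggressive=False):
    """Scale a post's ranking score down as P(bad) rises; the
    aggressive variant demotes harder (and, per the Times, also
    cost Facebook sessions and time on site)."""
    strength = 0.9 if aggressive else 0.5
    return 1.0 - strength * p_bad

base_score, p_bad = 100.0, 0.8
print(base_score * demotion_multiplier(p_bad, aggressive=True))   # 28.0
print(base_score * demotion_multiplier(p_bad, aggressive=False))  # 60.0
```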

Sources told the paper that before the election, executives on its policy team vetoed a “correct the record” feature that would direct users who engaged with or shared hoaxes to a fact-checking page and prevented an anti-“hate bait” feature from being enabled on Facebook Pages—instead limiting it to Groups. In both cases, the executives claimed that the changes might anger conservative publishers and politicians. (Rosen denied to the Times that the decisions were made on political grounds.)

Trump and the GOP’s threats to punish social media sites for alleged liberal bias are dead in the water, and Facebook is likely to shift with the political winds in the coming months. But if its history is any indication, Facebook will continue playing a shell game of promising to rein in toxicity while actively encouraging it.

“The question is, what have they learned from this election that should inform their policies in the future,” Vanita Gupta, CEO of the Leadership Conference on Civil and Human Rights, told the Times. “My worry is that they’ll revert all of these changes despite the fact that the conditions that brought them forward are still with us.”

It’s not clear how much increasing NEQ’s clout in News Feed rankings has affected the number of times users log in or how long they spend on the site once there. Facebook’s News Feed lead, John Hegeman, told the paper the company would study any potential impact, though, like Rosen, he indicated the changes are temporary.

Apple Defends Delay of iOS 14 Feature Limiting App Tracking, Blasts Facebook

Photo: Ming Yeung (Getty Images)

Earlier this year, human rights and privacy groups including the Electronic Frontier Foundation and Human Rights Watch wrote to Apple, asking why it was delaying the introduction of a feature that would force apps to receive explicit opt-in from iPhone users before tracking them. Apple responded, according to Bloomberg, with a letter slamming Facebook.

Apple rolled out the privacy-enhancing feature in iOS 14 in September but hasn’t yet made it mandatory for developers to enable. The groups’ letter to the tech giant argued the delay was ill-advised in the “critical weeks leading up to and following the 2020 U.S. elections, when people’s data can be used to target them with personalized political ads.”

In the letter, Apple’s global head of privacy, Jane Horvath, responded to the groups by trashing Facebook and its business model.

“Too often, information is collected about you on an app or website owned by one company and combined with information collected separately by other companies for targeted advertisements and advertising measurement,” Apple wrote. “Sometimes your data is even aggregated and resold by data brokers, which are third parties you neither know nor interact with. Tracking can be invasive, even creepy, and more often than not it takes place without meaningful user awareness or consent.”

Apple touted the App Tracking Transparency (ATT) feature as part of its overall commitment to privacy, specifically naming Facebook:

By contrast, Facebook and others have a very different approach to targeting. Not only do they allow the grouping of users into smaller segments, they use detailed data about online browsing activity to target ads. Facebook executives have made clear their intent is to collect as much data as possible across both first and third party products to develop and monetize detailed profiles of their users, and this disregard for user privacy continues to expand to include more of their products.

Horvath also said that Facebook could have been partially responsible for the delay in rolling out ATT, telling Bloomberg the expanded timeline would “give developers the time they indicated they needed to properly update their systems and data practices.”

The ATT feature is a major change for app developers and requires users to verify they want to share usage data. It also restricts apps’ access to several unique identifiers that can be used to follow a user around, affecting their ability to monitor post-install actions, target ads, or build models of user behavior. VentureBeat wrote in September that while “10 to 30% of iOS users currently limit ad personalization, and as many as 15% currently use limited ad tracking to disable their [Identifier for Advertisers],” up to 80% are expected to hit “no” when given the ATT prompt.
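The practical effect for developers is simple to model: until the user explicitly opts in, the advertising identifier an app sees is useless. Here’s a toy model of that gating in Python—Apple’s real API is the AppTrackingTransparency framework on iOS, not this—to show why opt-in rates matter so much to ad-funded apps:

```python
# Toy model of ATT's opt-in gate; this is not Apple's API (that is
# the AppTrackingTransparency framework, used from Swift/Objective-C).
# On iOS, an app that lacks tracking permission gets a zeroed-out
# advertising identifier (IDFA), which is useless for tracking.

ZEROED_IDFA = "00000000-0000-0000-0000-000000000000"

def visible_identifier(att_status: str, device_idfa: str) -> str:
    """Apps see the real identifier only after an explicit opt-in."""
    return device_idfa if att_status == "authorized" else ZEROED_IDFA

# Hypothetical device identifier for illustration.
print(visible_identifier("denied", "1FAC-EB00-EXAMPLE"))
# -> 00000000-0000-0000-0000-000000000000
```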

Facebook shot back in a statement to Ars Technica and other outlets that this isn’t about privacy at all—it’s about locking down iOS with anticompetitive tactics to give Apple’s in-house offerings an unfair advantage. It has a point, as Apple is currently facing antitrust complaints from the ad industry over the iOS 14 update, as well as from companies including Spotify, Telegram, and Epic Games. The House Judiciary antitrust subcommittee recently found that Apple’s requirement that app developers use its payment platform is anticompetitive, as is the way it restricts APIs, modifies search rankings, and sets default apps. The Department of Justice antitrust division is reportedly looking into Apple along with other big tech firms, including Google, though details about the Apple probe remain vague.

“The truth is Apple has expanded its business into advertising and through its upcoming iOS 14 changes is trying to move the free internet into paid apps and services where they profit,” Facebook told Ars Technica. “As a result, they are using their dominant market position to self-preference their own data collection while making it nearly impossible for their competitors to use the same data. They claim it’s about privacy, but it’s about profit.”

This feud has been going on for a while. In August, Facebook warned that the iOS update could lower publisher revenue via its Audience Network by up to 50%. That month, Facebook also said that it was forced to strip down the version of Facebook Gaming available via the App Store due to TOS restrictions.

“Unfortunately, we had to remove gameplay functionality entirely in order to get Apple’s approval on the standalone Facebook Gaming app—meaning iOS users have an inferior experience to those using Android,” chief operating officer Sheryl Sandberg wrote in a statement.

Facebook Reportedly Faces Ban in Vietnam Over Refusal to Censor More Local Political Content

Nguyen Quoc Duc Vuong, sentenced in July 2020 to eight years in prison over pro-democracy and anti-government Facebook posts in Vietnam.
Photo: Vietnam News Agency/AFP (Getty Images)

The government of Vietnam is threatening to shut down Facebook in the country entirely over the social media company’s refusal to censor more local political content, according to a new report from Reuters. Facebook had previously censored some political speech on the platform to appease the Vietnamese government, but apparently, that wasn’t enough.

Back in 2017, the government of Vietnam ordered Facebook to censor some so-called “anti-state” political posts on a platform with 60 million users in the country, something the social media company agreed to do. Facebook even set up a special “online channel” that allows government figures to flag content they dislike.

Individual anti-government users in the country have also been banned, drawing the ire of human rights groups around the world, and one man, Nguyen Quoc Duc Vuong, was recently sentenced to eight years in prison for “humiliating” the country’s leaders. Nguyen published pro-democracy material and other content to which the Vietnam Communist Party objected on Facebook, according to Human Rights Watch. Vietnamese authorities reportedly want even stricter action against activists in the country, and they’re willing to shut the platform down entirely to make that happen.

Facebook did not explicitly confirm the Reuters report to Gizmodo on Friday morning, but it did hint at problems the company is facing in Vietnam, a market worth roughly $1 billion in revenue to the social media giant, according to reports.

“Millions of people in Vietnam use our services every day to connect with family and friends and thousands of businesses rely on them to connect with customers,” a spokesperson for Facebook based in Singapore said via email.

Facebook does not have any full-time employees in Vietnam.

“We don’t always see eye to eye with governments in countries where we operate, including Vietnam,” the spokesperson continued. “Over the past few months, we’ve faced additional pressure from the government to restrict more content, however, we will do everything we can to ensure that our services remain available so people can continue to express themselves.”

The Vietnamese government did not respond to a request for comment early Friday morning.

Where does that leave Zuck and his army of users in Vietnam? If the past is any guide, Facebook will likely concede to the government’s wishes to maintain a presence in the country. And maybe Dipayan Ghosh, a former public policy advisor at Facebook, said it best when he talked with the Los Angeles Times last month.

“The thought process for the company is not about maintaining service for free speech,” Ghosh said. “It’s about maintaining service for the revenue.”

Facebook Sues Operator Who Reportedly Scraped 100,000 Instagram Accounts for Clone Sites

Photo: Lionel Bonaventure (Getty Images)

Facebook filed a lawsuit on Thursday against a website owner who allegedly operated a network of Instagram clone sites using information from more than 100,000 public profiles. This complaint marks the social media giant’s latest crackdown on organizations both large and small for violating its terms of service.

According to Facebook, Ensar Sahinturk, a Turkish national, used automation software to scrape profiles, photos, and videos from over 100,000 Instagram accounts without permission. He then allegedly published this data on his network of clone websites, many of which had names similar to Instagram’s. Facebook said it became aware of the network in November 2019, and at least one of Sahinturk’s websites began operating as far back as August 2017. In a statement to TechCrunch, a company spokesperson said the network had “voluminous traffic” but did not disclose specific metrics concerning the extent of its reach.

In a company blog post announcing the suit, Jessica Romero, Facebook’s director of platform enforcement and litigation, said that Facebook had previously issued Sahinturk cease and desist letters and disabled his accounts on Facebook and Instagram. Now the company seeks to “obtain a permanent injunction” against him.

“Data scraping undermines people’s privacy and ability to control their information, and is prohibited by our Terms,” Romero said. “This case is the latest example of our actions to disrupt those who scrape user data as part of our ongoing commitment to protect our community, enforce our policies and hold people accountable for abusing our services.”

Facebook has been steadily churning out lawsuits in an aggressive campaign against developers and organizations that misuse its platform. Last month, Facebook filed two lawsuits targeting companies caught selling likes and followers on Instagram. A Russia-based developer was hit with a suit in August for purportedly running a network of businesses similarly dealing in fake engagement on the platform. It’s apparently a lucrative line of work, which explains why so many fraudulent campaigns keep cropping up. A ring in New Zealand allegedly made more than $9 million peddling artificial engagement services before Facebook hit it with a lawsuit last year.

Facebook’s Contractors Say the Company Is Risking Their Lives to Turn a Pandemic Profit

Photo: Daniel Leal-Olivas (Getty Images)

In May of 2020, with the COVID-19 pandemic fully underway, Facebook CEO Mark Zuckerberg announced that the company would allow most of its employees to continue working from home until at least the end of the year in an effort to “contain the spread of Covid-19 so we can keep our communities safe and get back up and running again soon.”

But that same luxury was apparently not extended to the thousands-strong fleet of contract workers Facebook employs to moderate harmful content on the platform, and on Wednesday, 200 of those workers sent an open letter to top executives at the company objecting to the way they’ve been treated.

Addressed to Zuckerberg and Facebook chief operating officer Sheryl Sandberg, as well as the CEOs of outsourcing companies Accenture and Covalen, the letter accuses Facebook of compromising the health and safety of its contractors and their loved ones in order to maintain “Facebook’s profits during the pandemic.”

“After months of allowing content moderators to work from home, faced with intense pressure to keep Facebook free of hate and disinformation, you have forced us back to the office,” the letter reads. “Moderators who secure a doctor’s note about a personal COVID risk have been excused from attending in person. Moderators with vulnerable relatives, who might die were they to contract COVID from us, have not.”

As it is, there have already been COVID outbreaks in several of Facebook’s offices, with workers in Ireland, Germany, Poland and the United States testing positive for the virus.

And according to the letter-writers, the threat of infection comes on top of an already psychologically punishing workload—one that led Facebook to pay out some $52 million to US-based contractors last spring for trauma they endured on the job:

Before the pandemic, content moderation was easily Facebook’s most brutal job. We waded through violence and child abuse for hours on end. Moderators working on child abuse content had targets increased during the pandemic, with no additional support.

Now, on top of work that is psychologically toxic, holding onto the job means walking into a hot zone. In several offices, multiple COVID cases have occurred on the floor. Workers have asked Facebook leadership, and the leadership of your outsourcing firms like Accenture and CPL, to take urgent steps to protect us and value our work. You refused. We are publishing this letter because we are left with no choice. 

Along with the ability to work from home, the letter’s signatories also list a number of other demands, including an option to receive hazard pay and a call to end outsourcing and bring the content moderators in-house so that they can receive healthcare and other benefits.

The letter prompted a response from Facebook, with the company quick to argue that the majority of the 15,000 content reviewers it employs have continued to work from home during the pandemic and reiterating that it does make “well-being resources” available to its workers.

“While we believe in having an open internal dialogue, these discussions need to be honest,” Facebook spokesperson Toby Partlett told the New York Times in a statement. “Facebook has exceeded health guidance on keeping facilities safe for any in-office work.”

Facebook’s Content Moderators Have Had Enough

Photo: Joel Saget (Getty Images)

In spite of coronavirus cases continuing to climb around the world, Facebook’s legions of contracted content moderators are still required to work out of offices “to maintain Facebook’s profits during the pandemic.” This is according to an open letter published on the company’s internal Workplace communication software, signed by more than 200 content moderators today.

“Workers have asked Facebook leadership, and the leadership of your outsourcing firms like Accenture and CPL, to take urgent steps to protect us and value our work,” the letter reads. “You refused. We are publishing this letter because we are left with no choice.”

Back in March, Zuckerberg told reporters that the bulk of this workforce would be allowed to work from home until the “public health response has been sufficient.” Apparently, that bar was cleared in mid-October, when Facebook told content moderation teams that they were required to work from their offices again. The Intercept later reported that one contractor working out of an Accenture-owned Facebook facility in Austin, Texas became symptomatic just two days after returning to work, and got a positive test back some days later. Foxglove, the law firm representing these contractors, added in a statement to the New York Times that additional contractors based out of Ireland, Germany, and Poland have tested positive for covid-19.

Naturally, the letter makes some requests for repairing the obviously strained relationship between these contractors and management.

First, they ask that all content moderators who are in a high-risk group—or live with someone who is—be allowed to work from home indefinitely, and that, regardless of health status, “work that can be done from home should continue to be done from home.” Right now, the only moderators granted this privilege are those who bring in a doctor’s note proving that they’re at high risk. Even then, the letter claims, this option isn’t offered in some workplaces.

In the call where Zuckerberg originally granted remote work for his company’s contractors, he mentioned that some of the more sensitive—or potentially illegal—subjects like child abuse would be best dealt with in-office for security purposes. The letter agrees that while criminal content should be handled onsite, there’s no reason that the rest of the content moderation team should be roped into doing the same.

The letter also asks that moderators who work on these sorts of high-risk posts be paid hazard pay—1.5 times their usual wage—and that all content moderators be offered “real” health and psychiatric care for the work that they do. Moderators, they argue, deserve “at least” as much mental and physical support as Facebook’s salaried staff.

Perhaps the boldest of the letter’s demands asks Facebook to stop outsourcing its moderation altogether. “There is, if anything, more clamor than ever for aggressive content moderation at Facebook. This requires our work,” the petitioners write. “Facebook should bring the content moderation workforce in house, giving us the same rights and benefits as full Facebook staff.”

As leverage, the contractors point out that Facebook’s prior attempts to quietly moderate its platform with AI-based solutions were abject failures. “Important speech got swept into the maw of the Facebook filter—and risky content, like self-harm, stayed up,” write the moderators, whom this AI was not equipped to displace. “Facebook’s algorithms are years away from achieving the necessary level of sophistication to moderate content automatically. They may never get there.”

It’s worth mentioning that these contractors hardly had it easy in the pre-pandemic era. Reports have described the long hours, low paychecks, and psychological trauma involved in what the letter rightfully calls one of Facebook’s most “brutal” jobs. As the letter explains, the workers monitoring Facebook for child abuse specifically had their workload upped during the pandemic but were given “no additional support” to help them cope.

Facebook currently boasts a moderation team of around 35,000 people, globally. It’s unclear how many of those Foxglove intends to represent or in what capacity.

We’ve reached out to Facebook and Foxglove for comment and will update if we hear back.

Facebook Knows That Labeling Trump’s Election Lies Hasn’t Stopped His Posts From Going Viral

Photo: Chip Somodevilla (Getty Images)

Facebook’s attempt to slow the spread of President Trump’s misinformation and outright lies by affixing warning labels to the content has done little to stop the posts from going viral—and the platform is apparently well aware.

According to internal conversations reviewed by Buzzfeed News, data scientists in the employ of Facebook freely admit that the new labels being attached to misleading or false posts as part of a broader strategy to stop the spread of election-related misinformation—referred to internally as “informs”—have had little to no impact on how posts are being shared, whether they’re coming from Trump or anyone else.

“We have evidence that applying these informs to posts decreases their reshares by ~8%,” the data scientists said, according to Buzzfeed. “However given that Trump has SO many shares on any given post, the decrease is not going to change shares by orders of magnitude.”
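The back-of-the-envelope math shows why the data scientists shrugged. Taking the roughly 90,000 shares reported below for Trump’s “I WON THE ELECTION” posts as a reference point:

```python
# Applying the leaked "~8%" label effect to a post with Trump-scale
# reach. The 90,000-share figure is the one cited below for his
# "I WON THE ELECTION" posts; the math is just illustrative.

reshares_unlabeled = 90_000
label_effect = 0.08  # labels cut reshares by roughly 8%

reshares_labeled = reshares_unlabeled * (1 - label_effect)
print(f"{reshares_labeled:,.0f}")  # 82,800 -- same order of magnitude
```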

That Facebook has been unable to meaningfully reduce the spread of Trump’s lies isn’t exactly shocking, particularly given how feeble the platform’s attempts at stemming the tide of misinformation have been in the lead-up to the 2020 election. But the tacit acknowledgement of the failure is illuminating, if only because it confirms that at least some employees at Facebook are alarmed at, and asking questions about, the company’s ineptitude.

Under particular internal scrutiny is Facebook’s failure to address two posts in which Trump falsely wrote, “I WON THE ELECTION”—posts that, despite bearing labels fact-checking the claim, have amassed a combined 1.7 million reactions, 350,000 comments, and 90,000 shares to date.

“Is there any induction that the ‘this post might not be true’ flags have continued to be effective at all in slowing misinformation spread?” asked one Facebook employee on one of the company’s internal message boards. “I have a feeling people have quickly learned to ignore these flags at this point. Are we limiting reach of these posts at all or just hoping that people will do it organically?”

“The fact that we refuse to hold accounts with millions of followers to higher standards than everyone else (and often they get lower standards) is one of the most upsetting things about working here,” added another employee.

In response, one researcher working on civic integrity at the company helpfully pointed out that Facebook’s policy is not to formally fact-check politicians, which leaves little room for solutions.

“Will also flag that given company policy around not fact-checking politicians the alternative is nothing currently,” they said, according to Buzzfeed.

Even in the aftermath of unveiling the much-criticized election guidelines, Facebook has continued to come under fire for high-profile missteps related to free speech on the platform. After a video in which former White House chief strategist Steve Bannon called for Dr. Anthony Fauci and FBI Director Christopher Wray to be beheaded stayed live on his Facebook page for more than 10 hours on November 12, Facebook CEO Mark Zuckerberg reportedly told staff at a company meeting that the comment was not enough to merit a suspension of Bannon’s account.

These Are the Creepiest Gadgets of 2020, According to Mozilla

The weather outside is turning frightful, and that means it’s time for the Mozilla Foundation to scare the crap out of you with its annual “Privacy Not Included” buyer’s guide. Each year, the Mozilla Foundation judges a handful of gadgets based on their privacy chops, giving shoppers an easy way to judge whether a gift will divulge mountains of personal information about the user. The companies’ privacy policies and past scandals weigh heavily on the list, which is why you’ll see Facebook and Amazon getting dinged in this year’s roundup. Mozilla isn’t able to analyze every product out there, but it usually gets most of the big ones.

Mozilla puts the devices on a scale from “not creepy” to “a little creepy” to “somewhat creepy” to “very creepy.” The worst products get saved for the “super creepy” category, which means a privacy-oriented person will likely be at least a little creeped out when the gadget is on. This year, 36 products reviewed by Mozilla’s non-profit wing utterly failed to meet its standards for privacy—while some you might expect to get low marks managed to clear the not-creepy bar.

Of the more than 130 products Mozilla researchers evaluated, most Amazon and Facebook gadgets fall into the “super creepy” category, with Facebook’s Portal and Oculus Quest 2 VR headset, Amazon’s Echo Show and Echo Dot smart devices, the Amazon Halo fitness tracker, and Amazon-owned Ring’s security cams and doorbells sitting among the worst on the list. Of the Amazon products Mozilla included, only the Kindle and Echo Buds were found to meet acceptable privacy standards. Mozilla recommends people buy exactly zero Facebook devices because, well, it’s Facebook.

Google, a company you might equate with gross privacy violations, fared better than its Big Tech brethren. The Google Nest Mini, Nest Audio, Nest thermostat, Nest security cams, and Nest Protect smoke detector all landed in the “very creepy” category—not quite as bad as Facebook and most Amazon products. The Google Pixel Buds, meanwhile, only received the “somewhat creepy” label. Mozilla admits that Google “does collect a ton of data on you,” but because it makes it possible to limit that collection through its settings, the company “seems to do a better job than some of the other Big Tech companies when it comes to privacy.”

Apple—a company that has made privacy central to its marketing—also emerges unscathed, with the Apple Watch 6, AirPods and AirPods Pro, Apple TV 4K, and even the HomePod all landing in the “a little creepy” category, the least concerning.

The Nintendo Switch, Sony’s PS5, and Microsoft’s Xbox Series X and S are all no biggie as far as privacy goes too, according to Mozilla’s list. On the flip side, Roku’s home entertainment gadgets were found to do rather badly on the privacy front. Given that Roku devices are ad factories, this isn’t surprising.

Products from these mega brands only make up a fraction of the gadgets Mozilla singled out this holiday season. Much of the list is made up of smart home gadgets, like the Withings smart scales (good) or the Atomi smart coffee maker (bad). My main takeaway? Beware most internet-connected pet gadgets and expensive smart gym equipment.

You can view the full list here, where you can also vote on how creepy you think each product is. And we’ve rounded up the worst of the lot below for quick perusing. Here are the products Mozilla says to avoid during your gift buying this year:

  1. Nvidia Shield TV
  2. Dyson Pure Cool
  3. Roku streaming stick
  4. Roku Streambar and Soundbar
  5. Levoit smart air purifiers
  6. Livescribe smart pens
  7. Hamilton Beach smart coffee maker
  8. Kobo ereaders
  9. Huawei Honor Band 5 fitness tracker
  10. Blueair wifi-connected air purifiers
  11. Schlage Sense smart deadbolt
  12. Schlage Encode smart deadbolt
  13. NordicTrack RW 500 and 900 rowers
  14. NordicTrack T Series treadmills
  15. Wickedbone interactive gaming toy for dogs
  16. Tonal
  17. Artie 3000 coding robot
  18. Huawei Smart Watch ES
  19. SpotOn Fence dog GPS tracker
  20. Xiaomi Mi Band 5
  21. Xiaomi Amazfit Band 5
  22. Coway Airmega 300S and 400S air purifiers
  23. Greater Goods wifi smart body scale
  24. Ring video doorbell
  25. Ring security cams
  26. Ubtech Jimu robot kits
  27. Atomi smart coffee maker
  28. DJI Mavic Mini
  29. Dogness iPet robot
  30. Simplisafe security cams
  31. KidKraft Alexa 2-in-1 Kitchen and Market
  32. Amazon Halo fitness tracker
  33. ikuddle Auto-Pack litter box
  34. Oculus Quest 2 VR headset
  35. Facebook Portal
  36. Moleskine Smart Writing set

If You Can’t Beat Them, Make Some Shit Up About Them and Hope It Goes Viral

Graphic: Gizmodo, Photos: Getty Images, Screenshot via Twitter

Hellfeed is your bimonthly resource for news on the current heading of the social media garbage barge.

Your feeds are not currently full of news about the ongoing civil war between the Real American Front and the Alliance to Restore Democracy. In fact, there’s barely any Second American Civil War going on at all, which is to say that things could theoretically be going a lot worse after November 3 than your relatives, friends, and random doomsayers on social media might have predicted.

That said, things are not exactly all well in the state of Denmark, and Big Tech is right in the middle of this clusterfuck—liberals are blaming it for helping Republicans trick a sizable percentage of the country into thinking Joe Biden somehow bribed every poll worker in the country, while conservatives are working themselves into a lather trying to find ways to blame Big Tech censorship for the outcome. So here’s Hellfeed: Jesus Christ Make It Stop Already Edition.

Can we pat ourselves on the back yet?

Election Day passed without the prophesied catastrophic breakdown of social and political order—hey, give it time!—but that’s a pretty low bar. Donald Trump has refused to concede that Biden will be the 46th president and instead retreated further into a reassuring fantasy that hundreds of thousands of fraudulent votes were cast by Democrats in key states, thus making him the actual winner. Republicans have largely done either nothing to stop him or outright endorsed the president’s ongoing effort to pull off what would possibly be the laziest coup d’etat in world history.

Facebook and Twitter took some steps to curb the most egregious disinformation spreading before and during the election, taking down posts and pages and flagging a number of Trump’s posts. This functionally did little to stem the tide. Hoaxes and lies continue to spread faster via the web than the companies can take them down (Pinterest, LinkedIn, and Nextdoor are experiencing their own problems, while YouTube is doing virtually nothing, according to the New York Times). As NBC News noted, tech firms have historically been very reluctant to police powerful actors like the White House for fear of backlash, and they’re still playing catchup now with mixed results. Beyond social media, political operatives ramped up their efforts to spread hoaxes via methods nearly impossible to stop, like robocalls and email.

As far as the big social media companies go, the takeaway is that they turned a blind eye to everything from Trump posts and QAnon to run-of-the-mill propaganda operations for years. For example, 538 argued that Trump and conservative media, operating unchecked on Facebook and Twitter, spent years seeding doubt about the electoral process in a manner that makes last-minute responses somewhat futile:

“Priming is where an external source, a sender of information, is trying to prime people to think a certain way,” said Mark Whitmore, a professor of management and information systems at Kent State University who has studied misinformation and cognitive bias. “One of the ways in which priming occurs is through partisanship. When that happens, people have a greater tendency to think along the lines of whatever party they feel they belong to.”

When people are already primed to think about a topic in a certain way, it can lead them to seek out information that confirms their existing beliefs… There’s also the illusory truth effect: a phenomenon in which the more times people are exposed to an idea, the more likely they are to perceive it as true, regardless of political leanings.

Per the Washington Post, Trump and his allies were able to exploit a “network of new and existing Facebook pages, groups and events to rally people and spark real-world intimidation of poll workers” to popularize lies about the outcome of the election. On Twitter, the labels seemed to do little to discourage widespread sharing of the president’s ravings, and on Facebook, they could simply be ignored. YouTube barely bothered to lift a finger except to attach warning labels to election-related videos, whether they be truthful or not, and restrict political ads.

This is all to say that whatever the platforms tried or didn’t try, hoaxers and liars dominated. (Slate has a roundup, including some initial data, here.) A Politico poll published this week found that some 70 percent of Republicans don’t believe the elections were free and fair, double the 35 percent who said so before the election. The vast majority of that 70 percent endorsed conspiracy theories that mail-in voting and ballot tampering helped Biden win.

One thing that’s clear is that the voter fraud mess wasn’t the product of a spontaneous, grassroots uprising arising from social media but the result of a deliberate, long-term plan by Trump and his allies in the GOP and news media (TV, print, and online) to undermine the results of the election. Researchers at Harvard’s Berkman Klein Center for Internet & Society released a study earlier this year finding social media played a “secondary and supportive role” in “an elite-driven, mass-media led process.”

Say the line, Zuckerberg

Mark Zuckerberg has reportedly told employees at Facebook that he believes “the outcome of the election is now clear and Joe Biden is going to be our next president,” though he didn’t give specific statistics on how many false or hoax claims the company had taken down. As the Times’ Mike Isaac noted, Zuckerberg hasn’t posted in two weeks and hasn’t publicly acknowledged the election results.

Facebook shuts down “Stop the Steal” rallies

Per the New York Times, Facebook took quick action to wipe out one of the fastest-growing groups in the company’s history: a “Stop the Steal” group that started on Wednesday and by Thursday had 320,000 members, at one point adding 100 new members every 10 seconds. During its brief existence, it managed to flood Facebook (and, due to overflow, Twitter, YouTube, Instagram, and right-wing media) with hysterical and baseless posts about voter fraud that never happened.

“The group was organized around the delegitimization of the election process, and we saw worrying calls for violence from some members of the group,” Tom Reynolds, a Facebook spokesperson, told the Times.

Far-right groups and conservative organizations subsequently called for Trump supporters to assemble in DC on Saturday under a variety of names such as the Million MAGA March, the March for Trump, and Stop the Steal DC. Some of the events were deleted, and the remaining Facebook groups show planned attendance of no more than a few thousand (at best—in the past few years, both Trump and far-right groups have struggled to draw more than a gaggle of attendees at DC events).

The labels, they do nothing

On Thursday, Twitter released data on its enforcement of its Civic Integrity Policy from Oct. 27 to Nov. 11, saying that it had labeled 300,000 tweets—or 0.2% of all election-related posts—as “disputed and potentially misleading” and hid 456 of those behind warning labels. Twitter claimed this led to a roughly 29 percent decrease in quote tweeting:

Approximately 74% of the people who viewed those Tweets saw them after we applied a label or warning message.

We saw an estimated 29% decrease in Quote Tweets of these labeled Tweets due in part to a prompt that warned people prior to sharing.

These are Twitter’s metrics, so it’s hard to know what this indicates about the company’s moderation performance. Twitter wrote in the post that it stopped serving “liked by” and “followed by” notifications to users about other accounts they don’t follow, which made a “statistically significant difference in misinformation prevalence” (you don’t say). Twitter did say it will continue to make it less convenient to retweet other users without adding a comment.

Parlez-vous safe space?

Free speech app Parler—which used to be pronounced “parlay,” as in the French word, but is now pronounced “parlour,” like the room—has become a major destination for conservatives fleeing Facebook, Twitter, and Instagram in the wake of the election. This includes conspiracy theorists like QAnon devotees and people banned from other sites for any number of sins, who are attracted to the platform’s promise not to censor users (unless, of course, the content falls in a category that offends right-wingers).

Parler surged to the top of app store lists last weekend. According to Wired, over the course of the last week, it went from 4.5 million to 8 million users, in large part because conservative media personalities are relentlessly promoting it to their huge followings on Facebook, Instagram, and Twitter. It’s less a site for discourse and more of a bullhorn for right-wing pundits, talk show hosts, media figures, and politicians to amplify their message to a hardcore fanbase—Wired registered an account and was immediately bombarded by messages urging them to sign up for the Trump campaign’s text message list.

This could play out two ways, basically. One is that Parler actually carves out space for conservatives to lap up the hard-right content they crave. The other is that conservatives lose interest after a while, leaving behind only the most diehard fans of blowhards like Mark Levin, Dinesh D’Souza, Dan Bongino, etc. In either scenario, we imagine Parler will evolve into another node on the lucrative grifting circuit on the right, whether that’s direct or mail-order marketing or an unending supply of GoFundMes for flash-in-the-pan Republican celebrities.

Proctor this

Proctorio is one of the many creepy, invasive programs that schools and colleges with students learning from home use to monitor for cheating during exams (through forced webcam and microphone access and dubious “suspicion” algorithms). One student posted Proctorio code snippets to Twitter in September in an effort to show how the app invades privacy; Proctorio’s marketing director John Devoy and CEO Mike Olsen responded weeks later by demanding Twitter take down the tweets under the Digital Millennium Copyright Act.

Twitter did so, despite the insistence of the Electronic Frontier Foundation that the tweets were a clear example of fair use, according to TechCrunch. Twitter later restored the tweets after TechCrunch published its article.

Q dropped

QAnon believers are not doing so hot in response to news that they might not be the vanguard of Dear Leader’s ascension after all, or the fact that Q has stopped posting. (Whoops!) They’re coping by diving deeper down the rabbit hole.

Memory holes

Facebook has now implemented a Snapchat-like “Vanish Mode” on Messenger and Instagram, which allows users to have their messages auto-delete after a period of time. Honestly, there’s no good reason this shouldn’t be a standard feature on everything.

Drats

The Twitter account of the goons at Immigration and Customs Enforcement went offline Thursday. But it turns out some genius at the agency just fucked up the account’s listed birthday and got locked out.

The ban list

Some notable smackdowns in the past few weeks:

  • Google and Facebook are continuing their bans on political advertising indefinitely while Trump plots his coup attempt.
  • David Icke—the British anti-Semite who believes lizardmen secretly rule the world—finally got banned from Twitter for spreading covid-19 misinformation.
  • Former White House chief strategist Steve Bannon, founder of internet hellhole Breitbart, got banned from Twitter and Stitcher and had podcast episodes yanked from other sites after he called for Dr. Anthony Fauci and FBI Director Christopher Wray to be beheaded. Facebook slapped him on the wrist.
  • Facebook did ban a coordinated network of political pages tied to Bannon, which cumulatively had audiences of 2.5 million and promoted election fraud conspiracies, but only after they had already amassed that reach.
  • In addition to #StoptheSteal, several pro-Trump hashtags including #sharpiegate (don’t ask) and #riggedelection got blocked on Facebook and TikTok.
  • Airbnb banned some of the manchild members of the far-right Proud Boys group who were planning to travel to rallies in DC this weekend.
  • Thailand banned internet porn, which will definitely keep people from watching internet porn.

Honorable mention: Trump’s Commerce Department was supposedly set to ban TikTok from U.S. app stores on Thursday, but it chickened out.