WhatsApp Can Now Do the Same Thing as Google Meet and Zoom and FaceTime

WhatsApp on Thursday announced the introduction of voice and video calls on its desktop apps for Mac and PC, expanding the ways in which you can create the illusion of meaningful human interaction in this moment of crushing isolation.

“With so many people still apart from their loved ones, and adjusting to new ways of working, we want conversations on WhatsApp to feel as close to in-person as possible, regardless of where you are in the world or the technology you’re using,” WhatsApp owner Facebook announced in its press release, a reminder that simply talking to another human being face-to-face with reckless abandon could still kill you.

This is good news for those of us who are so profoundly exhausted by the past year that even holding a phone feels impossible: the new desktop WhatsApp call functionality “makes it easier to work with colleagues, see your family more clearly on a bigger canvas or free up your hands to move around a room while talking,” the company said. You know deep down that this is no different than using Google Meet or Zoom or FaceTime or any of the other many apps that do exactly the same thing, but that word “easier” is so appealing right now that you appreciate the effort to make this seem like something to be happy about if nothing else.

And if you’re worried about WhatsApp (and, by extension, Facebook) listening to you sob uncontrollably while you explain to your mom on a video chat that, yes, you have been calling Walgreens every 15 minutes to find out if they have any leftover vaccines that they need to inject into someone’s arm before they expire and get thrown in the garbage right next to the rest of your life, but no, they still don’t have any available, fear not! “Voice and video calls on WhatsApp are end-to-end encrypted, so WhatsApp can’t hear or see them, whether you call from your phone or your computer,” the press release reads.

Voice and video calls on WhatsApp’s desktop apps, available for download here, are currently limited to one-on-one conversations, but the company says it plans to expand the feature to group chats sometime in the future, a period of time that either remains nice to think about or has become so tenuous and vague that it’s morphed into a meaningless concept, depending on where you’re at in your personal cycle between hope and misery. And we all know that the one thing we need right now is more group chats.

Study: Far-Right Propaganda Gets the Most Engagement on Facebook, Especially When It’s Lies

Republican Senator Rand Paul appears before a Facebook logo at a Fox News/Facebook hosted presidential primary debate in 2015; used here as stock photo.
Photo: Scott Olson (Getty Images)

Shocker: Despite conservatives’ endless kvetching about the supposed liberal bias of Silicon Valley technocrats, one of the easiest ways to go viral on Facebook is spouting extreme, far-right rhetoric, according to a new study by New York University’s Cybersecurity for Democracy project.

In results released Wednesday, researchers with the project analyzed various types of posts promoted as news ahead of the 2020 elections and found that “content from sources rated as far-right by independent news rating services consistently received the highest engagement per follower of any partisan group.” Far-right sources that regularly promoted hoaxes, lies, and other misinformation did even better, outperforming other far-right sources by 65%.

The researchers relied on data for 2,973 news and info sources with more than 100 followers on Facebook, provided by NewsGuard and Media Bias/Fact Check, two sites that rate the accuracy and partisan leanings of various outlets. (There’s reason to quibble with the ratings provided by these places, but they’re reasonable proxies for categorizing a large number of sources by ideological bent.) The team then downloaded some 8.6 million public posts from those nearly 3,000 sources between Aug. 10, 2020, and Jan. 11, 2021, just shy of a week after a crowd of pro-Trump rioters stormed the Capitol in an attempt to overturn the 2020 election results.

They found that sources categorized as far-right by NewsGuard and Media Bias/Fact Check did very well on Facebook, followed by those classified as far-left, other more moderately partisan sources, and finally those that were “center”-oriented. Those far-right sources tended to receive several hundred more interactions (likes, comments, shares, etc.) per 1,000 followers than other outlets. Far-right pages experienced skyrocketing engagement in early January, before the riot at the Capitol.

Graphic: New York University/Cybersecurity for Democracy/Medium (Other)

Furthermore, those far-right sources classified as regularly spreading misinformation and conspiracy theories actually did better on engagement (426 interactions per 1,000 followers a week on average) than every other type of source (including far-right pages not classified as sources of misinfo, which got 259 interactions per 1,000 followers a week on average).
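For the record, the study’s engagement metric is simple arithmetic: total interactions divided by follower count, scaled to a per-1,000 rate. Here’s a minimal sketch of that calculation in Python, with the function and the sample follower figures being our own illustration rather than the researchers’ code:

```python
def engagement_per_1k(interactions: int, followers: int) -> float:
    """Interactions (likes, comments, shares) per 1,000 followers."""
    return interactions / followers * 1000

# Illustrative numbers: a page with 50,000 followers whose posts drew
# 21,300 interactions in a week averages 426 per 1,000 followers, the
# weekly rate the study reports for far-right misinformation sources.
print(engagement_per_1k(21_300, 50_000))  # 426.0
```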

That’s not even the most egregious part of it. While far-right sources were rewarded with higher engagement on Facebook when they spread misinfo or conspiracy theories, the Cybersecurity for Democracy findings show sources classified as “slightly right,” “center,” “slightly left,” or “far left” appeared to be subject to a “misinformation penalty.” Said penalty appeared to be much heavier for sources classified as centrist or left of center.

Graphic: New York University/Cybersecurity for Democracy/Medium (Other)

“What we find is that among the far right in particular, misinformation is more engaging than non-misinformation,” Laura Edelson, the lead researcher of the study and an NYU doctoral candidate, told Wired. “I think this is something that a lot of people thought might be the case, but now we can really quantify it, we can specifically identify that this is really true on the far right, but not true in the center or on the left.”

Edelson told CNN, “My takeaway is that, one way or another, far-right misinformation sources are able to engage on Facebook with their audiences much, much more than any other category. That’s probably pretty dangerous on a system that uses engagement to determine what content to promote.”

Edelson added that because Facebook is optimized to maximize engagement, it follows that it may be more likely to juice right-wing sources by recommending more users follow them.

The researchers wrote that their data aligns with previous research by the German Marshall Fund and the Harvard Misinformation Review showing that extreme and/or deceptive content tends to perform better on social media; the latter study also found that “the association between partisanship and misinformation is stronger among conservative users.”

The study didn’t investigate why Facebook seems to favor right-wing sources, and the researchers noted that engagement numbers don’t necessarily reflect how widely content was shared and viewed across the social network. In a statement to Wired, a Facebook spokesperson used a similar line of defense: “This report looks mostly at how people engage with content, which should not be confused with how many people actually see it on Facebook. When you look at the content that gets the most reach across Facebook, it’s not at all as partisan as this study suggests.”

Facebook has floated similar defenses before—that engagement data doesn’t reflect how often a given news outlet’s content is shared sitewide or how many users actually encounter or click on it. As Recode has argued, including other data sources such as engagement on links shared privately on Facebook does indicate the top performers sitewide include more mainstream sources like CNN, the BBC, and papers like the New York Times, but doesn’t change the overall takeaway that “certain kinds of conservative content—mostly emotion-driven, deeply partisan posts” have an inherent advantage on the site.

Facebook has also tried to explain away the issue by suggesting right-wingers are just inherently more engaging, with its algorithms having little to do with it.

One anonymous executive at the company told Politico in September 2020 that “Right-wing populism is always more engaging” because it taps into “incredibly strong, primitive emotion” on topics like “nation, protection, the other, anger, fear.” The executive argued that this phenomenon “wasn’t invented 15 years ago when Mark Zuckerberg started Facebook” and was also “there in the [19]30s” (not reassuring) and “why tabloids do better than the [Financial Times].”

Prior reporting and research has repeatedly shown that while Facebook is great for creating partisan echo chambers, right-wingers are far and away the biggest beneficiaries, in some cases by design. For example, Facebook reportedly conducted internal research showing Groups were becoming vehicles for extreme and violent rhetoric, and was made aware through user reports that a feature called In Feed Recommendations that wasn’t supposed to promote political content was boosting right-wing pundits like Ben Shapiro. In these and other cases, a former company core data scientist recently told BuzzFeed, Facebook’s policy team reportedly intervened, citing the possibility of backlash from conservatives if changes were made.

Facebook is, of course, by no means the only way far-right ideas slip into the conservative mainstream—nor is the far right anything new to U.S. politics—but it is an extremely important toolset in an era where movement conservatives are extremely online and constantly searching for the latest viral outrage. While traditional conservative media like Fox News and its mutant stepchildren like Newsmax and One America News Network are powerful in their own right, Facebook offers an easy way for GOP politicians, right-wing propagandists, troll-the-libs pundits, QAnon conspiracists, and the like to repackage extreme viewpoints into memes, owns, and other shareable content for a mass audience.

“We’re looking forward to learning more about the news ecosystem on Facebook so that we can start to better understand the whys, instead of just the whats,” the Cybersecurity for Democracy team wrote in the report.

Instagram Rolls Out New Way to Say Something Stupid in Public

Instagram is expanding its livestreaming offerings with a new feature dubbed Live Rooms, which is just like Instagram Live but with up to three more people haphazardly broadcasting their thoughts into the world simultaneously.

Instagram’s Live Rooms add to the increasingly crowded livestreaming space, which includes everything from Twitch to TikTok to audio-only Clubhouse and Twitter’s Spaces. And because most of us have absolutely no business livestreaming for any reason, the feature also reflects social media’s increasing focus on professional creators, celebrities, and brands, while creating new moderation challenges for the platforms themselves.

The functionality of Live Rooms is simple and straightforward. From the home screen on Instagram, swipe left and select the Live option. You can add a title and then tap on the users who you’d like to include. Live Rooms also lets the person who launches the stream add “guests” to join them mid-broadcast: “for example, you could start with two guests, and add a surprise guest as the third participant later! 🥳,” Instagram writes in its press release about the feature.

In an attempt to limit harassment and other problematic behavior, any user who’s blocked by a Live Room participant will not be able to view the stream. And any Instagram user who’s been blocked from going live on the platform won’t be able to join as a Live Room guest. Comments can also be blocked, reported, and filtered, just as is the case for the solo Live feature.
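Instagram hasn’t said how this works under the hood, but the rule as described reduces to checking a would-be viewer against every participant’s blocklist. A hypothetical sketch of that logic, with the function name and data shapes ours rather than Instagram’s:

```python
def can_view_live_room(viewer: str, participant_blocklists: list[set[str]]) -> bool:
    """A viewer can watch only if NO participant in the room has blocked them."""
    return all(viewer not in blocklist for blocklist in participant_blocklists)

# Illustrative example: "carol" is blocked by one of the three participants,
# so she can't view the stream at all; "dave" is blocked by nobody.
blocklists = [{"spammer42"}, {"carol"}, set()]
print(can_view_live_room("carol", blocklists))  # False
print(can_view_live_room("dave", blocklists))   # True
```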

Another feature that carries over from Live is badges, which Live Room viewers can buy for between $1 and $5 to make their usernames look extra special in chat.

Of course, as lovely as surprise guests and badge bling might sound, this is the internet we’re talking about. And on the internet, terrible things happen constantly in ways that remain both shocking and entirely predictable. While various third-party tools for live video moderation exist, most automatic moderation tools are geared toward text, as Reuters recently reported. It’s possible Instagram could use live transcription tools to help moderate some problematic broadcasts, as Twitter is reportedly “looking into” for Spaces moderation. Or it could go the Chatroulette route and use AI to clean up certain dirty streams.

In an email, an Instagram spokesperson said the company is “working on other moderator controls and audio features, which we’ll be launching in the coming months. Something that’s been highly requested by our Live creators is more controls for moderators/hosts of the broadcasts.” But some hosts will surely encourage rather than forbid problematic content. And even if a live broadcast gets taken down mid-stream, that doesn’t mean it’s gone.

Facebook, which owns Instagram, knows this all too well: In 2019, a shooter livestreamed the massacre of Muslim worshipers at a mosque in Christchurch, New Zealand, using its live broadcast feature. While the company claims the original livestream was viewed “fewer than 200 times” during the broadcast and “viewed about 4000 times in total before being removed from Facebook,” Facebook (and many other social platforms) scrambled to remove copies of the horrific mass murder. Of the 1.5 million copies of the video that Facebook says were uploaded to its platform, some 300,000 made it through its filters.

In the aftermath of the 17-minute video spreading online, a Muslim advocacy group in France sued Facebook and YouTube for, as the complaint states, “broadcasting a message with violent content abetting terrorism, or of a nature likely to seriously violate human dignity and liable to be seen by a minor.” New Zealand, meanwhile, prosecuted several people for distributing or possessing the video, under a human-rights law that forbids the dissemination of terrorist propaganda or content that could “excite hostility against” people or groups based on their race, ethnicity, or national origin.

Beyond the extreme example of the Christchurch video, Live Rooms creates more opportunities for the spread of disinformation, misinformation, and other plights of our interconnected world. Facebook clearly has the ability to penalize users who violate its rules on livestreams, and it will almost certainly use those tactics to keep tabs on Live Rooms as well. But with livestreams on Instagram reportedly booming as we all remain socially distant, it’s all but guaranteed something horrible will slip through the cracks. And as the Christchurch tragedy exemplified, it only takes one slip to further spread terrorist propaganda or other dangerous content to anyone looking to find it.

It’s of course easy to criticize some new feature based on the worst possibilities, and I’m sure there will be plenty of fitness teachers, musicians, and beauty vloggers who create useful broadcasts that make the world just a bit less miserable during this miserable pandemic era. But until Facebook, Instagram, and other platforms get moderation of all types under control, it’s hard not to assume we’ll wake up one day to news that Live Rooms has become the latest hotbed of something dangerous and deranged.

Bots Reportedly Helped Fuel GameStonks Hype on Facebook, Twitter, and Other Platforms

Photo: Chris Delmas/AFP (Getty Images)

The so-called GameStonks saga had some help from automated bots hyping up “meme” stocks on Facebook, Instagram, Twitter, and YouTube, according to an analysis by the cybersecurity firm PiiQ Media reviewed by Reuters.

Users on the Reddit forum r/WallStreetBets teamed up last month to trigger a massive short squeeze of GameStop stock in a coordinated attempt to screw over hedge funds that bet the video game retailer’s stock would tank. After sending GameStop stock soaring 400% in a week, Reddit users set their sights on other beleaguered companies such as AMC Entertainment, Nokia, and BlackBerry to drive up the value of these so-called “meme” stocks.

U.S. regulators have since launched an investigation into short selling and online trading platforms, including Robinhood, the stock trading app at the center of a class-action lawsuit after it temporarily blocked users from purchasing “meme” stocks amid the buying frenzy.

In his testimony before Congress, Reddit CEO Steve Huffman said that, based on an internal analysis, bots and foreign actors did not play a “significant role” in the GameStop-related traffic on WallStreetBets, CNBC reports. However, an analysis by PiiQ Media, a startup that studies social media risks, found that bots on YouTube, Twitter, Instagram, and Facebook helped fuel the buying frenzy, although the scope of their influence remains unclear.

The firm studied patterns of keywords related to the GameStonks saga across posts and profiles from January through Feb. 18. These keywords included “GME,” the ticker symbol for GameStop stock, and “Hold the Line,” a viral call for investors not to dump their GameStop shares as prices began to come down from their historic heights.

PiiQ Media found similar “start and stop patterns” among GameStop-related posts, with activity spiking at the beginning and end of each trading day—a pattern that’s indicative of bots, the firm’s co-founder and chief technology officer, Aaron Barr, told Reuters.

“We saw clear patterns of artificial behavior across the other four social media platforms. When you think of organic content, it’s variable in the day, variable day-to-day. It doesn’t have the exact same pattern every day for a month,” he said.
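PiiQ Media hasn’t published its methodology, but the tell Barr describes (posting activity that traces the exact same daily shape for a month) is the sort of thing a simple variance check can surface. A rough, hypothetical sketch of the idea, with the function, threshold, and sample data all ours rather than PiiQ’s:

```python
import statistics

def looks_automated(daily_hourly_counts: list[list[int]]) -> bool:
    """Flag an account whose hour-by-hour posting profile barely varies
    from day to day; organic posting is noisy, while bots tend to repeat
    the same start-and-stop shape every day."""
    def normalize(day: list[int]) -> list[float]:
        total = sum(day) or 1
        return [count / total for count in day]

    profiles = [normalize(day) for day in daily_hourly_counts]
    mean_profile = [statistics.mean(hour) for hour in zip(*profiles)]
    # Mean absolute deviation of each day's shape from the average shape.
    deviation = statistics.mean(
        abs(value - mean)
        for profile in profiles
        for value, mean in zip(profile, mean_profile)
    )
    return deviation < 0.002  # arbitrary cutoff for "suspiciously uniform"

# 20 straight days of identical bursts at market open and close:
bot_like = [[0] * 9 + [30, 5, 0, 0, 0, 0, 25] + [0] * 8] * 20
print(looks_automated(bot_like))  # True
```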

PiiQ Media estimates that tens of thousands of bot accounts participated in the campaign to hype up GameStop and other “meme” stocks. While the firm didn’t include Reddit posts in its analysis, Barr told Reuters he would expect to see similar patterns of bot-like activity on the platform.

When asked about the study, a Twitter spokesperson said “bots” have become an umbrella term for a range of online activity and pointed us to a company blog post debunking a few common misconceptions about bots and platform manipulation. They also shared the following statement:

“People often refer to bots when describing everything from automated account activity to individuals who would prefer to be anonymous for personal or safety reasons, or avoid a photo because they’ve got strong privacy concerns. The term is used to mischaracterize accounts with numerical usernames that are auto-generated when your preference is taken, and more worryingly, as a tool by those in positions of political power to tarnish the views of people who may disagree with them or online public opinion that’s not favorable.”

YouTube and Facebook did not immediately respond to Gizmodo’s requests for comment. We’ll update this blog if we hear back.

The U.S. Securities and Exchange Commission is also reportedly looking into the GameStonks saga for signs of illicit market manipulation and fraud. On Friday, the agency temporarily blocked trading in 15 companies over concerns that their stock prices were being artificially inflated, per a Bloomberg report.

“We proactively monitor for suspicious trading activity tied to stock promotions on social media, and act quickly to stop that trading when appropriate to safeguard the public interest,” Melissa Hodgman, acting director of the SEC’s enforcement division, said in a statement to the outlet.

It’s entirely plausible that retail investors or other interested parties tried to capitalize on the GameStop fervor with automated campaigns. But since the scope of their influence remains unclear, it’s anyone’s guess if these campaigns were a driving force behind the glorious fiasco or just another drop in the bucket.

‘Hey Facebook,’ How About Not?

Image: Facebook

Chances are you’re familiar with using a command like “OK Google,” “Hey Siri,” or “Alexa” to summon a voice assistant. But now, the company that brought you pokes and likes—and who could forget fake news?—is trying to add “Hey Facebook” to your rotation.

Facebook’s new voice command officially rolled out today with the company’s announcement that it’s introducing “Hey Facebook” as an opt-in wake phrase on the Oculus Quest 2 to help deliver a more seamless hands-free VR experience.

However, as The Verge discovered, the “Hey Facebook” command can now also be used with Facebook’s Portal devices instead of the existing “Hey Portal” wake phrase to ask questions or perform functions like starting a video call.

Now, on a certain level, as we continue moving into this era of ambient computing, where you don’t have to be sitting in front of a monitor and keyboard to actually use a computer, the addition of another wake phrase to everyday use shouldn’t come as a big surprise.

However, there’s something that just feels off about “Hey Facebook,” and after ruminating on it a bit, I think I’ve figured out why. When you ask Siri or Alexa a question, you are directing your request to a specific entity: an AI-powered digital assistant. And even though “OK Google” would seem to be the same as “Hey Facebook,” Google has always made it clear that you are talking to the Google Assistant, not the company itself. (For what it’s worth, I still think the Google Assistant badly needs some kind of normal, human name.)

But with Facebook, there is no assistant or AI to speak to. Facebook killed off its previous assistant, M, so with “Hey Facebook,” it feels like you are calling out to the faceless company that reminds people when your birthday is and keeps tabs on unspecified amounts of personal data. In the same fashion, “Hey Facebook” is also different from saying “Hey Portal,” which inherently refers to a specific device in your home. “Hey Facebook” just doesn’t feel the same.

So while saying “Hey Facebook” is extremely weird, that won’t stop Facebook from pushing its new wake phrase instead of alternatives like “Hey Portal” or even “Hey Oculus.” Facebook usually gets what it wants, even if it’s something no one else is on board with.

Twitter Passes Stimulus Package for the Very Online

Photo: Olivier Douliery/AFP (Getty Images)

Twitter is finally rolling out a way to get paid for tweeting that doesn’t involve putting a Venmo link in your bio, promoting a Patreon, or using the app to hunt for a rich spouse.

On Thursday, the company announced a new feature that could change the way the app functions entirely: Super Follows, which is essentially paid subscriptions for individual Twitter feeds. Users will be able to paywall certain types of content behind a Super Follow and charge for access. According to the Verge, that could include giving paid subscribers access to private tweet feeds, Twitter’s new newsletter feature, or profile badges. Another feature announced on Thursday, the ability for users to create and join groups called Communities, can also be paywalled. Neither addition will roll out for a few months, and according to the Verge, it’s not clear how big a cut Twitter will take from the revenue.

This is a big shift in the way Twitter operates: a long-running and pretty tired joke on the site has been that “this site is free,” referring to none of its content directly costing any money whatsoever. The flip side of that equation is that monetizing a Twitter presence is impossible without referring fans somewhere else, even if that’s just to pay for access to a private Twitter feed. Super Follows could reshape the incentives for users to participate in the site in the first place and allow Twitter to compete directly with crowdfunding app Patreon and similar payment tools on Facebook and YouTube.

It’s also easy to see how this could open a Pandora’s box of sorts for Twitter. It’s long struggled to rein in toxic communities like white supremacists, conspiracy theorists, and far-right trolls, all of whom could now potentially use the app as a way to make money. The addition of private feeds for subscribers could also let those so inclined hide stuff like harassment campaigns behind paywalls, where such content will be accessible to a smaller pool of paying followers unlikely to report it to the site’s moderators. (It’s already possible to do this via direct messages, locked accounts, and off-site coordination, but still.)

Similarly, the Communities feature sounds pretty close to Facebook Groups. Facebook pivoted from the news feed to an emphasis on Groups in 2019, which had disastrous consequences after said Groups were infested with death threats, harassment, and calls to violence.

Another thing Twitter hasn’t clarified is whether it will allow Super Follows for sexual content, which is only subject to a handful of restrictions elsewhere on the site (like not posting it in banner images or profile pictures). Allowing it would put the site in direct competition with places like OnlyFans, though when Motherboard’s Samantha Cole asked Twitter whether or not it will allow users to pay for porn, the company responded with a non-answer, claiming that it was “examining and rethinking the incentives of our service.”

The announcement has also set off a wave of am-I-kidding-or-aren’t-I speculation from reporters and other media types about whether or not their employers will allow them to charge for tweets. It’s no secret that journalists are among the most Twitter-addicted people on the planet and comprise a large percentage of the power users that dominate the app’s feed… and thus it’s easy to see why this is an appealing fantasy for them.

Suffice it to say that while anything that subsidizes, say, tech bloggers buying fancy aquariums is welcome, it remains speculative at best how big the reader appetite to fund 280-character insights really is, or how willing news organizations will be to let staff run paid sidelines.

Twitter has recently rolled out countless features, including Instagram Stories-esque Fleets, newsletters, and a Clubhouse-like audio chat tool. It acquired a screen-sharing app called Squad that could be of use if it decides to launch a streaming service, and an adtech firm called CrossInstall, which could help fix its notoriously busted ad tools. This could all be related to a failed investor coup last March, in which vampiric hedge fund Elliott Management demanded Twitter catch up to its far more profitable competition.

According to the Verge, Twitter said during a business presentation on Thursday that paid subscriptions and the Communities feature are marked as “what’s next,” without putting forward a solid timeline for implementation. Per CNBC, Twitter told analysts and investors it hopes the new features will help it hit its goal of $7.5 billion in annual revenue by 2023, roughly double what it makes now.

Facebook’s New Ad Campaign Tries To Remind You That Targeted Ads Are Good, Actually

Photo: Sean Gallup (Getty Images)

Just two months after running a full-page ad decrying Apple’s impending updates, Facebook is rolling out another campaign meant to defend the targeted ads that make up about 98% of its multibillion-dollar revenue stream.

Per CNBC, the one-minute ad will air across digital platforms, radio, and television starting today. Facebook says the spot is meant to highlight “how personalized ads are an important way people discover small businesses on Facebook and Instagram,” and “how these ads help small businesses grow from an idea into a livelihood.”

You can give it a watch below:

The ad features a few stand-ins for the small businesses that seemingly rely on Facebook’s ad-targeting tech for their livelihoods. There’s one woman who’s shown using Instagram ads to promote her goat farm to people who want to give goat yoga a try. There’s a pair of influencers shown advertising an indie bag brand that—as Facebook points out—doesn’t only pay homage to their West African background, but also supports “empowerment work” in the region. All the while, Grace Jones (yes, the Grace Jones) does some spoken word about how wonderful ad targeting is, and how it brings these sorts of interesting businesses to people’s attention. The end of the spot then directs viewers to a dedicated site that reminds us right at the top that “good ideas deserve to be found.”

This is Facebook’s latest attempt to butter up users ahead of Apple’s planned rollout of certain anti-tracking tools meant to give iOS users a bit more transparency and control over the data that their apps are allowed to collect. Since this past summer, Facebook has argued at every possible opportunity that without the ability to freely track users and pelt them with ads, small businesses will suffer. In response, Apple fired back that it was simply standing up for iOS users who were tired of Facebook’s ongoing disregard for user privacy. Facebook shot back that this was all a baldfaced attempt on Apple’s part to monopolize that juicy user data for itself. Apple responded that these updates aren’t eliminating targeted ads entirely, but simply giving users the chance to opt out.

These iOS updates are still on track to roll out in early spring, which means Facebook needs to do all the damage control it can before then. Earlier this month, the company announced it would be testing some pop-up prompts of its own across iPhones and iPads asking users to allow Facebook to track them across apps and sites “for a better ads experience.”

What Facebook’s trying to do here is remind us all that while you probably don’t love targeted ads, you probably love yoga studios, handbags, and the people behind them. Facebook’s webpage for the campaign does what it can to convince us that those two ideas are one and the same: if you click on the landing page’s definition of what “personalized ads” are, it doesn’t tell you anything about how Facebook’s ads are targeted or how they manage to track you across the web. Instead, Facebook says that “personalized ads (aka ‘targeted ads’) help small businesses grow by reaching customers that are more likely to be interested in their products or services.” That’s it.

Facebook goes on to say that these ads don’t only support the businesses you love, but actually preserve your privacy, regardless of what Apple tells you:

Personalized ads help us connect you with businesses that are most relevant to your interests, without sharing who you are with the advertiser. Individual data that could identify you, like your name, posts, or contact information is never shared with businesses using personalized ads.

On one hand, this is all technically true: the data that these businesses use for ad targeting is so aggregated that they’re typically getting a bird’s-eye view of the number of clicks from a few hundred or thousand people at a time, rather than just one. But the only reason those quasi-anonymous pools of data even exist in the first place is because Facebook’s spent more than a decade tracking us all.
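Facebook doesn’t spell out how that aggregation works, but the principle it’s gesturing at is a minimum-audience threshold: advertisers see bucketed counts, never rows about an individual. A hypothetical sketch of that kind of gate, with the threshold and function name being our own shorthand:

```python
def report_clicks(user_ids: list[str], min_audience: int = 100) -> int | None:
    """Report a click count to an advertiser only when the bucket covers
    enough distinct users; below that floor, report nothing rather than
    risk singling anyone out."""
    if len(set(user_ids)) < min_audience:
        return None  # bucket too small to stay anonymous
    return len(user_ids)

# Illustrative: 250 clicks from 250 distinct users gets reported,
# while 12 clicks from 9 users does not.
print(report_clicks([f"user{i}" for i in range(250)]))                # 250
print(report_clicks([f"user{i}" for i in range(9)] + ["user0"] * 3))  # None
```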

It’s also worth noting here that with the latest iOS update, that creepy cache of data won’t be going anywhere. The update will just make sure that Facebook isn’t able to build up more data on all of us. Regardless, some analysts suspect that losing access to this ongoing data trickle could cost Facebook about 10% of its quarterly revenue—about $8 billion by the end of this year.

But what about those small businesses? The ones that Facebook says rely on its ad platform for their survival?

Nobody can deny that the ongoing global pandemic has devastated countless small businesses across the country, many of which see no end to the current economic climate in their near future. However, it’s unlikely that the impact of Apple’s update will be anywhere near as catastrophic as Facebook’s saying here. Back in December, Dipayan Ghosh—an ex-Facebook executive turned public critic of the company—pointed out as much. Small businesses, he said, don’t only advertise on Facebook, and they don’t only rely on Facebook’s massive reams of data to do that work. Over time, some small business owners on forums like Reddit have reached the same conclusion: advertising might be a little harder with Apple’s new update, but it won’t be impossible.

What would be truly egregious would be if a company were willfully misrepresenting the efficacy of its targeted ads to those same cash-strapped small businesses. But Facebook wouldn’t know anything about that, would it?

Facebook Finally Bans Myanmar Military After Feb. 1 Coup

A friend of Myanmar protester Mya Thwate Thwate Khaing, who died after being shot during a rally against the military coup, looks at pictures of her on a phone during a memorial service in Naypyidaw on February 25, 2021.
Photo: STR/AFP (Getty Images)

Facebook finally banned the military in Myanmar, known as the Tatmadaw, from the social media platform several weeks after the military staged a coup that toppled the democratically elected government. The ban on the country’s military includes Instagram, which is owned by Facebook.

“Events since the February 1 coup, including deadly violence, have precipitated a need for this ban. We believe the risks of allowing the Tatmadaw on Facebook and Instagram are too great,” Rafael Frankel, director of policy for the Asia-Pacific region, said in a statement posted online late Wednesday.

“We’re also prohibiting Tatmadaw-linked commercial entities from advertising on the platform,” Frankel continued. “We are using the UN Fact-Finding Mission on Myanmar’s 2019 report, on the economic interests of the Tatmadaw, as the basis to guide these efforts, along with the UN Guiding Principles on Business and Human Rights. These bans will remain in effect indefinitely.”

Facebook has already taken down military-connected pages like Tatmadaw True News Information Team, MRTV, and MRTV Live since the coup earlier this month.

Facebook’s statement doesn’t mention the 20-year-old protester, Mya Thwate Thwate Khaing, who was shot in the head during an anti-coup protest in Myanmar and later died in the hospital, but that event has attracted condemnation from around the world.

The Myanmar government is currently being run by the military, but Facebook made sure to stress that certain parts of government that are vital to public health and wellbeing, such as the Ministry of Health and Sport and the Ministry of Education, will not be affected by the new ban.

Facebook is tremendously popular in Myanmar, and one of the first things the military government did after taking power was to ban the social media platform. Service has been highly restricted ever since, with NetBlocks reporting that Facebook, WhatsApp, and Instagram are all currently down.

Facebook came under heavy criticism after the platform was used to incite genocide in Myanmar in 2018, but the company insisted on Wednesday that it held the military to the same standards as everyone else. The new statement lists four factors that caused Facebook to make this decision:

  1. The Tatmadaw’s history of exceptionally severe human rights abuses and the clear risk of future military-initiated violence in Myanmar, where the military is operating unchecked and with wide-ranging powers.
  2. The Tatmadaw’s history of on-platform content and behavior violations that led to us repeatedly enforcing our policies to protect our community.
  3. Ongoing violations by the military and military-linked accounts and Pages since the February 1 coup, including efforts to reconstitute networks of Coordinated Inauthentic Behavior that we previously removed, and content that violates our violence and incitement and coordinating harm policies, which we removed.
  4. The coup greatly increases the danger posed by the behaviors above, and the likelihood that online threats could lead to offline harm.

The difficult part to understand, of course, is why points one, two, and four in the list weren’t enough for a ban on February 1 or earlier. The word “history” is used in points one and two, an implicit acknowledgement that none of this is new.

Optimists are fond of saying “better late than never,” but that’s a tough pill to swallow when you’re talking about things like genocide and military coups. But, better late than never, Facebook.

Facebook Told Us to Suck It, for a Change

Photo: Michael Reynolds (Getty Images)

Attention, haters: you have officially been put on notice by Facebook’s VP of Global Affairs Nick Clegg.

This morning, Clegg unleashed a very salty, very strongly worded rebuke to sordid charges propagated by publishers supposedly looking for a cash grab. It’s titled “The Real Story of What Happened With News on Facebook in Australia,” and reads like a closing argument in a courtroom drama—in this case, essentially accusing Australian lawmakers of allowing the media industry to pick Facebook’s pocket through a proposed law that would compel the company to pay for journalism. While I reject every assertion in this blog post, it’s nice to finally get a human on the line—rather than the unbroken chain of prerecorded denialism and we’ll-get-back-to-yous from Facebook, which rarely relate in any way to the criticisms at hand.

Here’s the reported version of the Story of What Happened With News on Facebook in Australia: Australian lawmakers have been wrapping up some new legislation (the News Media Bargaining Code). Specifically, it gives Australian news businesses the power to bargain over a rate which Facebook and Google would have to pay in exchange for hosting news articles in full, as excerpts, or in link form. Facebook did not care for this plan and retaliated by pulling all news links shared by publishers and users from its site in Australia. (Clegg said that Facebook had to do so to protect itself from liability.)

But many also noted that this kind of proved the point that Facebook wields way too much power over news access online. Yesterday, the company flipped the news switch back on after lawmakers agreed to a handful of amendments. They would give Facebook and Google a month’s notice before enforcement and potentially exempt Facebook entirely if it proves that it already pays Australian media companies through alternative deals. (Google, on the other hand, struck a deal with NewsCorp to share some ad revenue and create a subscription service. Google already pays some participating publishers to give readers free access to paywalled articles in its News Showcase product. Facebook reportedly pays a select number of outlets to present their full stories in its News Tab.)

In a blog post yesterday, Facebook said it was “pleased” with the agreement, but Clegg saved a few choice words for (presumably) legislators and journalists. Claiming that the Australian lawmakers were deluded by “a fundamental misunderstanding” of how news on Facebook works, Clegg argued that Facebook actually provides news outlets a free marketing service. More to the point, what you’ve heard are lies [emphasis theirs]:

The assertions — repeated widely in recent days — that Facebook steals or takes original journalism for its own benefit always were and remain false.

Okay, depends on your vantage point. Moreover, that wasn’t really the lesson from the past week. We just learned that Australians like getting their news from Facebook.

Clegg could have left it there, but he decided to let it rip:

Of course, the internet has been disruptive for the news industry. Anyone with a connection can start a website or write a blog post; not everyone can start a newspaper. When ads started moving from print to digital, the economics of news changed, and the industry was forced to adapt. Some have made this transition to the online world successfully, while others have struggled to adapt. It is understandable that some media conglomerates see Facebook as a potential source of money to make up for their losses, but does that mean they should be able to demand a blank check?

I’m guessing the money-grubbing failures to which Clegg refers include the dying local papers that have struggled to adapt in part specifically because they’re losing out on locally-targeted advertising revenue which is now almost entirely pocketed by Google and Facebook. Anyway, okay, we get it! Not done yet [emphasis, again, Clegg’s]:

It’s like forcing car makers to fund radio stations because people might listen to them in the car — and letting the stations set the price. It is ironic that some of the biggest publishers that have long advocated for free markets and voluntary commercial undertakings now appear to be in favor of state sponsored price setting. The events in Australia show the danger of camouflaging a bid for cash subsidies behind distortions about how the internet works.

This is a wildly skewed metaphor; Facebook is less like the car and more like one of two radio stations that get to decide which record labels to promote. That kind of broadcast dominance has directly led to newsroom layoffs through an (allegedly knowingly misleading) emphasis on video. Facebook has also algorithmically suppressed outlets, which now compete for attention with fake and inflammatory sources. For a sense of how much an even playing field matters, the Pew Research Center recently found that 36% of Americans regularly get their news from Facebook. Its influence over the flow of information is so patently obvious that every few years we circle back to insisting that Zuckerberg just admit that he’s running a media organization.

Maybe Australian politicians, in needling Facebook to pay its fair share, finally struck a nerve. Or maybe the thrill of winning a pissing match against a sovereign nation has the company’s executives willing to gloat. Whatever the case may be, I sincerely hope that Facebook keeps the line of honest dialogue open.

YouTube Thinks It’s Cracked the Code on Appropriate Content for 9-Year-Olds

Photo: MARTIN BUREAU/AFP (Getty Images)

YouTube is attempting to bridge the gap between its dedicated Kids app and regular YouTube for parents with tweens and teens.

YouTube announced Wednesday that it will launch a new “supervised” experience in beta that will introduce additional features and settings for regulating the types of content that older children can access on the platform. Content will be restricted based on the selection of one of three categories. “Explore” will introduce videos suitable for kids 9 and older, “Explore More” will bump them into a category with videos for kids 13 and older, and “Most of YouTube” will show them nearly everything except age-restricted videos and topics that might be sensitive for non-adults.
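Since each setting is a strict superset of the one below it, the gate conceptually reduces to comparing rank levels. A hypothetical sketch of that nesting, with the enum names and values as our own shorthand rather than anything from YouTube’s actual systems:

```python
from enum import IntEnum

class Tier(IntEnum):
    """The three supervised-experience settings, modeled as nested levels."""
    EXPLORE = 1          # roughly ages 9 and up
    EXPLORE_MORE = 2     # roughly ages 13 and up
    MOST_OF_YOUTUBE = 3  # nearly everything but age-restricted/sensitive topics

def is_viewable(video_tier: Tier, account_setting: Tier) -> bool:
    """A video is viewable when its tier is at or below the account's setting."""
    return video_tier <= account_setting

# A 13-and-up video is hidden on an "Explore" account but visible on the others.
print(is_viewable(Tier.EXPLORE_MORE, Tier.EXPLORE))          # False
print(is_viewable(Tier.EXPLORE_MORE, Tier.MOST_OF_YOUTUBE))  # True
```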

YouTube says it will use a blend of machine learning, human review, and user input to vet content—a system that has worked spectacularly for YouTube in the past. Seemingly trying to get out ahead of whatever issues will arise from its busted moderation system, the announcement blog stated that YouTube knows “that our systems will make mistakes and will continue to evolve over time.”

Clearly, any tool that attempts to filter inappropriate content on YouTube is welcome and necessary. But guardians cannot rely on YouTube alone to take the wheel and guide the experience of their kids. We’ve seen how well that’s worked in the past over on YouTube’s dedicated Kids app—which is to say, not great.

Part of the problem is that YouTube’s platform, like those of other social media giants, is just too big to adequately moderate. One wrong turn can send your kid down a rabbit hole of conspiracies whether they were looking for them or not. Plus, if we’re being honest, teens and tweens are probably going to find a way to watch whatever content they want to watch regardless of how kid-proofed the home computer is anyway.

All that said, creating a middle ground between YouTube Kids and the chaos of normal YouTube is something. Just don’t bank on a perfect moderation system. Even YouTube says so.