Clubhouse Announces That Its App Will Be Available on Android Worldwide by Friday

Photo: Mark Schiefelbein (AP)

Faced with plummeting app downloads on iOS in recent months, Clubhouse has one thing to say: Hello, Android.

The audio-based social network announced in a town hall on Sunday that it would be rolling out to Android users worldwide by Friday afternoon, May 21. In a Twitter post, Clubhouse said that it would start its expansion with Japan, Brazil, and Russia on Tuesday. The company said it would add availability in other countries throughout the week, specifying that it would launch in Nigeria and India on Friday morning.

Clubhouse told Gizmodo on Sunday that it had begun its first wave of the Android beta rollout in the U.S. last week. The company then went on to launch the app in New Zealand, Canada, Australia, and the UK. Clubhouse said the app is still invitation-only, but that people can download it on the Play Store and have friends already on the app invite them in.

Besides announcing its worldwide expansion on Android, Clubhouse said it was working on feature parity between its Android and iOS apps. TechCrunch points out that Clubhouse’s Android app still lacks several features offered on iOS. During last week’s Android launch, the outlet reported, users couldn’t follow a topic, create or manage a club, link their social profiles, make payments, or change their profile name.

While Clubhouse’s expansion to Android was expected, and some might say overdue, the company may be hoping that rolling out to more devices will help it recover its lost momentum. Since its iOS launch last year, the app had seen explosive growth, attracting tech billionaires like Facebook CEO Mark Zuckerberg and Tesla CEO Elon Musk.

The shine around Clubhouse recently began to taper off, though. According to the analytics firm Sensor Tower, Clubhouse had 2 million downloads in January and then jumped to more than 9.5 million in February. Downloads dipped to 2.7 million in March and then fell again in April, when they dropped below a million.

The reasons for Clubhouse’s rollercoaster of growth over these past few months are still up in the air. Some say that the app became a success because it launched at the beginning of the pandemic, a time when so many of us were stuck inside and starved for human connection. Today, the world is different. Things are opening back up again. Vaccinated people are taking off their masks and going outside, so the idea of chatting on an audio-only platform may just not hold the same appeal.

The social app landscape is different as well because users have more options. Big Tech’s social apps are all copying Clubhouse’s format. Instagram, for instance, has given users the option to turn off their audio or video when using Instagram Live. Twitter has launched Spaces, which allows users to join virtual rooms and have real-time audio conversations with others. Facebook is also working on its own version of Clubhouse, as are LinkedIn, Spotify, and Slack, just to name a few.

It’s unclear whether Clubhouse’s global rollout to Android will save it from becoming a passing fad, but we’ll find out soon.

28-Year-Old Woman Infiltrates High School to Beef Up Her Instagram and Seriously, WTF?

Surely there are easier ways to pad your following on Instagram than trespassing on high school grounds while disguised as a teenager. That apparently did not occur to a 28-year-old Florida woman who was arrested Monday after infiltrating a school in Miami-Dade County for the Gram.

Really? That was your plan? Really?

The grown-ass woman in question, Audrey Francisquini, allegedly snuck into American Senior High School with a backpack, a “painting under one arm and a skateboard under the other,” according to the Washington Post. Police say she walked the halls of the school handing out fliers advertising her Instagram account before her cover was blown. Police reports state she was confronted by school security and gave the excuse that she was looking for the registration office, but continued to prowl the halls with fliers before being confronted again by security, CBS Miami reported. Francisquini fled but was subsequently arrested and charged with felony trespassing, misdemeanor interfering with a school, and nonviolently resisting arrest. One imagines handing out fliers with her social media handle on them didn’t exactly help her evade the authorities.

Francisquini is a former police officer who was fired from her job in DeKalb County, Georgia, after she was arrested for allegedly accessing a female colleague’s social media accounts to post revenge porn. At the time of the high school incident, she worked for Carnival Cruise Lines.

According to the Post, her trip to the school somehow managed to be almost as creepy as Never Been Kissed, a 1999 movie where Drew Barrymore infiltrates a high school as an undercover reporter and is later joined in the ruse by her brother, played by David Arquette, who attends prom in his underwear:

A student told the station that Francisquini was showing off her Instagram feed, which featured videos and several images of her wearing a “devil’s mask.”

“It’s crazy. It’s very creepy,” the student said. The station showed videos from her account, in which Francisquini wore a sinister red mask with pointy ears and black horns.

She could also have gotten the idea from that meme of Steve Buscemi asking “How do you do, fellow kids?”, though in that case, his character was an undercover drug cop.

While surrounded by police at her home, she continued to post to Instagram, according to WSVN.

“I legit have I don’t know how many cops outside right now of my house,” Francisquini said in an Instagram Story. “I’m not going outside at all.”

In court, WSVN reported, Francisquini shook her head while prosecutors read the allegations against her and was advised by her public defender not to continue blabbing about the incident on camera:

“She was carrying a skateboard, a painting, dressed similar to students to try and blend in with — as soon as you shake your head right now,” a prosecutor said.

“Ma’am, stop doing that,” Francisquini’s public defender advised… “If someone shoves a camera in your face, just don’t talk about this,” Francisquini’s public defender said before she left court.

Seriously? Wannabe social media influencers have pulled some absolutely bizarre stuff over the years, from driving their cars blindfolded and eating Tide Pods to dipping their testicles in soy sauce and lighting their homes on fire, but this one is something else.

Francisquini is out on a $2,000 bond, and Miami-Dade County Public Schools has said the “unfortunate incident” is under review, according to the Post.

Facebook Moderator Says That ‘Wellness Coaches’ Advise Karaoke and Painting for Traumatized Workers

Photo: Drew Angerer (Getty Images)

The Irish Parliament today held a hearing on Facebook’s treatment of subcontracted content moderators—the thousands of people up to their eyeballs in toxic waste in the company basement. Moderators have repeatedly reported over the years that their contract companies hurl them into traumatizing work with little coaching or mental health support, in a system designed to stifle speech.

During the hearing, 26-year-old content moderator Isabella Plunkett said that Facebook’s (or the outsourcing firm Covalen’s) mental health infrastructure is practically non-existent. “To help us cope, they offer ‘wellness coaches,’” Plunkett said. “These people mean well, but they’re not doctors. They suggest karaoke or painting – but you don’t always feel like singing, frankly, after you’ve seen someone battered to bits.” Plunkett added that she’d gotten a referral to the company doctor and never heard back about a follow-up. She also reported that moderators are told to limit exposure to child abuse and self-harm to two hours per day, “but that isn’t happening.”

Content moderation requires that workers internalize a torrent of horror. In 2017, a moderator told the Guardian:

There was literally nothing enjoyable about the job. You’d go into work at 9am every morning, turn on your computer and watch someone have their head cut off. Every day, every minute, that’s what you see. Heads being cut off.

Last year, Facebook paid out an inconsequential $52 million to contractors in a class-action lawsuit filed by a group of moderators suffering from PTSD after being exposed to child sexual abuse material, bestiality, beheadings, suicide, rape, torture, and murder. According to a 2019 Verge report on Phoenix-based moderators, self-medicating drug use at work was common at the outsourcing firm Cognizant.

Anecdotally, moderators have repeatedly reported a steep turnover rate; a dozen moderators told the Wall Street Journal that their colleagues typically quit after a few months to a year.

Plunkett has said that she was afraid to speak publicly, a common feeling among moderators. Foxglove, a non-profit advocacy group currently working to improve conditions for content moderators, said in a statement shared with Gizmodo that workers must sign NDAs of which they aren’t given copies. In 2019, The Intercept reported that the outsourcing company Accenture pressured “wellness coaches” in Austin, Texas, to share details of their “trauma sessions” with moderators. The Verge also reported that Phoenix-based moderators constantly fear retribution by way of an Amazon-style “point” system representing accuracy; employees can appeal demerits with Facebook, but their managers reportedly discouraged them from talking to Facebook, which sometimes reviewed their cases only after they had lost their jobs.

Foxglove told Gizmodo that Irish moderators claim the starting salary at Covalen is about €26,000 to €27,000, a little over $30,000 per year. Meanwhile, Facebook software engineers report on LinkedIn that their base salaries average $160,000 per year.

Facebook denied almost all of the above accounts in an email to Gizmodo. “Everyone who reviews content for Facebook goes through an in-depth training programme on our Community Standards and has access to psychological support to ensure their wellbeing,” a Facebook spokesperson said. “In Ireland, this includes 24/7 on-site support with trained practitioners, an on-call service, and access to private healthcare from the first day of employment.”

They also said that NDAs are necessary to protect users’ data, but it’s unclear why that would apply to speaking out about workplace conditions.

Covalen also denied Foxglove’s assertion that employees don’t receive copies of NDAs, saying that the confidentiality agreements are archived and that HR “is more than happy to provide them with a copy.” The company also said that it’s promoting a “speaking up policy,” encouraging employees to “raise [concerns] through identified channels.” So they can “speak out,” but internally, in designated places. Covalen didn’t say what happens when a moderator speaks out, only that it has “actively listened.” Technically, a wellness coach telling you to go to karaoke is listening, but it’s not providing any practical aid for post-traumatic stress.

Covalen also said that their “wellness coaches” are “highly qualified professionals” with at minimum master’s degrees in psychology, counseling, or psychotherapy. But it added that employees get access to six free psychotherapy sessions, implying that the 24/7 on-site “wellness coach” sessions are not actually psychotherapy sessions. Gizmodo has asked Facebook and Covalen for more specificity and will update the post if we hear back.

Given the unfortunate reality that Facebook needs moderators, the most obvious way the company could improve wellness is by easing the relentless exposure to PTSD-inducing imagery. A 2020 report from NYU Stern pointed out that 15,000 people moderate content for Facebook and Instagram, which is woefully inadequate to keep track of the three million posts flagged by users and AI per day. (When asked, Facebook did not confirm its current moderator count to Gizmodo.) The report also cites a 2018 statement on moderation from Mark Zuckerberg, who put the number at two million; even that lower figure would mean that, at minimum, roughly 133 flagged items pass before each moderator’s eyes daily. According to The Verge, one moderator would review up to 400 pieces of content per day.
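
For a rough sense of where that 133-per-day figure comes from, here is a minimal back-of-the-envelope sketch, assuming the 15,000-moderator headcount from the NYU Stern report and the two- to three-million daily flag volumes cited above:

```python
# Back-of-the-envelope check of the per-moderator workload implied by the figures above.
# Assumes ~15,000 moderators (NYU Stern, 2020) and 2-3 million flagged posts per day;
# actual queues are reportedly higher (The Verge cites up to 400 items per moderator per day).

MODERATORS = 15_000

for flagged_per_day in (2_000_000, 3_000_000):
    per_moderator = flagged_per_day / MODERATORS
    print(f"{flagged_per_day:,} flagged posts/day -> ~{per_moderator:.0f} items per moderator per day")

# 2,000,000 / 15,000 ≈ 133 (Zuckerberg's 2018 figure)
# 3,000,000 / 15,000 = 200 (the NYU Stern report's estimate)
```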

In her testimony, Foxglove co-founder and attorney Cori Crider pointed out that Facebook leans on moderators to keep the business running, yet they’re treated as “second-class citizens.” Crider urged Ireland’s Joint Committee on Enterprise, Trade, and Employment to regulate Facebook in order to end the culture of fear, bring contractors in-house, allow moderators to opt out of reviewing harmful content, enforce independent oversight of exposure limits, and offer actual psychiatric resources.

The committee offered their sympathies and well-placed disgust.

“I would never want my son or daughter to do this work,” Senator Paul Gavan said. “I can’t imagine how horrific it must be. I want to state for the record that what’s happening here is absolutely appalling. This is the dark underbelly of our shiny multi-national social media companies.”

“It’s incredibly tough to hear,” Senator Garret Ahearn said of Plunkett’s account. “I think, chair, it’s important that we do bring Facebook and these people in to be accountable for decisions that they make.”

We complain constantly that Facebook needs to do a better job of moderating. It also needs to do a better job of averting foreseeable calamity as it’s coming, rather than pay the lawyers and release the hounds later.

You can watch the full hearing here and Plunkett speak at a press conference here.

TikTok Wants To Be Its Own Economy

Photo: Drew Angerer (Getty Images)

TikTok is apparently the latest platform to make the shift from social media site to a glorified digital mall. On Tuesday, Bloomberg reported that TikTok has started floating the idea of in-app shopping to brands over in Europe, hoping to hook young EU shoppers—and their wallets—in the process.

According to the report, this e-shopping feature is still in the early stages, and there isn’t a set date for when TikTokers across the globe will start seeing it crop up in their feeds. One of the brands with access to this prototype—Hype, a streetwear label that’s right at home with TikTok’s Gen-Z audience—confirmed to Bloomberg that these tests are ongoing, but wouldn’t go into details.

Bloomberg was able to see a screenshot of what Hype’s initial TikTok Shop might look like, and from its description, it sounds pretty similar to the so-called “product catalogs” you’ve probably seen on your Instagram feed. These storefronts—at least at this early stage—sit under a brand’s main account page, and they show off a range of merch with product pictures and prices.

These features are TikTok’s latest attempt to get a slice of the “social commerce” pie, which is the insider term for shopping that gets squeezed into a given social media platform. Some analysts estimate that by the end of 2020, folks across the country had spent close to $475 billion this way, and that number’s expected to shoot toward $585 billion by the end of this year.

TikTok has spent the better part of three years trying to make headway among the e-commerce crowd. In 2019, Levi’s became one of the first retailers to use a specific TikTok product that would slap a “shop now” button onto its ads, which would then direct those who clicked on it to Levi’s store. Then in 2020, TikTok began testing a similar button that would let individual creators direct their own audiences to the store of their choice. In that case, the ad revenue would be split between the creator featured in the ad and TikTok itself. Meanwhile, the company is continuing to score deals with major names like Walmart and Elf Cosmetics, both equally ready to drop their ad dollars on the platform if it can promise some sales.

The big difference between what these brands were offered before versus what Bloomberg’s report is describing is where this shopping happens—within TikTok’s app, rather than on some brand’s (or creator’s) site.

In a statement to Bloomberg, the company said that it had been “testing and learning with e-commerce offerings and partnerships,” and that it’s “constantly exploring” new ways to add value to its users. The company added that it will “provide updates as we explore these important avenues for our community of users, creators and brands.”

Apparently, one of those avenues is focused on earning money instead of spending it. The same day that Bloomberg’s report came out, sources familiar with the company told Axios about a pilot program designed to help brands use TikTok to scout for potential job candidates to hire. Users can present their resumes in the form of a TikTok (naturally), and Axios reports that TikTok will ask these candidates to share these video resumes on their public profiles.

WhatsApp Will Turn Your Account Into a Useless Zombie If You Don’t Accept Its New Privacy Policy

Image: WhatsApp, Graphic: Shoshana Wodinsky (Gizmodo)

After facing international backlash over impending updates to its privacy policy, WhatsApp has ever-so-slightly backtracked on the harsh consequences it initially planned for users who don’t accept them—but not entirely.

In an update to the company’s FAQ page, WhatsApp clarifies that no users will have their accounts deleted or instantly lose app functionality if they don’t accept the new policies. It’s a step back from what WhatsApp had been telling users up until this point. When this page was first posted back in February, it specifically told users that those who don’t accept the platform’s new policies “won’t have full functionality” until they do. The threat of losing functionality is still there, but it won’t be automatic.

“For a short time, you’ll be able to receive calls and notifications, but won’t be able to read or send messages from the app,” WhatsApp wrote at the time. While the deadline to accept was initially early February, the blowback the company got from, well, just about everyone, caused the deadline to be postponed until May 15—this coming Saturday.

After that, folks who gave the okay to the new policy won’t notice any difference to their daily WhatsApp experience, and neither will the people who didn’t—at least at first. “After a period of several weeks, the reminder [to accept] people receive will eventually become persistent,” WhatsApp wrote, adding that users getting these “persistent” reminders will see their app stymied pretty significantly: For a “few weeks,” users won’t be able to access their chat lists, but will be able to answer incoming phone and video calls made over WhatsApp. After that grace period, WhatsApp will stop sending messages and calls to your phone entirely (until you accept).

So while WhatsApp isn’t technically disabling your app, the company is making it pretty much unusable.

What these “persistent reminders” will look like.
Graphic: WhatsApp

It’s worth mentioning here that if you keep the app installed but still refuse to accept the policy for whatever reason, WhatsApp won’t outright delete your account because of that. That said, WhatsApp will probably delete your account due to “inactivity” if you don’t connect for 120 days, as is WhatsApp policy.

In a statement to the Verge, a WhatsApp spokesperson reiterated what was already written in the new FAQ: that people’s accounts won’t be deleted, that they’ll continue to receive reminders, and that they won’t lose functionality on the day the deadline hits:

We’ve spent the last several months providing more information about our update to users around the world.

In that time, the majority of people who have received it have accepted the update and WhatsApp continues to grow. However, for those that have not yet had a chance to do so, their accounts will not be deleted or lose functionality on May 15. We’ll continue to provide reminders to those users within WhatsApp in the weeks to come.

While the company has done the bare minimum in explaining what this privacy policy update actually means, it hasn’t done much to assuage the concerns of lawyers, lawmakers, or really anyone else. And it doesn’t look like these new “reminders” will put them at ease, either.

44 Attorneys General Beg Facebook to Leave the Kids Alone

Photo: Mark Lennihan (AP)

In an open letter, forty-four attorneys general have beseeched Mark Zuckerberg to mercifully stop the company’s planned version of Instagram for children. Buzzfeed News discovered in March that Facebook—a company famous for platforming murderous rage and dangerous misinformation without consequence—has been developing a platform for kids under age 13, the minimum age to create an Instagram account.

Maybe the company wants to pipe dreams of sugar plum butts and monstrous trolls and freemium merriment to their sweet developing brains for…eating, presumably. Or maybe it’s making a desperate bid to get kids on board with a company whose primary platform looks doomed to peter out with the Boomers and needs more eyeballs on Reels.

Instagram head Adam Mosseri explained to Buzzfeed that kids are breaking the rules and getting on Instagram anyway, so “part of the solution is to create a version of Instagram for young people or kids where parents have transparency or control.” Instagram-can’t-regulate-so-screw-it is also the gist of a Facebook company spokesperson’s statement shared with Gizmodo:

“As every parent knows, kids are already online,” they said, claiming that they are gathering input from “experts in child development, child safety and mental health, and privacy advocates.”

A little shade here: “We also look forward to working with legislators and regulators, including the nation’s attorneys general.” Subtext: We will destroy you.

The attorneys general are not looking forward to working with Facebook and would like Facebook not to unleash the product, specifically because of proven failures like its inability to keep kids off the platform in the first place. They cite a report finding that in 2018, UK police documented more instances of sexual grooming on Instagram than on any other platform, followed by Facebook. They also point to the National Center for Missing & Exploited Children, which claimed that in 2020, it received over 20 million reports of child sex abuse material across all of Facebook’s platforms.

The NCMEC reports that the data comes almost entirely from service providers themselves, so TikTok’s relatively sterling count of around 22,700 instances could indicate that Facebook was simply more communicative. Still, 20 million instances, plus Facebook’s policy of fixing mistakes after everything goes to hell, should preclude the company from getting to run a playground.

In the letter, the attorneys general also point to a recent finding that Instagram had automatically suggested weight loss search terms like “appetite suppressants” for users based on their interests. A 2017 survey by the anti-bullying charity Ditch the Label found 42% of young Instagram users had been cyberbullied on the platform, a higher percentage than on any other social media service. They add that users were able to circumvent a safety control in Messenger Kids that was supposed to limit contacts to parentally approved friends. In fact, social media probably shouldn’t exist at all. The attorneys general note that social media use generally leads to increased rates of depression, suicidal thoughts, and body dysmorphia.

There isn’t a name for Instagram’s child product yet, and a Facebook spokesperson told Gizmodo that it’s in the early stages of development. The spokesperson added that the company has committed today not to show any ads to people under 13.

Don’t gorge on the tempting morsels, children. You will be trapping yourself in a digital friend circle from which there is no escape. Bobby seems cool today but in 20 years he’ll be posting about adrenochrome and lizard people.

Conservatives Demand Supreme Court Overrule Fake Facebook Court, Others Weigh In

Photo: Olivier Douliery (Getty Images)

On Wednesday, Facebook’s Oversight Board, the pseudo-legalistic, questionably independent body that the company claims has the power to review and potentially overrule official moderation decisions, issued its not-so-final proclamations regarding the status of Donald Trump’s account.

The now-former president was suspended from Facebook and its subsidiary Instagram after inciting deadly riots at the Capitol on Jan. 6 in an ill-fated bid to stop Congress from certifying Joe Biden as the winner of the 2020 election. In short, the board punted right back to Facebook, upholding the suspension itself but claiming Facebook arbitrarily made up rules regarding “indefinite” bans to handle the Trump situation. The Oversight Board told Facebook to make an actual decision to either permanently ban Trump or unlock his account within six months.

As with everything regarding this godawful company, the inevitable pile-on took a clear partisan split. Republicans and right-wingers viewed the decision not to allow Trump back on the site—which could have ramifications for any attempt at a political resurgence—as an affront to their values and free speech. Democrats and civil rights groups, for their part, generally expressed relief that the Oversight Board spared the country yet more angry posts from the ex-president but also focused on the ludicrousness of the entire venture.

As it turns out, the only people to have swallowed Facebook’s attempts to brand the Oversight Board as a pseudo-governmental arm of a sovereign entity hook, line, and sinker are right-wingers. Suddenly confronted with a vision of corporate dystopia they didn’t like, some Republicans turned to a higher power for help—among them Charlie Kirk, head of the ebullient diaper lad campus Republican and Facebook-spamming organization Turning Point USA. No, we don’t mean God, just something else equally unlikely to intervene: the Supreme Court.

Kirk tweeted:

The US Supreme Court should overturn the Facebook’s ‘Oversight Board’s’ ‘ruling’ which upholds the outlawing of the 45th President of the United States from social media.

This is a big tech, corporate oligarchy without standing and it’s gone too far. Enough is enough.

(The decision is not subject to review by SCOTUS, unless the type of lawsuit that has historically been laughed out of lower courts somehow makes it there, and the justices all decide to join Justice Clarence Thomas in throwing out decades of precedent and law to declare digital platforms as common carriers who can’t ban anyone.)

Kirk’s panicked viewpoint was mimicked by conservative pundit J.D. Vance, author of the loathsome Hillbilly Elegy, who has graduated from self-declared Trump-supporter whisperer to prospective Ohio Senate candidate.

Vance tweeted:

The Facebook oversight board has more power than the United Nations.

Conservatives were right to worry about giving our sovereignty away to a multinational institution. We just picked the wrong one.

Will Chamberlain, co-publisher of right-wing magazine Human Events, tweeted, “A corporate committee has no more legitimacy to rule on censorship issues than a random anon on Twitter.” Random QAnon conspiracy theorist turned congresswoman Lauren Boebert issued a vague threat: “Facebook will pay the price. Mark my words.”

More generally, Republicans used the Oversight Board ruling as an opportunity to continue harping endlessly about alleged anti-conservative bias in Facebook algorithms (pure bullshit, as right-wing pundits and media consistently make up the bulk of the site’s top performers). According to CNN, the usual circus of right-wing sites including Fox, Breitbart, and Gateway Pundit all led with coverage decrying the decision as Orwellian censorship. Senator Tom Cotton said that the Oversight Board shouldn’t be weighing in on “issues of free speech,” while former White House chief of staff turned radio host Mark Meadows and guest Representative Jim Jordan both agreed it was time to “break them [Big Tech] up.”

Trump issued a statement to several media outlets that we don’t give a shit about.

The reaction from Democrats and activist organizations focused less on the fate of Trump than on the convoluted, corporate funhouse carnival process by which the decision was made, as well as whether it was meaningful at all.

Representative Frank Pallone of New Jersey, chair of the House Energy and Commerce Committee, tweeted, “Facebook is amplifying and promoting disinformation and misinformation, and the structure and rules governing its oversight board generally seem to ignore this disturbing reality.” He added that “real accountability will only come with legislative action.”

Evan Greer, director of digital rights nonprofit Fight for the Future, told Gizmodo in a statement, “The vast majority of people who are silenced by Big Tech platform censorship are not former Presidents or celebrities, they are marginalized people, particularly sex workers and politically active Muslims who live outside the U.S. We can go back and forth all day about where the lines should be drawn, but simply demanding more and faster removal of content will not address the very real harms we are seeing.”

“It’s quite telling that Facebook refused to answer several of the Oversight Board’s questions about its algorithms and actual design decisions,” Greer added. “We need to strike at the root of the problem: break Big Tech giants, ban surveillance advertising and non-transparent algorithmic manipulation, and fight for policies that address this parasitic business model while preserving the transformative and democratizing power of the Internet as a powerful tool for social movements working for justice and liberation.”

David Segal, executive director of the Demand Progress Education Fund, a nonprofit that advocates enforcement of antitrust law, told Gizmodo in a statement that the Oversight Board is a smokescreen for Facebook’s business practices.

“Facebook’s monopoly status means it does not compete in a free marketplace: not on privacy, not on algorithms, not in the online advertising market–which accelerates the spread of incendiary content,” Segal wrote. “To the extent anyone focuses on what the Facebook ‘Oversight’ Board says and not what they are—a mechanism to distract attention from and provide credibility to Facebook—we give Facebook a pass for its unfair and dangerous monopolistic practice.”

The Lawyers’ Committee for Human Rights Under Law, a civil rights group, focused on the Oversight Board’s decision not to ban Trump outright.

David Brody, the head of the group’s Digital Justice Initiative, wrote to Gizmodo that “Facebook must immediately and permanently ban former President Trump.” He added the Oversight Board’s decision “did not evaluate the full context of the case and it used legal technicalities to avoid answering hard questions. For example, it failed to address Trump’s repeated use of Facebook to inflame hate and racism, or his long history of spreading divisive lies and disinformation prior to the 2020 election. Over-reliance on formalist schools of legal analysis entrenches dominant power structures by turning a blind eye to the big picture.”

Greer told Gizmodo that while there is growing pressure to act against Facebook for its monopolistic business practices, lack of transparency, and monetization of hate speech and propaganda, ill-advised legislation seeking to rein in the company’s power could do more harm than good. For example, Republicans and Democrats alike have targeted Section 230, the law that shields websites from most liability for user-generated content, with legislation that could have unforeseen consequences or threaten the legal foundations of the internet economy.

“The most dangerous thing that could happen right now is if the public accepts the idea that lawmakers should just do ‘something, anything’ about Big Tech,” Greer wrote. “We need thoughtful policies that actually address harms, not more partisan dunking and working of the refs.”

Area Man Wants You To Check Out His Blog

Photo: Drew Angerer / Staff (Getty Images)

After threatening for months to disrupt social media with a bespoke platform that would allow him to bypass community guidelines (and the embarrassingly long list of platforms he’s been banned from), our big patriotic boy has finally made good on his promise. Folks, the wait is over — the future is now, and it’s a blog.

On Tuesday, former president Donald Trump launched his long-awaited social media platform, which, if you really squint at it, kind of resembles a rudimentary version of Twitter, if Twitter had been designed by a day-glo boomer hunkered down in Palm Beach, Florida. Titled “From the desk of Donald J. Trump,” the “feed” is tucked inconspicuously into a corner of Trump’s website and features that classic commentary we all know and love — pithy observations from a very old man who always cared more about how his snarky commentary would be received than he did about actual governance or, you know, people.

“Happy Easter to ALL, including the Radical Left CRAZIES who rigged our Presidential Election, and want to destroy our Country!” reads one post.

“So nice to see RINO Mitt Romney booed off the stage at the Utah Republican State Convention,” reads another. “They are among the earliest to have figured this guy out, a stone cold loser!”

Although the platform just launched, there are already posts dating back as early as March, which implies the existence of a universe where developers could have simply “forgotten” to plug this thing into the internet and kept it offline forever, leaving Trump content to shoot his foul musings straight off into the void for the rest of time.

The platform also features the option to share Trump’s commentary on Twitter and Facebook—two platforms that, as of this writing, the former president is still banned from. Significantly, the platform’s launch comes just hours before Facebook’s Oversight Board is expected to hand down a decision on whether or not Trump will be allowed back on Facebook and its subsidiaries, including Instagram.

Trump was famously banned from a host of platforms in January after his rage-stoking rigged-election commentary incited an angry mob that stormed the U.S. Capitol, ultimately leaving five people dead.

According to Fox News, the page is the work of Campaign Nucleus, a “digital ecosystem made for efficiently managing political campaigns and organizations,” helmed by Trump’s former campaign manager, Brad Parscale.

The moral of the story is clear: You can take the Twitter out of the president, but you can’t take the tweet out of the poster. Or something like that.

Signal Tries to Run the Most Honest Facebook Ad Campaign Ever, Immediately Gets Banned

Graphic: Signal

A series of Instagram ads run by the privacy-positive platform Signal got the messaging app booted from Instagram’s ad platform, according to a blog post Signal published on Tuesday. The ads were meant to show users the bevy of data that Instagram and its parent company Facebook collect on them, by… targeting those very users with Instagram’s own adtech tools.

The actual idea behind the ad campaign is pretty simple. Because Instagram and Facebook share the same ad platform, any data that gets hoovered up while you’re scrolling your Insta or Facebook feeds gets fed into the same cesspool of data, which can be used to target you on either platform later.

Across each of these platforms, you’re also able to target people using a nearly infinite array of data points collected by Facebook’s herd of properties. That data includes basic details, like your age or what city you might live in. It may also include more granular points: say, whether you’re looking for a new home, whether you’re single, or whether you’re really into energy drinks.

Graphic: Signal

Based on this kind of minute data, Signal was able to create some super-targeted ads that were branded with the exact targeting specs that Signal used. If an ad was targeted towards K-pop fans, the ad said so. If the ad was targeted towards a single person, the ad said so. And if the ad was targeted towards London-based divorcees with degrees in art history, the ad said so.
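
To make the mechanic concrete, here is a tiny, purely hypothetical sketch of generating ad copy straight from the targeting attributes used to buy the placement, in the spirit of Signal’s campaign; the function and attribute names are invented for illustration and are not Signal’s code or Facebook’s ad API:

```python
# Hypothetical illustration only: build an ad creative that spells out the exact
# targeting attributes behind it, mirroring the idea of Signal's Instagram ads.
# The field names below are made up for this example, not real ad-platform fields.

def ad_copy_from_targeting(attrs: dict) -> str:
    details = ", ".join(f"{label}: {value}" for label, value in attrs.items())
    return f"You got this ad because of who the platform thinks you are ({details})."

print(ad_copy_from_targeting({
    "location": "London",
    "relationship status": "newly single",
    "interest": "K-pop",
    "education": "art history degree",
}))
```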

Apparently, Facebook wasn’t a fan of this sort of transparency into its system. While the company hasn’t yet responded to Gizmodo’s request for comment, Signal’s blog post explains that the ad account used to run these ads was shut down before many of these ads could reach their target audiences. Personally, I think that’s a shame—I’d have loved to see an ad that showed what Instagram really thinks of me.

Instagram Realizes It Had a Clubhouse In Its Heart All Along

Image: Instagram

As more and more social media platforms start cooking up their own Clubhouse clones, Instagram is adding new features to its existing livestreaming service to get in on the voice chat craze. On Thursday, Instagram announced it’s rolling out the option to turn off your audio or video while using Instagram Live.

Instagram tested these new features publicly on Monday during an Instagram Live broadcast between Facebook CEO Mark Zuckerberg and Adam Mosseri, the head of Instagram. Starting today, global audiences on both iOS and Android will have access to them too.

“We want to build on our Live product and offer even more ways for our creator community to drive serendipitous, engaging conversation with each other and their audience,” a company spokesperson told Gizmodo via email. “By giving people the option to mute their audio or turn off their video, hosts will have the added flexibility for their livestream experience, as the added functionality could help decrease pressure to look or sound a certain way while broadcasting live.”

For now, broadcasters won’t be able to turn other participants’ video on or off or mute them in their livestreams, but Instagram said it’s working on adding those kinds of options soon.

In a similar move, Instagram’s parent company Facebook added Live Audio Rooms to its platform and Messenger app back in March. It also has a Clubhouse-inspired Q&A platform called Hotline in the works.

LinkedIn, Twitter, Slack, and a slew of other online platforms have jumped at the chance to develop their own voice chat features in recent months, trying to capitalize on the relaxed, “video off” experience popularized by Clubhouse.

Whether or not it’s just a flash in the pan remains to be seen, but Clubhouse’s investors sure seem to have faith in its staying power. The company was reportedly valued at roughly $4 billion amid negotiations with investors during a round of funding earlier this month. However, Clubhouse’s explosive growth is starting to show signs of waning, Insider reports. According to data from app analytics firm Sensor Tower, the number of monthly app installs worldwide tanked from 9.6 million in February to 2.7 million in March.

Clubhouse’s rise in popularity has been partially tied to the coronavirus pandemic keeping many people stuck inside and pushing them toward socially distanced opportunities, such as public audio chatrooms, to connect. With the world slowly beginning to open back up again as vaccines roll out, it appears Clubhouse’s shtick may be wearing thin for some users.