TikTok Quietly Tweaks Privacy Policy to Collect Your Biometric Data

Photo: Drew Angerer (Getty Images)

TikTok quietly updated its privacy policy earlier this week in a way that allows the company to automatically collect extensive data on users in the United States—including data about their faces and voices.

The policy change, first spotted by TechCrunch, specifically states that TikTok “may collect biometric identifiers and biometric information as defined under US laws,” including “faceprints” and “voiceprints,” from videos users upload to its platform. The company also notes that “where required by law,” it will seek “any required permissions” before collecting that data. On top of this, the new policy also clarifies that other data, like “the nature of the audio, and the text of the words spoken in your User Content,” might be collected automatically as well.

There are a number of reasons TikTok may be collecting this information. One is, of course, advertising—the updated privacy policy notes that, in addition to face and voice data, it may also collect information for purposes “such as identifying the objects and scenery that appear,” which is increasingly common in the marketing world. It may also use the voice data to enable its automatic captions feature. None of this is to say these are the only things TikTok will do with that data—that seems unlikely, at best—but those are some obvious options.

It’s also worth pointing out that this face- and voice-related data appears under a new header in the policy—“Information we collect automatically”—which implies that TikTok might be able to hoover up this data without users realizing that’s what they’re agreeing to. Right now, only three states (Illinois, Texas, and Washington) have explicit regulations surrounding the ways companies can collect biometric data, while New York proposed a similar law of its own earlier this year. In the 47 states that don’t have those restrictions, there’s nothing stopping the company from collecting that data without a user’s explicit consent.

Despite falling into the Trump administration’s line of fire last year, TikTok isn’t necessarily any more of a threat than data-hoovering giants like Facebook and Google, most cybersecurity researchers have noted—though the bar those companies set is pretty low. It’s also worth mentioning that a few months back, TikTok paid a whopping $92 million to settle allegations from TikTokers in Illinois that the company was flouting the state’s biometric data-collection laws.

We’ve reached out to TikTok about the new policies and will update if we hear back.

‘Terms & Conditions Apply’ Is a Game That Dares You to Opt-Out

Screenshot: Shoshana Wodinsky (Gizmodo)

“Help! I’m stuck in a terms and conditions factory. Please accept the terms and conditions to release me.”

This disclaimer from hell is just one of the 29 awaiting you in “Terms and Conditions Apply,” a new project meant to gamify the frustrating experience of trying to opt out of tracking and ad-targeting across the web. It’s the brainchild of The Guardian and technologist Jonathan Plackett, who said he conceived the game as a way to “expose some of the dark patterns” that websites regularly use to trick us into giving up our personal data.

At least on its face, the game’s goal is pretty simple: there’s a faceless baddie named EVIL CORP that will pull every trick in the book (and then some) in order to get you to agree to give up your digital data. Your job is to say no to any notification prompts, decline any cookies the Corp tries to drop on you, and turn down any terms of service agreements that get thrown your way.

Needless to say, it gets really hard, really quick. When Evil Corp asks if you agree to the terms and conditions, suddenly the “no” button is written in Klingon, or gets buried under a slew of confusing drop-down menus. A few of the questions feature “yes” and “no” toggles that flip-flop whenever you try clicking on them. Alex Bellos—The Guardian’s resident puzzle maker who teamed up with Plackett on the project—actually posted the solutions for three of these prompts if you want a taste of how difficult some of them are.

While the bulk of these questions might be over the top, there are a few that hardly exaggerate the sort of dark patterns that sites are caught using more and more. Last week, the Electronic Frontier Foundation teamed up with a slew of other tech advocates and policy researchers to roll out the first-ever tipline where folks can report these sorts of shady tactics as they’re spotted across the web. Hopefully, the ones that get reported are a bit less devious than the ones Plackett’s come up with.

Bipartisan Privacy Bill Promising Easier Opt-Outs and Tech Transparency Actually Has a Chance of Passing

Photo: Drew Angerer (Getty Images)

A bipartisan group of senators is pushing forward a privacy bill meant to offer some much-needed transparency on the data being hoovered by our digital devices. And for once, a bill like this actually has a chance of passing.

This week, Senator Amy Klobuchar reintroduced the 2019 Social Media Privacy Protection and Consumer Rights Act to the Senate, alongside fellow Democratic Sen. Joe Manchin and Republican Sens. John Kennedy and Richard Burr. In a nutshell, the legislation would mandate that all social platforms—from Facebook to YouTube to TikTok—replace their barely legible privacy policies with something that regular humans can read and understand.

The bill also mandates that these platforms give users the option to opt out of the collection of “all personal data of the user tracked by the operator.” While many sites offer these opt-outs already in order to comply with GDPR, we’ve also seen some players turn those opt-outs into a mind-numbing hassle, while others have found convenient workarounds to keep tracking users who say no. Under Klobuchar’s drafted bill, platforms caught engaging in these shady tactics could be held liable by the FTC.

Meanwhile, if a user wants to see what kind of data is being collected from them, the bill states that these platforms need to offer a copy—free of charge—detailing whatever data the company has processed on that individual user. In cases where a company learns that users’ data has been caught up in a breach, the bill also mandates that the platform notify any affected users within 72 hours.

When Klobuchar first brought this bill forward in 2019, Congress was still reeling from the Cambridge Analytica scandal and holding hearing after hearing to mull over what a potential federal privacy law might look like. Klobuchar’s bill was one of the seven presented that year that sputtered out in the Republican-controlled Senate. But things have changed.

The biggest hurdle here is that while everyone agrees that we probably need some sort of federal data privacy law, nobody can agree on what that law should actually look like. Democrats and Republicans have publicly sparred over whether federal law should preempt each state’s privacy laws, and whether the FTC should be solely responsible for prosecuting shitty companies, or whether the bill should include a private right of action.

While Congress has been trudging along at an absolutely glacial pace, states across the country have been hard at work proposing and passing their own data privacy legislation, most notably the California Consumer Privacy Act (CCPA). The legislation isn’t perfect—even after getting some much-needed tweaking under Prop. 24—but it’s still the most comprehensive privacy law that’s been passed in the US. The California law had ripple effects for tech companies throughout the country, while tech giants like Facebook and Google have bent over backward trying to wiggle their way out of the new requirements. Hopefully, bills like the one Klobuchar is reviving will make that wiggling a little less easy.

Too Bad, Zuck: Just 4% of U.S. iPhone Users Let Apps Track Them After iOS Update

Photo: Saul Loeb (Getty Images)

Apple recently rolled out its highly anticipated App Tracking Transparency feature with iOS 14.5, which lets users decide whether apps track their activity for targeted advertising. Overwhelmingly, users seem happy to leave app tracking disabled. Just 4% of iPhone users in the U.S. have agreed to app tracking after updating their device, according to the latest data from Verizon-owned analytics firm Flurry.

Worldwide, that figure jumps to 12%, a healthy increase but one that still doesn’t spell great news for companies like Facebook that sell targeting to advertisers by hoovering up user data. With iOS 14.5, if a user has app tracking requests enabled, then whenever they download or update an app, it has to ask permission before it can track their activity. And it’s clear most users are saying: “Nah.”
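On the developer side, that prompt comes from Apple’s AppTrackingTransparency framework: an app has to ask for authorization before it can read the device’s advertising identifier (IDFA), and if the user declines, the identifier comes back zeroed out. Here’s a minimal Swift sketch of that flow—the wrapper function is just illustrative, though the ATTrackingManager call is Apple’s real API:

```swift
import AppTrackingTransparency
import AdSupport

// Illustrative wrapper: ask the user for tracking permission and react
// to their choice. Until they tap "Allow," the IDFA reads as all zeros.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // The user opted in, so the ad identifier is available
            // for cross-app targeting.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking allowed, IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // The user tapped "Ask App Not to Track" (or the prompt is
            // blocked system-wide), so no cross-app identifier is available.
            print("Tracking not allowed")
        @unknown default:
            print("Unknown tracking status")
        }
    }
}
```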

Users who want to turn off tracking altogether without rejecting permissions for each app individually can flip off the “Allow Apps to Request to Track” toggle in the iPhone’s privacy settings. Since the update launched on April 26, Flurry’s data shows that, on average, about 3% of U.S. iOS users and 5% of international iOS users have restricted app tracking.

Flurry based its findings on a sample size of 2.5 million daily mobile active users with iOS 14.5 in the U.S. and a sample size of 5.3 million such users worldwide. According to the company, its analytics tool is installed in more than 1 million mobile applications and it aggregates data from about 2 billion devices per month.

As a vocal opponent of Apple’s new feature, Facebook has launched a sweeping fearmongering campaign to convince users that these privacy measures are, in fact, a bad thing. Facebook took out multiple full-page ads arguing that Apple’s feature will devastate small businesses that rely on its ad targeting services and warning that many free sites may have to start charging users money for subscriptions or in-app purchases. Other tech giants like Snapchat, Google, and Twitter have also said that, if the majority of users decide to forgo app tracking, it will likely hit their bottom lines.

Granted, this data is just our first glimpse at the response from users. iOS 14.5 has only been out for a little less than two weeks, and, given more time, we’ll likely gain a better understanding of how many users opt in or out of app tracking. But one thing’s crystal clear: People value their privacy. And if that means missing out on a few personalized ads, well, plenty of folks seem happy to make that sacrifice.

Peloton API Exposed User Data, Even for Private Accounts

Photo: Scott Heins/Stringer (Getty Images)

Peloton’s had a rough go in the news cycle lately, and not helping matters is the fact that its leaky API allowed any hacker to obtain any user’s account data—even if that user had set their profile to private.

The vulnerability, which was discovered by security research firm Pen Test Partners, allowed requests for Peloton user account data to go through without checking whether the request was authenticated. The API itself is the bit of software that lets Peloton hardware communicate with the company’s servers that store user data. As a result, the exposed API could let anyone with a bit of know-how access any Peloton user’s age, gender, city, weight, workout stats, and birthday. Yikes.

The freaky thing here is that this was true even if a user decided to make their account private. Peloton has two separate privacy settings: one for your profile, and one to hide your age and gender. The former prevents other Peloton users from viewing your profile, while the latter prevents your age and gender from appearing in classes. (For the uninitiated, one of Peloton’s draws is a competitive leaderboard.) However, enabling these privacy settings didn’t matter. The researchers were still able to access data from private accounts.
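Pen Test Partners hasn’t published the raw endpoints here, so the sketch below is purely hypothetical (the types and function names are made up and aren’t Peloton’s actual API), but it illustrates the class of bug described above: a lookup that hands profile data to whoever asks, versus one that checks the caller’s session and the profile’s privacy flag first.

```swift
import Foundation

// Hypothetical data model, only for illustrating the bug class;
// this is not Peloton's actual API or schema.
struct Profile {
    let id: String
    let age: Int
    let city: String
    let isPrivate: Bool
}

// The vulnerable pattern: hand back whatever profile is requested,
// without checking who is asking or whether the profile is private.
func vulnerableLookup(userID: String, store: [String: Profile]) -> Profile? {
    return store[userID]
}

// The fixed pattern: reject callers without a valid session and
// honor the profile's privacy setting before returning anything.
func saferLookup(userID: String,
                 sessionToken: String?,
                 validSessions: Set<String>,
                 store: [String: Profile]) -> Profile? {
    guard let token = sessionToken, validSessions.contains(token) else {
        return nil  // unauthenticated request: return nothing
    }
    guard let profile = store[userID], !profile.isPrivate else {
        return nil  // private profiles stay hidden, even from logged-in users
    }
    return profile
}
```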

Pen Test Partners disclosed the issue to Peloton in January and gave the company a 90-day window to fix it. That’s standard protocol for these sorts of things, and Peloton itself has its own responsible disclosure program. While Peloton initially acknowledged it had received the report, it then reportedly ghosted the researchers. In early February, the company appeared to have partially fixed the issue—except private data was still accessible to authenticated Peloton users. At that point, the researchers enlisted TechCrunch—which broke this story—to get involved.

In an update, Pen Test Partners said Peloton reached out and the aforementioned vulnerabilities were fixed within seven days. In a statement to TechCrunch, a Peloton spokesperson said: “We took action and addressed the issues based on his initial submissions, but we were slow to update the researcher about our remediation efforts. Going forward, we will do better to work collaboratively with the security research community and respond more promptly when vulnerabilities are reported.”

Cool, but also not-so-cool given how the company has handled some of the issues it’s been facing recently. Specifically, today it agreed to recall both the Tread+ and Tread after a tense back-and-forth with the U.S. Consumer Product Safety Commission. The CPSC issued a warning in April saying households with small pets and children should stop using the pricier treadmill after a series of incidents, and that the company should recall the Tread+. For its part, Peloton pushed back, pointing to improper use as the root cause of the tragedies. According to an Insider report, several Peloton users had reported issues with the Tread+ as early as 2019, and many experienced slow or unresponsive customer service.

While many businesses have floundered due to the pandemic, Peloton’s not one of them. Its business has skyrocketed as people search for gym alternatives during lockdowns. However, its customer service has caught a lot of flak for months-long shipping delays, with many owners venting their frustration on social media. There’s a pattern developing here, and Peloton’s sluggish response to customers, security researchers, and consumer safety agencies is clearly something the company needs to reevaluate.

Your Old Phone Number Could Get You Hacked, Researchers Say

Photo: STR/AFP (Getty Images)

When you get a new phone number, mobile carriers will often “recycle” your old one—assigning it to a new phone and, therefore, a new customer. Carriers say the reason they do this is to stave off a hypothetical future of “number exhaustion”—a sort of “peak oil” for phone numbers, when every possible number that could be assigned to a phone has been taken.

However, the act of number recycling actually brings with it a host of security and privacy risks, a new study conducted by Princeton University researchers shows. More often than not, recycled numbers allow new customers access to old customer information, opening up opportunities for a variety of invasive, potentially exploitative encounters.

For one thing, new number owners will often continue to get personalized updates meant for the former owner. This can be quite invasive—for both parties: The study relates one particular incident in which a user of a new number was “bombarded with texts containing blood test results and spa appointment reservations” that were obviously meant for someone else. While this may sound more comical than concerning, the access presented by a phone number can obviously be a lot more dire.

Despite the fact that phone numbers are typically used in two-factor authentication or for other security purposes, people often fail to immediately update all of their online accounts when they change numbers, and old numbers can linger as methods for SMS-authenticated password resets. This means that they could be used to connect to social media, email, or consumer accounts. Researchers say other personal information could easily be collected to augment such account takeovers, typically from online “people search sites” like BeenVerified or Intelius (these sites don’t always have the most accurate, up-to-date information, however). Phone numbers could also be paired with passwords culled from large data breaches. In these ways, a bad actor could potentially commit fraud and/or hijack accounts to steal more personal data—or for other nefarious purposes.

If these scenarios sound a bit far-fetched, there nevertheless seem to be plenty of opportunities to pull them off. One of the researchers, Arvind Narayanan, said that 66% of recycled numbers they sampled were still tied to previous owners’ online accounts, and, as a result, were potentially vulnerable to account hijacking. The researchers surveyed 259 phone numbers and, of those, 215 were “recycled and also vulnerable to at least one of the three attacks,” the study says. The researchers write:

“We obtained 200 recycled numbers for one week, and found 19 of them were still receiving security/privacy-sensitive calls and messages (e.g., authentication passcodes, prescription refill reminders). New owners who are unknowingly assigned a recycled number may realize the incentives to exploit upon receiving unsolicited sensitive communication, and become opportunistic adversaries.”

Narayanan said that after he and his fellow researcher, Kevin Lee, reached out to carriers about these issues, “Verizon and T-mobile improved their documentation but have not made the attack harder.” The companies essentially made it slightly easier for users to inform themselves about these vulnerabilities, but didn’t ultimately do anything to stop the potential attacks from occurring.

This whole line of inquiry hinges largely on the premise that whoever gets your new number turns out to be a malevolent creep, willing to exploit your personal information for their gain. While that might not be the case 9 times out of 10, the vulnerabilities presented by number recycling are certainly enough to make you worry about its current safeguards.

Zuck Slowly Shrinks and Transforms Into a Corncob Ahead of Apple’s Looming Privacy Updates

Photo: Drew Angerer (Getty Images)

Facebook has pushed back against Apple’s planned rollout of anti-tracking tools at every possible opportunity, but now the social media giant seems to be changing its tune in a last-ditch effort to save face. On Thursday, CEO Mark Zuckerberg said Facebook may actually be in a “stronger position” after the privacy updates to iOS and is optimistic about how the company will weather this change, according to CNBC and CNET.

“The reality is is that I’m confident that we’re gonna be able to manage through that situation well and we’ll be in a good position,” he said in a Clubhouse room Thursday per the outlets.

With Apple’s planned privacy updates for iOS 14, which are scheduled to roll out sometime this spring, the company aims to give iOS users more transparency and control over their data by requesting permission before apps can track their activity across other apps and the web.

Facebook hasn’t been too keen on that idea given that roughly 98% of its revenue stream depends on targeted ads, which are built around monitoring a person’s browsing habits. The company launched a campaign to convince folks that personalized ads are good, actually, which has so far involved taking out full-page ads in several leading newspapers to condemn Apple and running a video ad claiming that Apple’s privacy updates are killing small businesses by not giving Facebook and other apps free rein to hoover up your data.

(As you might already suspect, Facebook’s claims have been found to be misleading at best and self-serving propaganda at worst. While advertising might become slightly more difficult for small businesses and developers with Apple’s new updates, Facebook stands to take the biggest revenue hit, not the little guys.)

Now though, with Apple’s updates looming close on the horizon, Facebook is apparently adopting a new strategy: corncobbing. Aka, to continue to embarrass oneself rather than admit to being brutally owned.

On Thursday, Zuckerberg reiterated concerns that Apple’s decision could still hurt small businesses and developers, but also expressed hope that Facebook might benefit from the situation, CNBC and CNET report.

“It’s possible that we may even be in a stronger position if Apple’s changes encourage more businesses to conduct more commerce on our platforms by making it harder for them to use their data in order to find the customers that would want to use their products outside of our platforms,” he said.

That’s a far cry from the bleak picture Facebook painted before. In August 2020, the company warned that Apple’s updates could lead to a more than 50% drop in its Audience Network advertising business, which lets mobile software developers personalize ads based on Facebook’s data. Facebook’s chief financial officer David Wehner also expressed concern it could hurt the social network’s ability to effectively target ads to users.

Apple and Facebook did not immediately respond to Gizmodo’s request for comment. Apple has repeatedly defended its planned privacy updates against Facebook’s accusations, arguing that these new features aren’t getting rid of targeted ads entirely but instead giving users the chance to opt out if they wish.

T-Mobile Is Taking All of Your Sweet, Sweet Data… Unless You Tell It to Stop

Photo: John MacDougall (Getty Images)

Heads up, fellow T-Mobile customers: You might want to take a look at your mobile carrier’s privacy policy.

As first spotted by the Wall Street Journal, the company’s latest update to its privacy policy is set to automatically enroll paying phone subscribers into an ad-targeting program that will see their data shared with an untold number of advertisers starting next month. It’s also worth noting that the privacy policy update carries over to any Sprint customers who were gobbled up by T-Mobile during the two companies’ mega-merger last year.

T-Mobile’s latest Privacy Notice lays out some of the specifics: Starting April 26, the company writes, it will begin a “new program” that shares some personal data—like the apps you download or the sites you visit—with third-party advertisers. T-Mobile also adds that it won’t share your precise location data “unless you give [T-Mobile] your express permission,” and won’t share information in a way that can be directly tied back to your device. But like we’ve written before, just because a dataset is “anonymized” doesn’t mean that you can take the company anonymizing it (T-Mobile, in this case) at its word.

T-Mobile is hardly the only major telco to pull these kinds of ad-targeting shenanigans. Verizon, for example, has an entire subsidiary—Verizon Media—that compiles data from its customers (along with a few third parties) to make its own different audience categories for targeted ads. AT&T’s had its own adtech subsidiary, Xandr, on hand since 2018 for similar purposes: pooling similar buckets of subscriber data together, and then pawning off that data to advertisers that might be interested in reaching, say, new moms, vegetarians, or luxury shoppers on their specific networks.

The company, for its part, told the Wall Street Journal that it was defaulting to this new setting because “many say they prefer more relevant ads,” which is one of the most oft-repeated arguments people in the ad industry like to throw around to justify their invasive practices. In fact, there’s another reason T-Mobile might be incentivized to push this update out right now.

The ongoing updates to Apple’s iOS 14 and the upcoming updates to Google’s Chrome browser have left some of advertisers’ core data-collection tactics—like mobile IDs and third-party cookies—in the dust. Some major companies in the data-brokering space have begun pitching their own sorts of data-hoovering tech that can circumvent these new roadblocks, and T-Mobile’s new policy seems to be another spin on that, just coming from your phone service provider.

T-Mobile’s new privacy notice, for its part, includes a pretty comprehensive guide to opting out of the program at the bottom of the page, which is something you should go do right now.

TikTok to Pay $92 Million Settlement in Nationwide Class-Action Lawsuit Over Alleged Privacy Violations

Photo: Drew Angerer (Getty Images)

TikTok has agreed to fork over $92 million to settle a class-action complaint regarding alleged privacy violations, including claims that the app and its predecessor, Musical.ly, collected personal data without users’ consent for tracking and ad targeting purposes.

The proposed settlement, which lawyers in the case estimated to be one of the largest privacy-related payouts in the U.S. to date, would include nearly all U.S. TikTok users and follows 21 class-action lawsuits alleging that TikTok engaged in a slew of scummy data harvesting tactics. The suit claims TikTok broke federal cybersecurity and privacy laws, including the Computer Fraud and Abuse Act and the Video Privacy Protection Act, along with several California consumer protection laws.

TikTok purportedly used facial recognition technology to quietly collect users’ biometric data, such as their ethnicity, gender, and age, under the guise of a preventative measure to keep minors off the app. (The company has already paid out millions in fines and settlements after reports that it illegally collected and sold young users’ data.) The suit also claims TikTok collected “highly sensitive personal data” without users’ permission and then sold that data to third parties.

TikTok has denied this most recent lawsuit’s allegations and claims it agreed to the settlement to avoid a drawn-out legal battle.

“While we disagree with the assertions, rather than go through lengthy litigation, we’d like to focus our efforts on building a safe and joyful experience for the TikTok community,” a TikTok spokesperson said in a press statement to multiple outlets.

Under the settlement’s conditions, TikTok has agreed to remain in compliance with applicable laws and refrain from hoovering up any user data that isn’t explicitly disclosed in its privacy policy. That includes recording users’ biometric information, collecting GPS or clipboard data, and sharing U.S. users’ data overseas.

The proposed settlement is set to go before U.S. District Judge John Lee of the Northern District of Illinois for final approval, per NPR. TikTok did not immediately respond to Gizmodo’s request for comment.

This Myspace Reincarnation Is Bringing Me So Much Joy

My Spacehey profile, with images from my high school Photobucket account!
Screenshot: Joanna Nelius/Gizmodo

Have you heard? The old Myspace is back. Sort of.

Coded entirely by an 18-year-old from Germany named An, Spacehey is a near carbon copy of the OG social network’s design in its early 2000s glory days. According to Vice, the new network launched last November and has so far attracted about 55,000 users worldwide.

An told Vice he wanted to create a social network that offered better privacy and allowed users to be more creative.

“Thanks to older friends and the internet, I heard a lot about [Myspace]. I came to the conclusion that you can’t find something like this nowadays,” said An.

He spent his free time during quarantine scouring internet archives to make Spacehey look as authentic as possible to the classic version of Myspace.

And he nailed it.

Myspace has been rebooted before, but never with the look and feel of the original. That’s what made it appealing, and Spacehey recreates it almost perfectly.

Spacehey offers a few features the original Myspace lacked, like the option to add links to your profiles on Twitter and other platforms that didn’t exist back then. You can also embed content from Spotify and YouTube. There’s even a section of pre-made, user-created layouts if you don’t feel like coding everything from scratch—although that’s half the fun of having a Myspace, er, Spacehey.

But all the core elements of classic Myspace are there. Friend space. Blogs. Interests. Comments. Even the little “online now” label. If you’re feeling a little inspired, Spacehey user corentin has a running list of other users who have completely decked out their profiles with fun fonts, bright neon colors, and animations that are almost too nostalgic to handle.

An says Spacehey is more than just a Myspace clone, though. He’s very active on the platform, responding directly to user complaints and unafraid to throw down the ban-hammer on anyone spreading hate speech and harassment on the network. That’s not only a welcome change of pace in the overall social media landscape, but is also in direct contrast to the approach Facebook and Twitter have taken over the years when dealing with misinformation and hate groups.

Myspace taught my high school self a lot of things. It taught me how to use HTML, and that overloading your page with flashy text and auto-playing music made for a poor user experience. It taught me how to deal with creepers sliding into my DMs. But most of all, it was a much-needed refuge from overbearing parents who liked to snoop through my text messages and listen in on my phone calls when all I wanted was privacy. I’ve been looking for a Facebook alternative for years now, and Spacehey has potential.

Of course, there are concerns about how viable a throwback to an old social network can be once the novelty of nostalgia wears off. There’s no Spacehey app, for instance, so if you want to access it on your phone you’ll have to use your browser. But I like that. I miss the early days of cell phones that couldn’t connect to the internet, which made it so easy to disconnect from social engagements for days, even weeks at a time. Spacehey could end up being a niche social media platform for a very particular user (say, an older millennial), but that’s OK.

My Spacehey page needs a lot of work. But I’ve been having a great time going through my old Photobucket account, where I saved all the menu and background images I made for my old Myspace. It’s such a unique time capsule of my younger self’s interests: my obsession with CSI, Zach Braff in Garden State (my adult self doesn’t understand that anymore), little icons I made for some of my favorite albums from Icon of Coil, A Perfect Circle, and Don Hertzfeldt’s short film Rejected. I’m still like my teenage self in some ways, but obviously have grown tremendously since then.

It may not skyrocket to TikTok heights of popularity, but Spacehey is a throwback to when curating a social media profile was fun and creative. And I’m having a blast.