Spotify and Apple Music are the two biggest players in the streaming music game, but thanks to a new deal valued at $297 million, Square is looking to change that tune by acquiring a majority stake in Tidal.
In a series of tweets, Square founder and CEO Jack Dorsey (who is also the CEO of Twitter) explained the purchase by saying that even though a fintech company like Square might not seem to have much in common with a music streaming service, he thinks Square and Tidal can work together to create a truly artist-driven business.
Dorsey said in a tweet that “New ideas are found at the intersections, and we believe there’s a compelling one between music and the economy. Making the economy work for artists is similar to what Square has done for sellers.” Okay.
As part of the deal, famous rapper and former majority Tidal owner Jay-Z will join Square’s board of directors, with Jay-Z retaining a smaller stake in the music streaming company. In the meantime, with Jay-Z moving to help oversee a larger part of Square’s businesses including Seller and Cash App, Square’s head of hardware Jesse Dorogusker will serve as Tidal’s interim leader.
However, while a music service run by and designed to support artists big and small sounds really nice, it’s not immediately clear how Square intends to make that happen. But based on the success of Square’s other ventures like Cash App, Dorsey seems confident that Square can transform Tidal in similar fashion.
“We’re going to start small and focus on the most critical needs of artists and growing their fanbases,” Dorsey said.
It will likely be some time before this joining of forces bears fruit: According to Bloomberg, Square says Tidal isn’t expected to have a meaningful impact on its sales or profits this year. Interim lead Jesse Dorogusker added that the company is less focused on Tidal’s market share and more interested in seeing Dorsey and Jay-Z’s vision of an artist-driven business come together.
Glitch, the software company formerly known as Fog Creek Software (the incubator behind Trello and Stack Overflow), now has a collective bargaining agreement with the Communications Workers of America (CWA). The news is extraordinary, not just because they claim to be the first software workers to have secured a collective bargaining agreement, but because the lead-up to ratification has been so quiet: no leaked memos, no smear campaigns, no evidence of union-busting firms. Wonderful, and eerie.
The contract is the outcome of an overwhelming majority vote to unionize under the CWA in March 2020, just before Glitch laid off about a third of its staff, citing the economic downturn. In a joint press release, Glitch workers and the CWA describe Glitch as an unusually willing partner in the negotiations. “Glitch’s management, which voluntarily recognized the union after it was announced, is an exception and should serve as a model for executives at other tech companies,” it reads.
The contract, which lasts 11 months, reportedly prioritizes not wages (described as “already generous”) but job security. It guarantees that laid-off workers will be offered their positions back if Glitch re-hires, and it ensures just cause, meaning that the employer may not discipline or fire a worker without a defensible reason.
“We love our jobs, we love working at Glitch, which is why we wanted to ensure we have a lasting voice at this company and lasting protections,” Glitch software engineer Katie Lundsgaard is quoted as saying in the press release. “This contract does that, and I hope tech workers across the industry can see that unions and start-ups are not incompatible.”
The apparently painless negotiation marks a shift in acceptance of unions, which the white collar tech sector (or at least the bosses of such companies) has traditionally treated with suspicion, as clunky institutions that are antithetical to a nimble, teamwork-oriented workplace. When Kickstarter workers broke ground, announcing a union drive in 2019, senior workers called unionization “extreme.” One organizer with the Office and Professional Employees International Union, which helped Kickstarter employees, told WIRED that they had to convince “tech workers to realize that they are workers.” Soon after the unionization drive was made public, Kickstarter fired organizers and hired a law firm that specializes in “maintaining a union-free workplace.” Employees voted to unionize anyway.
Over the past few years, unions have gone from taboo to a conceivable future for tech. Workers at the podcasting company Gimlet (now under Spotify) voted to unionize in 2019, part of a union tide that has also swept online media outlets. Recently, Medium workers (primarily engineers) lost a unionization effort by one vote but plan to keep moving forward.
Larger tech companies have met organizing efforts with aggressive pressure campaigns and alleged retaliation. Amazon fired a Staten Island warehouse worker who’d been involved in unionization efforts, reportedly planned to malign another organizer, and has notoriously inundated Alabama warehouse workers, who are currently holding a union vote, with anti-union propaganda. Google, too, has fired organizers and AI researchers critical of its business practices (in all cases, Google denied retaliation). In 2019, a group of Pittsburgh-based Google contract workers voted to join the United Steel Workers (USW), and a growing but still small group of around 890 Google workers has joined the CWA with an all-inclusive minority union, which does not have collective bargaining rights with the company.
Glitch CEO Anil Dash, a vocal proponent of ethics in tech, told Gizmodo via email that he’s pleased with the outcome. “We’re glad to have a collaborative relationship with our workers, and to have reached an agreement that works for everyone at Glitch.”
Giz Asks: In this Gizmodo series, we ask questions about everything and get answers from a variety of experts.
Once a day, at least, I’ll tear up listening to music. Just a drop or two, or not even a drop, just a pre-cry convulsion, a sudden seizure of feeling. More often than not, I have no specific memories tied to the song in question—sometimes I’m hearing it for the first or second time. If you asked me why the song was affecting me, I might point to a certain guitar tone or vocal inflection, but that would suggest another question, namely: Why is that guitar tone or vocal inflection affecting me? And the same, of course, applies to other moods that music has been known to generate, such as joy or annoyance. I could not answer the question, but, thankfully, there’s a vast literature on just this topic. For this week’s Giz Asks, we spoke with some of its authors.
In a classic paper, Patrick Juslin and Daniel Västfjäll propose several mechanisms that underlie emotional responses to music. At the most basic level, brain stem reflexes ensure that sudden, loud sounds can startle. Evaluative conditioning—the repeated pairing between a musical sequence and some other object or circumstance—can imbue the sounds with associative power. Music can elicit internal simulation of its expressive patterns, potentially leading to emotional contagion. It can evoke imagery, thoughts, or memories that themselves trigger emotional response. And finally, it can fulfill or violate expectations that people sustain while listening.
To this list, I would add that music can draw people out of their ordinary mode of attending to the world and into a subjective, participatory involvement with the sounds that many people find highly pleasurable. The diverse ways that making and experiencing music can involve feelings surely extend far beyond this list as well.
Assistant Professor of Creativity and Creative Practice at Northeastern University and Director of the Music, Imaging, and Neural Dynamics Laboratory
My lab studies music and the brain, and I have always been fascinated by why music can give us the chills, also known as frisson. In one study, we specifically asked the question of whether there are any differences in brain structure that might explain individual differences in how music makes people feel. We ran an online survey on several hundred participants, and from those we found that there were some people who consistently reported getting chills frequently when listening to music, whereas other people did not report getting chills much at all. We brought two groups of these subjects into the lab, one group who got chills all the time and another group who did not. We made sure that both groups were the same in age and gender and had the same level of musical training and the same personality factors. We verified that people who reported getting chills were indeed experiencing changes in their physiology—their heart rate was faster and their skin was more conductive (sweatier) during particular moments in which they reported getting chills in response to music. Finally, we scanned the brains of these two groups of people and showed that those who got chills all the time had higher volume of white matter connectivity between auditory areas of the brain and areas of the brain that were important for emotion and social experiences. So brain connectivity, in particular between auditory and emotion centers of the brain, seems to be linked to the ability of music to make us feel things. In a way, music is an auditory channel towards the emotional centers of the brain. Perhaps that is why we make playlists for the people we love.
Associate Professor, Music Theory and Cognition, The Ohio State University
It’s such a strange thing to actively and willingly listen to music that might make you sad. Many of us do it, but it’s not entirely clear why. In a 2011 study, Sandra Garrido and Emery Schubert found that about half of the people they asked either agreed or strongly agreed with the statement “I like to listen to music which makes me feel sadness or grief.” Sadness and grief are big topics, but I like the idea that they can be tied to aspects of empathy and compassion. David Huron and Jonna Vuoskoski argued that listening to sad music is tied in with empathy—people who enjoy listening to sad music also tend to score high on measurements of “empathic concern,” which is another way of saying that they’re more compassionate. There are evolutionary advantages to empathy and compassion, and it makes sense to me. I’m looking forward to seeing where that research goes.
In terms of nostalgia, there’s been quite a bit of work on “reminiscence bumps” and music. That is, we tend to remember things more from certain times in our life (often our teenage years), as compared to other periods. It’s been theorized that this is because we remember things more during periods of change and transition, and our teenage years are a period of intense change. I think this is why the Spotify “Time Capsule” playlists are always pretty spot-on. It’s probably not that hard to figure out your age, and your broad preferences in music, so triggering these nostalgic responses is really just about finding those most popular songs from when you were a teenager.
Music can also really help us to feel connected to others. I think there is a very good case to be made that one of the main evolutionary functions of music is to promote and facilitate social bonding and cohesion. If we think about dance, and ritual in general, it serves as a way of creating some sort of group coherence. Music facilitates that.
When the world went into lockdown in March, so many people were surprised to see these people going to the balconies to play music with one another. To me, it made perfect sense: We need sociality, bonding, and compassion in our lives, and music is one of the best avenues for getting at this need.
The desire to perform and listen to music occurs in all cultures, and the reason for this impulse has been debated for hundreds of years. In the nineteenth century, Charles Darwin argued that music evolved primarily for courtship. Soon after, the philosopher Herbert Spencer argued for a broader explanation, writing that music developed not only for love songs, but also from vocalizations produced in a range of different emotional states, including joy, triumph, grief, and rage. A recent large-scale survey at Harvard University has shown that, indeed, we can identify the function of different types of music, such as dance songs, lullabies, and love songs, regardless of the culture in which they are produced. So music serves a number of different functions, and can induce a variety of different moods, such as joy, love, anger, and a feeling of community, to name a few.
For music to be successful, we must want to hear it. What are the characteristics of a piece of music that make us want to hear it over and over? In the 1920s, Irving Berlin proposed a set of rules for writing a successful popular song. He argued that simplicity is very important, and also that the music should have familiar elements. He wrote, “There is no such thing as a new melody” and argued that effective songwriters “connect the old phrases in a new way, so that they will sound like a new tune.” This advice proved very successful. Also, a main reason why some songs are so “sticky” is that they contain many repeated phrases. This repetition causes the song to get stuck in our heads—and, in general, the more familiar we are with a song, the more we want to listen to it again.
Emotional experiences associated with music are highly idiosyncratic. You and I could be listening to the same song and feel completely different things. Or you could be listening to the same song on different occasions and feel different emotions each time. Because of that, I believe that music doesn’t really make us feel things as much as it creates a structure, or a template, that allows us to have (sometimes very powerful) emotional experiences. In my work, I’ve come to regard music as a technology that humans invented at the dawn of humanity to create and maintain communities. This technology operates on three levels: physiological, cognitive, and social.
At a physiological level, changes in the basic acoustic features of sound—like tempo, timbre, or loudness—create measurable effects in our bodies. For example, a fast tempo or increasing loudness might increase our heart rate, while a scraping sound might cause us to tense up. Because these sounds are made by concrete objects and events in our environment, through our knowledge of these associations they can lead to basic sensations of pleasure or displeasure, much like they do in our other senses.
Next up is the cognitive level. We all grow up hearing music particular to our culture, and through mere exposure to this music we develop a stylistic competency. For example, most listeners who are enculturated in Western popular music can tell the difference between a verse and a chorus of a song they’ve never heard before. Or they might have a sense when a harmonic sequence sounds like it’s about to come to a resting point. This stylistic knowledge leads to us having certain expectations about how the music is most likely to unfold: what harmonies are most likely to accompany a particular melody, or what kind of a beat is typical for a song in this or that context. Musicians will then play with those expectations to create moments that might feel like points of tension and relaxation in the music, leading to more complex emotions such as chills or awe or desire.
Finally, there is the social level, which, I think, is the most important to how music elicits emotions. The world over, music almost always takes place in the context of some social activity that involves multiple participants. In these situations, it colors our relationships with others by providing a structure for experiencing, if not exact, then at least very similar emotions. It does that mostly through a steady beat with which everyone can synchronize. This, in turn, leads to positive feelings of social belonging and cooperation, which serve to strengthen the community and help it prosper.
Do you have a burning question for Giz Asks? Email us at firstname.lastname@example.org.
Rumors that Samsung may potentially ditch Tizen, its proprietary OS for wearables, for Google’s Wear OS, have been flying lately. It’s a baffling idea, considering that Samsung smartwatches are the best Android-friendly smartwatches right now, and Wear OS is a stinking hot mess.
Case in point: 9to5Google reports that the “OK Google” or “Hey Google” phrases to trigger Google Assistant on Wear OS watches have been broken for months. Google also confirmed to The Verge that it was aware of this bug, which has been plaguing users since at least November 2020, and is working on a fix. While you can still use the Assistant by long-pressing buttons (which is actually my preferred method of bringing up Assistant on Wear OS), it’s telling that Google has known about the problem for this long and still hasn’t fixed the issue.
Wear OS has long been one of Google’s most neglected projects, but this is a new low. The main reason to pick a Wear OS watch over a Fitbit or Samsung smartwatch is native integration with Google Assistant and Google Pay. If you don’t care about quickly fixing one of the main selling points of your wearables platform, then I’m not sure I can confidently say Wear OS is going to be around for the long haul. And this isn’t the only instance. Back in October, even Google put Wear OS second by opting to release a YouTube Music app for the Apple Watch first. Worse yet, Google’s most recent updates to Wear OS were paltry at best, with slightly better app loading times and a weather tile as the marquee features.
This was all Wear OS had to offer in 2020. Compare that to Samsung’s blockbuster year, in which it absolutely knocked it out of the park with the Galaxy Watch 3. Right now, the Galaxy Watch 3 is the only other flagship smartwatch that can go toe-to-toe with the Apple Watch on nearly every single feature. Of course, it’s not perfect. Some features like its FDA-cleared electrocardiogram app are currently only available for Samsung smartphone owners. However, there’s really no competition between the Galaxy Watch 3 and even the best of the best Wear OS watches I’ve tested.
To be fair, once upon a time Samsung did use Wear OS—then Android Wear—on its smartwatches. But in 2014, it made the switch to Tizen with the Gear 2 and Gear 2 Neo, probably for the same reasons nearly every other smartwatch maker besides Fossil at the time did: Google’s clunky UI, low adoption rate, and the outdated Qualcomm Snapdragon Wear 2100 chip.
So why, why would Samsung go back to a platform that has yet to get its shit together? I can think of a few reasons, but none of them are particularly good. For starters, Tizen doesn’t have a great ecosystem of third-party apps, and switching to Wear OS might open it up to more apps. But to be quite honest, Wear OS apps don’t get much developer love, even if there are more of them. For instance, Spotify for Wear OS is a glorified remote control, while Spotify for Tizen lets you use offline playlists. Google’s native Wear OS apps are OK at best, and frankly, it’s bizarre that the built-in Google Fit workout app is actually now split into several different versions. Google Fit, even with newer updates, is also not better than Samsung Health, and having both installed on your watch is, again, tedious.
The other reason I could see Samsung making the switch would be to bring the option of Google Assistant and Google Pay to Samsung watches. And that would be awesome, because Samsung Pay is more restrictive to use than Google Pay, and who the hell actually likes Bixby? But does Samsung need to go all-in on Wear OS to incorporate Assistant and Google Pay? Fitbit manages to have Google Assistant work on Fitbit OS, why not allow Samsung to do the same? (Granted, Fitbit likely has Assistant because Google now owns the company.)
There’s a distinct chance that a Samsung Wear OS watch would suck less than every other Wear OS watch. But that’s mostly because Samsung could use its proprietary Exynos SoC instead of relying on Qualcomm’s, which is doing the bare minimum. Also, while I’m sure Samsung’s rotating bezel navigation could be ported onto a Wear OS watch, it just wouldn’t be quite as good unless Google allowed Samsung to run a Wear OS skin (which is what Oppo did with its Wear OS watch). It’s telling that Wear OS was actually decent on the Oppo Watch because it didn’t look or function anything like Wear OS. And at that point, what even is the reason to switch from Tizen again?
It’s clear that Google gets more out of Samsung using Wear OS than vice versa. Samsung bringing its smartwatch innovations to that platform would suddenly make it relevant again—provided that all of Samsung’s apps, including the ones that need FDA clearance, could seamlessly make the jump.
Except that wouldn’t make Wear OS as a whole good. For that to happen, other watchmakers would have to figure out how to make the best use of Wear OS. Google would have to actually update the damn platform consistently with actually good features, not incremental ones that are barely a blip on the radar. Qualcomm would have to figure out how to update its wearable SoC to current process technology and do it more than once every two years. And that’s if Google doesn’t decide to up-end the whole thing now that it owns Fitbit to make something else entirely.
Android users—and not just the ones who use Samsung smartphones—deserve an excellent smartwatch. This just doesn’t seem like the best way to get one.
Finally, you can sort the music in your Liked Songs playlist on Spotify.
Hitting play on the Liked Songs playlist in Spotify has always been a bit of a crapshoot for me—I never know whether I’ll get Steely Dan or Phoebe Bridgers or Ginuwine’s “Pony.” If this sounds like you (maybe sans “Pony” but that’s your business), Spotify mercifully began rolling out mood and genre filters today for both free and premium accounts.
Spotify says anyone with at least 30 songs in their Liked Songs playlist will be able to filter their music with up to 15 personalized genre and mood categories. However, these filters are populated based on the music in your playlist, which means that if you change the playlist by adding or removing titles, your mood and genre filters can change too.
To enable the feature, head to Your Library and select Liked Songs. Below the “add songs” button but above the actual playlist, you should see additional bubbles that display your mood and genre filters. To filter by a specific category, select the bubble. To disable it, just click the “X” that’ll appear next to it. In Spotify’s demo of the feature, some of the filters included things like chill, indie, electronic, rap, and folk.
Don’t be alarmed if you don’t see the feature immediately. Spotify said that it’s coming to iOS and Android in the U.S., Canada, UK, Ireland, South Africa, New Zealand, and Australia “over the coming weeks,” so keep an eye out.
Audiophiles, rejoice. Spotify announced today during a media event that starting later this year, Premium subscribers will get the option of streaming CD-quality, lossless audio via a new paid tier dubbed Spotify HiFi.
According to Spotify, high-fidelity audio is “consistently one of the most requested new features” by users. Once it’s live, Spotify HiFi users will be able to listen to lossless audio on their devices and any Spotify Connect-enabled speakers. Cryptically, Spotify also said that it’s working with “some of the world’s biggest speaker manufacturers” to make sure Spotify HiFi is “accessible to as many fans as possible.”
Spotify went light on the details regarding its new tier. Right now, Spotify streams at a max bitrate of 160 kbps for free users, and 320 kbps for Premium users. CD-quality audio has a bitrate of 1,411 kbps, but other high-fidelity streaming services sometimes go beyond that. Tidal, for instance, goes up to 9,216 kbps via its Tidal Masters tier. Qobuz, another high-fidelity service, also streams at that rate. Whether Spotify opts for “standard” CD-quality or better remains to be seen. Also, Spotify’s presentation clearly focused on music, so it’s unknown whether podcasts will also get a bump in audio quality. Pricing, and which markets Spotify HiFi will be available in, have also yet to be announced.
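To put those bitrates in perspective, here’s a quick back-of-the-envelope calculation of roughly how much data each tier would chew through per minute of audio. This is an illustrative sketch only; real-world usage varies with codec and container overhead.

```python
def mb_per_minute(kbps: int) -> float:
    """Convert a stream bitrate in kilobits/second to megabytes/minute."""
    # kilobits/s -> bits/s -> bytes/s -> bytes/min -> megabytes/min
    return kbps * 1000 / 8 * 60 / 1_000_000

tiers = [
    ("Spotify Free", 160),
    ("Spotify Premium", 320),
    ("CD quality", 1411),
    ("Tidal Masters", 9216),
]

for label, kbps in tiers:
    print(f"{label}: ~{mb_per_minute(kbps):.1f} MB/min")
```

In other words, CD-quality streaming moves roughly 4–9x the data of Spotify’s current tiers, and Tidal Masters roughly 6x more than that again, which helps explain why lossless has historically been a paid upgrade.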
Spotify isn’t the only company that’s hopped on the HiFi train. Sonos recently introduced Sonos Radio HD, a high-fidelity paid tier of its Sonos Radio service. Meanwhile, Amazon launched its version of high-quality audio streaming with Amazon Music HD back in 2019.
Aside from Spotify HiFi, the company also had a smattering of other announcements. On the podcast front, Spotify said it’s teaming up for a multi-year partnership with DC and Warner Bros. to produce narrative scripted podcasts starting with Batman Unburied. The company also teased podcasts from director Ava DuVernay, a new podcast produced by the Obamas called “Tell Them, I Am,” as well as a partnership with the Russo Brothers’ entertainment company AGBO. The company also announced a partnership with Anchor and WordPress that’ll let bloggers publish written content as podcasts.
And if that weren’t enough, Spotify also said it’s launching in 85 new markets and 36 new languages, bringing the total number to more than 170. While all these new markets will get access to Spotify’s entire global music and podcast catalog, the company says it’ll have to work with local rights holders to include more local offerings.
Epic Games lobbyists drafted legislation that will be heard in North Dakota this week, attempting to bar app stores owned by the likes of Apple and Google from taking a cut of app sales, according to a report over the weekend by The New York Times.
Senate Bill 2333, introduced to the North Dakota Senate last week, seeks to prevent big digital storefronts like Apple’s App Store and Google Play from forcing developers to distribute apps exclusively through their storefronts, or exclusively use their payment systems. It also seeks to prevent the companies behind these storefronts from punishing developers who choose other distribution or payment methods. Epic is currently involved in a legal battle about this issue, taking both Apple and Google to court after both storefronts banned Fortnite when Epic introduced its own payment method last August in protest against the App Store’s 30% cut of sales. The Times writes that debate on the North Dakota bill began on Monday and will be voted on this week.
The Times reports that North Dakota State Senator Kyle Davison was “given the draft legislation by Lacee Bjork Anderson, a lobbyist with Odney Public Affairs in Bismarck. Ms. Anderson said in an interview that she had been hired by Epic Games, the maker of the popular game Fortnite.” Anderson said she was also being paid by the Coalition for App Fairness, a nonprofit that includes Epic Games alongside other companies such as Spotify, and that seeks “fair treatment by these app stores and the platform owners who operate them.”
Epic hired its first lobbyists in late January, drawing on people from both sides of the political aisle. While it might look self-serving for Epic to be behind the legislation, the US government has been looking into big tech monopolies for a while. The Times reports that several states are exploring bills similar to North Dakota’s, or other measures that limit these companies’ power. While the bill, if it passes, would only apply to businesses operating in North Dakota, and only those that bring in over $10 million in revenue, it could change how Apple and its ilk do business. The Times writes that Apple has been pushing back against the legislation, and “Apple’s chief privacy engineer, Erik Neuenschwander, testified that the bill ‘threatens to destroy iPhone as you know it.’”
People with whom The Times spoke are uncertain if the bill will pass.
Even if you don’t want to hand it to Epic—and the company is certainly making it hard to—the issues the Fortnite case raises go beyond whether you can play a cartoon battle royale on your phone. (If you’ve lost track: no, you still can’t.) The case, currently set to go to trial in May, could benefit smaller developers and be a blow to Apple’s dominance over mobile apps if it goes in Epic’s favor. The North Dakota legislation might be another tool in Epic’s toolkit, and yet another example of the company turning its desire to line its pockets into a moral crusade. But either way, it’s about more than just Fortnite.
As the global pandemic rages on, we’ve been seeing a slew of tech companies consider making remote work an option for their employees even after covid. Now Spotify is on board: On Friday, the company announced the rollout of its Work From Anywhere program, which will give the thousands of Spotify employees around the globe the freedom to work wherever—and however—they’d like.
“We have been discussing the future of work and what it will look like for a couple of years, and have always concluded that globalization and digitalization are drivers for a more flexible workplace,” Spotify said in its announcement. “That is better for both the company and our people.”
The basic gist outlined on Spotify’s HR blog is this: With a manager’s approval, Spotify-ers can mix and match their schedules to work entirely from home, entirely from the office, or some combination of those two.
These employees will also be getting more flexibility when it comes to deciding where they’re working from. If someone who’s really jonesing for some office space happens to pick a locale that isn’t near one of the 48 Spotify offices across more than a dozen countries, then the company will set them up with a membership for a coworking space somewhere nearby. That said, this isn’t permission to jet off just anywhere in the world—Spotify notes in its announcement that there are “some limitations” on where employees can go, because of “time zone difficulties, and regional entity laws.”
“The ultimate goal of our new design approach is to ensure that employees have a place where they can focus, collaborate, and create—whether that’s at a desk, in a conference room, or in cafe spaces,” Spotify’s post explains.
In explaining its rationale behind the new program, the company added that its employees’ effectiveness “can’t be measured by the number of hours people spend in an office,” and that “work isn’t something our people come to the office for, it’s something they do.”
While not every industry is able to offer this kind of flexibility, the tech sector has embraced the WFH lifestyle over the course of the pandemic. Earlier this week, for example, the cloud giant Salesforce announced a similar policy change to Spotify’s, giving its tens of thousands of employees the option to stay fully remote, fully in-office, or a mix of the two. This comes on the heels of other companies like Shopify, Twitter, and Square giving their employees the option to continue working remotely for as long as they’d prefer, even after their offices eventually reopen. There’s even an ongoing crowdsourced Github project tracking the constantly updating list of tech companies giving their employees the chance to work from home permanently. So far we’re at nine, but we wouldn’t be surprised if that list grows longer.
Apple’s giving users another good reason to take its latest public beta for a test drive.
Hawk-eyed Redditors using the iOS 14.5 public beta noticed that when asking Siri to play music, the assistant will prompt the user to select which music app they’d like it to use to play the song or artist. Normally, Siri just defaults to the Apple Music app when an app isn’t specified. But as confirmed by Gizmodo, asking Siri to, for example, “play Phoebe Bridgers” in iOS 14.5 will bring up a menu with options for Apple Music, Spotify, Podcasts, and for some reason, Books.
Reddit user matejamm1, who shared a screengrab of the feature and had other music apps installed, saw those apps on this screen as well.
If Spotify is selected from this menu, Siri appears to default to that app the next time it is asked to play music—but the feature is fairly buggy. When I was testing the option to set my default music app as Spotify in iOS 14.5, Siri would sometimes default to Spotify, sometimes open Apple Music, and sometimes bring up that same menu for selecting which app I wanted to play the song or music from. However, colleagues who aren’t running the public beta didn’t see this option when asking Siri to play music, which suggests the prompt really is new to iOS 14.5.
It’s just one more reason to explore Apple’s latest public beta, particularly for those of us who aren’t especially keen on Apple’s own services. Recently, the company has been slowly easing up on forcing us to use them. Last year, iOS 14 introduced the ability to change your preferred mail and browser apps to non-Apple services like Chrome and Gmail. The ability to change other default options would be welcome.
Another standout feature of the iOS 14.5 public beta is the ability to use an Apple Watch to unlock your iPhone while wearing a mask. Sure, it may save you only a few seconds spent manually punching in your passcode, but listen, it does make the process of unlocking your phone while wearing a mask a little less of an annoyance. I’ve been testing it the last several days and have found it to work well.
In 2021, we’re already saving time and streaming better. A tiny victory, but I’ll take it.
Spotify’s powerful algorithm makes finding music you like a breeze. But what if it could recommend music based on how you sound?
That’s the idea proposed in a patent Spotify was recently granted (reported by Pitchfork), which outlines potential uses for this kind of technology. The patent details a concept for using audio signals—your voice, background sounds, and even your accent—to suss out what to play for you. One factor that could inform the streaming service what to play next might be the “emotional state of a speaker,” while others might attempt to determine your gender and how old you are based on your voice.
Explaining its environmental audio data collection, the patent’s authors describe how it might be used to identify where you’re located—inside, outside, on the train, at a party, etc.—and potentially how many people you’re sharing the space with.
“For example, in one aspect, the environmental metadata indicates aspects of a physical environment in which the audio signal is input,” the patent states. “In one example, the environmental metadata indicates a number of people in the environment in which the audio signal is input. In another example, the environmental metadata might indicate a location or noise level.”
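To get a feel for the kind of signal the patent is talking about, here’s a minimal sketch of one piece of it: bucketing a clip’s overall loudness (RMS energy) into a noise-level label. The thresholds, labels, and example environments below are purely illustrative assumptions, not values or methods from Spotify’s patent.

```python
import math

def noise_level(samples):
    """Classify a clip's loudness from raw audio samples in [-1.0, 1.0].

    The thresholds and labels here are arbitrary illustrations,
    not anything specified in the patent.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms < 0.05:
        return "quiet"      # e.g., alone indoors
    if rms < 0.3:
        return "moderate"   # e.g., on a train
    return "loud"           # e.g., at a party

# A near-silent clip vs. a loud one
print(noise_level([0.01, -0.02, 0.015, -0.01]))  # quiet
print(noise_level([0.8, -0.7, 0.9, -0.85]))      # loud
```

A real system would work on streaming microphone input and feed labels like these (along with voice-derived features) into the recommender, but the basic idea—turning raw audio into coarse environmental metadata—is what the patent language above is describing.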
Sure, it’s creepy as hell. But similar technologies already exist and have for years now. Still, it’s an interesting application for a service competing directly with data overlords like Apple and Amazon, both of which have their own respective music services. Of course Spotify is trying to vacuum up as much data as it can possibly get its hands on. How else is it going to perfect its algorithm and keep you hooked on its service forever? (Though, keep in mind that just because the patent for a technology exists doesn’t necessarily mean it will ever officially roll out.)
Spotify didn’t immediately return Gizmodo’s request for comment. It did, however, tell Pitchfork in a statement that the company “has filed patent applications for hundreds of inventions, and we regularly file new applications. Some of these patents become part of future products, while others don’t. Our ambition is to create the best audio experience out there, but we don’t have any news to share at this time.”