European Commission Proposes Taking Away the Cops’ Big Boy Surveillance Machine


Photo: Dan Kitwood (Getty Images)

The EU was already giving the U.S. a run for its money on privacy regulation, and now it’s upped the ante with a dynamo of a proposal: banning AI systems that violate “fundamental rights,” with a special place in hell for law enforcement’s use of real-time biometric identification. The end of that sentence is more of a personal interpretation, but the gist is that it’s time to end the free-for-all.


The sweeping list of protected freedoms in the proposal includes the right to human dignity, respect for privacy, non-discrimination, gender equality, freedom of expression (infringed by the “chilling effect” of surveillance), freedom of assembly, the right to an effective remedy and to a fair trial, the rights of defense and the presumption of innocence, fair and just working conditions, consumer protections, the rights of the child, the integration of persons with disabilities, and environmental protection insofar as health and safety are affected.

The proposed regulation is over 100 pages long, so here’s a summary of the bans.

BANNED:

  1. An AI system that “deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.” A companion provision extends the ban to systems that exploit vulnerabilities tied to a person’s age or mental or physical disability. In a speech, European Commission Executive Vice-President Margrethe Vestager gave the example of “a toy that uses voice assistance to manipulate a child into doing something dangerous.” “Such uses have no place in Europe,” Vestager continued. “We, therefore, propose to ban them.”
  2. Social scoring by governments: “evaluation or classification of the trustworthiness of natural persons” in a way that leads to “detrimental or unfavorable treatment” in an unrelated social context and/or harms people or groups in a way that “is unjustified or disproportionate to their social behaviour or its gravity.” This implicitly calls out the Chinese Communist Party, which designed a social credit system to score citizens’ “trustworthiness”—a system that has reportedly already denied travel tickets to tens of millions of people over unpaid debts.
  3. Real-time biometric identification by law enforcement in public spaces that infringes on the public’s rights and freedoms. Exceptions are made for finding missing children, heading off an imminent threat to life or an active terrorist attack, and identifying a suspect in a serious crime, but even then, law enforcement would need prior authorization from a judicial or other independent authority, except in cases of a dire emergency.
  4. An exception is made for military uses, which fall outside the regulation entirely.

In other words, law enforcement would have to hand over their spy toys for inspection and cut out the kind of abuse that’s now rampant in the United States. Cops have abused face recognition software to make willy-nilly suspect identifications. Baltimore PD was caught using face recognition to scan Freddie Gray protesters and pick them off for outstanding warrants. Predictive policing algorithms intensify targeting in Black communities and perpetuate the cycle of disproportionate arrests. Recidivism prediction algorithms have likely lengthened prison sentences. When we get mere glimpses of secretive technology, the scope is always more terrifying than imagined.

Consumer uses, too, have wildly violated civil rights. Algorithms that assess mortgage eligibility have levied higher interest rates on Black and Latinx borrowers, and healthcare algorithms have limited Black patients’ access to care.

Such tools would all likely fall under the European Commission’s broad definition of an “AI system,” which covers machine learning, “knowledge representation,” statistical “approaches,” and search methods, among other techniques. Generally, such software can, for a “given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”


The European Commission also proposes strict regulations on AI systems that it deems “high-risk.” (The commission notes that overall this represents a very small proportion of systems in use.) “High-risk” uses include:

  • Real-time and “post” biometric identification, by anyone, outside the banned law enforcement uses.
  • Safety management for critical infrastructure like traffic and gas supply
  • Educational access and standardized test assessment
  • Job recruitment, candidate evaluation, promotion or termination
  • Allocation of government benefits
  • Determining creditworthiness
  • Emergency responder dispatch
  • Law enforcement’s assessment of individuals’ likelihood of committing crimes
  • Law enforcement’s prediction of recidivism rates
  • Law enforcement’s profiling of individuals
  • Law enforcement’s detection of a person’s emotional state and ersatz polygraph tests
  • Law enforcement’s detection of deep fakes
  • Law enforcement’s evaluation of evidence
  • Law enforcement’s use of unrelated data sets to “identify unknown patterns or discover hidden relationships in the data”
  • At the border: detection of a person’s emotional state and ersatz polygraph tests
  • At the border: assessment of a visitor’s potential “security risk, a risk of irregular immigration, or a health risk”
  • At the border: verification of travel documents
  • At the border: examination of eligibility for asylum, visa and residence permits
  • Judicial system’s research and interpretation of the law

Providers of all of the above would have to regularly monitor their technology and report back to the European Commission. Developers would be expected to create a risk management system that continually identifies, eliminates, and mitigates risk. Distributors would be expected to provide information and training to users, taking into account the end-user’s level of technical knowledge (read: cops). They would be expected to keep records of who used the technology and how, including input data (i.e., cops would have to admit they used Woody Harrelson’s photo to make a suspect ID). They’d also need to inform authorities when they become aware of a risk.


Government officials would still be free to use biometric identification in ways that don’t necessarily cause harm, Vestager added in the speech. The commission considers fingerprint or face scans by border control or customs agents to be harmless.

While some have complained that this will stifle innovation, the commission has added provisions for that too. It would encourage member states to set up “regulatory sandboxes,” supervised by a member state authority or the European Data Protection Supervisor. That sounds like a crackdown, but it’s more like an optional incubator in which start-ups get priority access.


And the European Commission reminds us that the “vast majority” of AI systems don’t fall under the above risk categories—think AI systems that don’t drive human interaction or involve identification. The commission aims to encourage things like smart sensors and algorithms that help farmers maximize food production and sustainability while cutting costs. So, no to barbaric policing and yes to sustaining life on Earth.

Great, let’s go right ahead and copy-paste this.


FTC Says Racist Algorithms Could Get You In a Lot of Trouble


Photo: Bridget Bennett (Getty Images)

Tentatively excellent news! The FTC has declared that it is serious about racist algorithms, and it will hold businesses legally accountable for using them. In a friendly-reminder-type announcement today, it said that businesses selling and/or using racist algorithms could feel the full force of its legal might.


“Fortunately, while the sophisticated technology may be new, the FTC’s attention to automated decision making is not,” FTC staff attorney Elisa Jillson wrote in a statement on Tuesday, adding that the agency “has decades of experience” enforcing laws that racist algorithms violate. Jillson writes that selling and/or using racially biased algorithms could qualify as unfair or deceptive practices under the FTC Act, and reminds businesses that racial discrimination (by algorithm or human) could violate the Fair Credit Reporting Act and the Equal Credit Opportunity Act.

The effects of algorithmic racial bias and automated white favoritism spill out far beyond the ads Facebook serves us. Racist algorithms have been shown to disproportionately deny Black people recommendations for specialized healthcare programs. They have priced mortgages at higher interest rates for Black and Latinx borrowers than for white borrowers with the same credit scores. They have drastically exaggerated Black defendants’ risk of recidivism, which can affect sentencing and bail decisions. They have steered police toward neighborhoods with existing arrest records, perpetuating further disproportionate arrests in Black communities. The list goes on.

Government use of racist algorithms makes the “selling” part especially important. The FTC can’t try the cops, but it might be able to go after a company that misrepresented its tool as race-neutral.

Given the endless churn of stories about the racist results of facial recognition, it could seem that the FTC is equipping itself to practically annihilate the technology. In an email to Gizmodo, an FTC spokesperson said that if a seller “misleads consumers (whether they are businesses or individuals) about (for example) what an algorithm can do, the data it is built from, or the results it can deliver, the FTC may challenge that as a deceptive practice.”

That’s a big deal! Plenty of algorithms that sort through personal data deliver discriminatory results, and companies tend not to admit it. But proving it is hard, because companies also tend to avoid letting us look under the hood, forcing investigative journalists and researchers to piece together clues after the damage is done. (See most of the links above.)

That caginess would likely stall an FTC complaint over an “unfair” practice, since the commission would have to perform the time-consuming chore of producing proof that the algorithm itself directly harms consumers. (In the spokesperson’s example, that it “compromises consumers’ ability to get credit, housing, jobs.”)


In other words, no one knows the full extent of racist algorithms’ damage, and the FTC urges businesses to hold themselves accountable or the FTC “will do it for you,” read: the FTC will come for you, even if you’re a small-potatoes Honda dealership.

Businesses will still lie, the FTC knows, so the announcement also reminds us that the agency filed a complaint against Facebook alleging, among other things, that the company knowingly deceived users about facial recognition. That resulted in a $5 billion settlement, which the FTC celebrated as “history-making” but which Democrats complained was wildly insufficient to make Facebook feel any pain.


On a more hopeful note, the FTC could spread some of the regulatory responsibility around. The spokesperson noted that the Consumer Financial Protection Bureau also enforces the Fair Credit Reporting Act and the Equal Credit Opportunity Act. The Department of Health and Human Services and the Department of Justice, too, could pursue discrimination cases.

Here’s hoping they follow through and drive a hard bargain. People are getting sick and locked up.


If Skynet Takes Over, Try Writing ‘Robot’ on Your Shirt

CLIP identifications before and after attaching a piece of paper that says ‘iPod’ to an apple.
Screenshot: OpenAI

Tricking a terminator into not shooting you might be as simple as wearing a giant sign that says ROBOT, at least until Elon Musk-backed research outfit OpenAI trains its image recognition system not to misidentify things based on some scribbles from a Sharpie.

OpenAI researchers published work last week on the CLIP neural network, their state-of-the-art system for allowing computers to recognize the world around them. Neural networks are machine learning systems that can be trained over time to get better at a certain task using a network of interconnected nodes—in CLIP’s case, identifying objects based on an image—in ways that aren’t always immediately clear to the system’s developers. The research published last week concerns “multimodal neurons,” which exist both in biological systems like the brain and artificial ones like CLIP; they “respond to clusters of abstract concepts centered around a common high-level theme, rather than any specific visual feature.” At the highest levels, CLIP organizes images based around a “loose semantic collection of ideas.”

For example, the OpenAI team wrote, CLIP has a multimodal “Spider-Man” neuron that fires upon seeing an image of a spider, the word “spider,” or an image or drawing of the eponymous superhero. One side effect of multimodal neurons, according to the researchers, is that they can be used to fool CLIP: The research team was able to trick the system into identifying an apple (the fruit) as an iPod (the device made by Apple) just by taping a piece of paper that says “iPod” to it.
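
If you want to poke at this yourself, OpenAI’s open-source clip package (github.com/openai/CLIP) makes the experiment easy to reproduce with a minimal zero-shot classification sketch like the one below. The image file names are placeholders for your own photos of an apple with and without a handwritten “iPod” note, and the candidate labels are mine, not the exact set OpenAI used.

    import torch
    import clip  # pip install git+https://github.com/openai/CLIP.git
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    # Candidate labels for zero-shot classification.
    labels = ["an apple", "an iPod", "a pizza", "a library", "a toaster"]
    text = clip.tokenize(labels).to(device)

    def classify(image_path):
        image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
        with torch.no_grad():
            logits_per_image, _ = model(image, text)
            probs = logits_per_image.softmax(dim=-1).squeeze(0)
        return {label: round(float(p), 3) for label, p in zip(labels, probs)}

    # Placeholder file names: the same apple, without and with a handwritten "iPod" note.
    print(classify("apple.jpg"))
    print(classify("apple_with_ipod_note.jpg"))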


CLIP identifications before and after attaching a piece of paper that says ‘iPod’ to an apple.
Graphic: OpenAI

Moreover, the system was actually more confident it had correctly identified the item in question when that occurred.

The research team referred to the glitch as a “typographic attack” because it would be trivial for anyone aware of the issue to deliberately exploit it:

We believe attacks such as those described above are far from simply an academic concern. By exploiting the model’s ability to read text robustly, we find that even photographs of hand-written text can often fool the model.

[…] We also believe that these attacks may also take a more subtle, less conspicuous form. An image, given to CLIP, is abstracted in many subtle and sophisticated ways, and these abstractions may over-abstract common patterns—oversimplifying and, by virtue of that, overgeneralizing.


This is less a failing of CLIP than an illustration of how complicated the associations it has built up over time are. Per the Guardian, OpenAI’s research indicates that the conceptual models CLIP builds are in many ways similar to the functioning of a human brain.

The researchers anticipated that the apple/iPod confusion was just an obvious example of an issue that could manifest in innumerable other ways in CLIP, as its multimodal neurons “generalize across the literal and the iconic, which may be a double-edged sword.” For example, the system identifies a piggy bank as the combination of the neurons “finance” and “dolls, toys.” The researchers found that CLIP accordingly identifies an image of a standard poodle as a piggy bank when they force the finance neuron to fire by drawing dollar signs on the image.


The research team noted the technique is similar to “adversarial images,” which are images that are created to trick neural networks into seeing something that isn’t there. But it’s overall cheaper to carry out, as all it requires is paper and some way to write on it. (As the Register noted, visual recognition systems are broadly in their infancy and vulnerable to a range of other simple attacks, such as a Tesla autopilot system that McAfee Labs researchers tricked into thinking a 35 mph highway sign was really an 80 mph sign with a few inches of electrical tape.)

CLIP’s associational model, the researchers added, also had the capability to go significantly wrong and generate bigoted or racist conclusions about various types of people:

We have observed, for example, a “Middle East” neuron [1895] with an association with terrorism; and an “immigration” neuron [395] that responds to Latin America. We have even found a neuron that fires for both dark-skinned people and gorillas [1257], mirroring earlier photo tagging incidents in other models we consider unacceptable.


“We believe these investigations of CLIP only scratch the surface in understanding CLIP’s behavior, and we invite the research community to join in improving our understanding of CLIP and models like it,” the researchers wrote.

CLIP isn’t the only project OpenAI has been working on. Its GPT line of text generators has come a long way since 2019, when the lab deemed GPT-2 too dangerous to release in full; the newer GPT-3 is now capable of generating natural-sounding (but not necessarily convincing) fake news articles. In September 2020, Microsoft acquired an exclusive license to put GPT-3 to work.


40 Hours of Training Was All an AI Needed to Shatter the World Record in the World’s Hardest Video Game

Who says shame can’t be an effective motivator? Less than a week after we shared Wesley Liao’s experiments using machine learning to train an AI to play QWOP, one of the hardest video games of all time, the AI was re-trained with the goal of maximizing its speed, resulting in a new world record.

Starting with their previous AI agent, named ACER, which was trained with a focus on optimal running technique and form, Liao trained a new agent with a modified reward system. Previously, behaviors like “low torso height, vertical torso movement, and excessive knee bending” were discouraged to help ACER learn a proper stride technique.


But since the new agent was learning from ACER, which had already mastered its stride, training instead focused solely on rewarding the sprinter’s forward velocity. Aside from a couple of minutes of “pre-training,” the new AI required just 40 hours of training to finally beat the best human QWOP players.
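
Liao’s actual training code isn’t reproduced here, but the change amounts to swapping a form-shaped reward for a speed-only one. Below is a hypothetical Python sketch of that difference; the state fields, thresholds, and penalty weights are invented for illustration, not Liao’s real values.

    from dataclasses import dataclass

    @dataclass
    class RunnerState:
        forward_velocity: float       # meters per second, read from the game
        torso_height: float           # how upright the runner is
        torso_vertical_motion: float  # bobbing up and down
        excess_knee_bend: float       # how far past a "proper" stride the knees bend

    def form_shaped_reward(s: RunnerState) -> float:
        """Roughly how the original ACER agent was scored: speed, minus form penalties."""
        reward = s.forward_velocity
        if s.torso_height < 1.0:
            reward -= 0.5                              # discourage a low torso
        reward -= 0.1 * abs(s.torso_vertical_motion)   # discourage vertical bobbing
        reward -= 0.1 * s.excess_knee_bend             # discourage excessive knee bending
        return reward

    def speed_only_reward(s: RunnerState) -> float:
        """The retrained agent: good form is inherited, so reward forward velocity alone."""
        return s.forward_velocity

    print(speed_only_reward(RunnerState(3.2, 1.1, 0.05, 0.0)))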

Speedrun.com hosts the actively updated leaderboard for the QWOP 100-meter dash, and while the top human player (Japan’s gunmaneko) managed to get their sprinter across the finish line in 48.34 seconds, the best recorded run of Liao’s newly trained AI did it in 47.34 seconds. But don’t expect to see Liao’s name atop the QWOP leaderboard: speedrunning is still a competition for human players only, and the use of software tools, such as an AI, to assist a run is strictly forbidden.

Do we need separate speedrunning leaderboards for AI players? Sure, why not? There’s good reason to keep a cautious eye on the incredible advancements we’ve made with artificial intelligence, but it’s also just plain fascinating to see how quickly an AI can be trained to best a human competitor. Despite being so challenging, QWOP is a very rudimentary video game that hinges on the precise timing of button presses. It would also be interesting to watch an AI tackle something like The Legend of Zelda series, where interactions with other AI-powered characters come into play. In the process, an AI agent like Liao’s may even find shortcuts, techniques, or gameplay strategies that could assist human speedrunners too. In the meantime, can we at least get this QWOP-playing AI a participation trophy?

This Site Spits Out AI-Generated Rejection Emails so You Can Copy and Paste Disappointment


Screenshot: Unfortunately.io/Gizmodo

Crushing a starry-eyed startup’s hopes and dreams can be a pain, but now you can outsource that emotional labor to a heartless AI instead. Because rejection doesn’t have to hurt…err, not you at least, I mean.

That’s the idea behind Unfortunately.io, an online tool that uses artificial intelligence to generate rejection emails. It’s the latest brainchild of Danielle Baskin, a San Francisco-based designer and artist whose works embody that deranged intersection endemic to online humor, where one-off goofs and genuinely fantastic ideas blur into one. Her previous projects include Face ID-compatible face masks, an online graveyard for expired domain names, and Quarantine Chat, a call service co-created with fellow artist Max Hawkins that connects two random strangers over the phone and which made headlines last March as the world descended into coronavirus-related lockdown (and loneliness).

Baskin debuted Unfortunately via Twitter on Friday, explaining that she built the prototype after a conversation with its now-lead investor, Jack Dreifuss, who initially suggested the idea. It’s simple: You just copy and paste whatever rejection email Unfortunately spits out, filling in relevant information such as your name and that of the poor soul whose inbox this polite “no” is destined for, and bam, you’re done.


We took it for a test run, as you can see below. The randomized email we got really softens the blow by pointing downtrodden entrepreneurs toward [insert insightful Medium article here].


Screenshot: Unfortunately.io/Gizmodo

At the moment it can only generate emails for letting down startup ventures—”If you’re an angel investor or a VC—let us handle the heavy work,” reads Unfortunately’s pitch. But the site promises that expanded formats tailored to rejecting candidates and movie/TV pitches are “coming soon.”

The site advertises a paid tier for $25 per month or $149 per year (which we assume is a joke but it’s always so hard to tell on the internet) that will customize the tone of your randomly generated rejection emails with “4 possible emotions,” as shown below, and incorporate OpenAI’s GPT-3 language model for “more nuance and detail.”


Gif: Unfortunately.io/Gizmodo

Unfortunately also encourages visitors to submit their own rejection letter anonymously to be used as part of the site’s dataset. And for anyone out there dealing with a particularly rough rejection right now, there’s the Unfortunately hotline where you can submit an anonymous memo unloading your troubles. “We’re here to listen,” Unfortunately’s site claims, but something tells me you should take that promise with a grain of salt. Just a hunch.


An AI Was Taught to Play the World’s Hardest Video Game and Still Couldn’t Set a New Record

What’s the hardest video game you’ve ever played? If it wasn’t QWOP, then let me tell you right now that you don’t know how truly difficult a game can be. The deceptively simple running game is so challenging to master that even an AI trained using machine learning only mustered a top 10 score instead of shattering the record.

If you’ve never played QWOP before, you owe it to yourself to give it a try and see if you can even get your sprinter off the starting line. Developed by Bennett Foddy back in 2008, QWOP was inspired by an ’80s arcade game called Track & Field that required players to mindlessly mash buttons to win a race. QWOP takes a different approach and instead has players use four keys to control the individual movements of a runner’s thighs and calves—a runner who behaves like a floppy rag doll and is subject to real-world physics, including the effects of gravity. It might sound simple, but mastering the timing and cadence of the key presses needed to get the sprinter to just awkwardly move forward can be incredibly frustrating.


Wesley Liao was curious how well a tool like AI, which has been trained to do things like realistically animate old photos of deceased loved ones, would do playing QWOP. After first creating a JavaScript adapter that would allow an AI tool to actually play and interact with the game, Liao’s first attempt at machine learning simply had the AI playing the game by itself and learning which actions produced positive outcomes (the sprinter moving forward and gaining velocity) and which produced negative ones (the sprinter’s torso bending too close to the ground). Through this approach the AI learned a “knee-scraping” technique that would successfully get it across the 100-meter finish line, but not at record-setting speeds.
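
To make that setup concrete, here is a hypothetical Gym-style sketch of the reward logic described above: forward progress earns reward, and a torso that dips too low is penalized. The FakeGame stub stands in for Liao’s JavaScript adapter, and every name and threshold below is invented for illustration.

    KEYS = ["Q", "W", "O", "P"]  # the four keys controlling thighs and calves

    class FakeGame:
        """Stand-in for the browser adapter, so the sketch runs end to end."""
        def __init__(self):
            self.x, self.torso_height, self.fallen = 0.0, 1.0, False
        def restart(self):
            self.__init__()
        def press(self, key):
            self.x += 0.1  # pretend each key press nudges the runner forward a bit
        def observe(self):
            return {"x_position": self.x, "torso_height": self.torso_height, "fallen": self.fallen}

    class QwopEnv:
        """Gym-style wrapper exposing the reward structure described above."""
        def __init__(self, game):
            self.game = game
            self.last_x = 0.0

        def reset(self):
            self.game.restart()
            self.last_x = 0.0
            return self.game.observe()

        def step(self, action):
            self.game.press(KEYS[action])
            obs = self.game.observe()
            reward = obs["x_position"] - self.last_x   # positive outcome: forward progress
            if obs["torso_height"] < 0.5:
                reward -= 1.0                          # negative outcome: torso too close to the ground
            self.last_x = obs["x_position"]
            done = obs["fallen"] or obs["x_position"] >= 100.0
            return obs, reward, done, {}

    env = QwopEnv(FakeGame())
    obs = env.reset()
    obs, reward, done, _ = env.step(0)  # an RL agent would pick actions to maximize this reward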

Liao’s next attempt at training an AI model involved recording gameplay videos of themselves trying to succeed at the game, including the use of the longer leg strides that are crucial for building speed and crossing the finish line with a decent time. That approach was slightly more successful, but the AI wasn’t able to master a special technique used by advanced QWOP players that involves an upward, forward swing of the legs to generate additional momentum.

Eventually Liao reached out to a veteran player known as Kurodo (@cld_el on Twitter), one of the top QWOP speed runners in the world, who recorded 50 videos of themselves playing the game at an expert level. But even with access to the best possible playing techniques, Liao found the best results came from a machine learning training regimen that involved 25 hours of the AI playing by itself, 15 hours learning from the data gleaned from Kurodo’s expert runs, and another 25 hours of self-play.

But even with all that effort, the QWOP-playing AI’s best 100-meter dash had it crossing the finish line in 1 minute and 8 seconds—a top 10 finish. According to Speedrun.com, the current 100-meter dash world record is a mere 48 seconds, set just a month ago. Liao is confident that with more training and a different reward system (how the AI learns it’s done something correctly), a QWOP world record could eventually fall, although since it’s a computer playing the game, the record may never be officially acknowledged.


Top VFX Company Attempts to Bridge Uncanny Valley, Fails

Gif: Digital Domain/YouTube

Well, this is horrifying.

One of the foremost VFX companies on the planet, Digital Domain—the outfit behind some of the biggest effects in films in recent decades, including Titanic, Avengers: Infinity War, and Deadpool—has unveiled a new “realistic real-time autonomous digital human,” which it calls “Douglas.” Douglas, which will head to market next year, is being framed as a solution for more humanlike and photorealistic virtual assistants and chatbots, a logical next step in the progression of AI technology and its everyday applications.


Based on the likeness of the company’s senior director of software R&D, Doug Roble, Douglas, the so-described “autonomous digital human,” certainly looks like something close to a human form being beamed to you over a poorly connected Zoom call. Douglas uses a mix of proprietary R&D and machine learning to mimic human gestures and responses, engage fully in conversation, and, apparently, remember new people to whom it’s introduced. But it’s once Douglas starts talking that we fully veer into the uncanny valley—a fact that isn’t helped by the company’s description of Douglas as being “chameleon-like in its ability to switch faces.” Shudder.

But please, do not just take my word for it. Behold Douglas for yourself, shown below in a video conference with the real Doug (the bot is the one doing unnatural twitching and word sounds):

Also, the hands.

“Everywhere you look you see virtual assistants, chatbots and other forms of AI-based communication interacting with people,” Darren Hendler, director of Digital Humans Group at Digital Domain, said in a statement. “As companies decide to expand past voice-only interactions, there’s going to be a real need for photorealistic humans that behave in the ways we expect them to. That’s where Douglas comes in.”


Listen, I’m not even especially opposed to an AI future with photorealistic bots with Her-level responsiveness and conversation capabilities. But Douglas walks a thin line between being an exciting technological advancement in photorealism and AI and the hyper-intelligent hellbot from your worst nightmares. Personally, I’m mentally filing this one to my “no, thank you” folder.

Amazon’s Alexa Can Now Ask You Follow-Up Questions


Photo: Grant Hindsley (Getty Images)

Responding to demands from consumers for “an Alexa, but make it talk even more,” Amazon announced on Wednesday that its latest digital assistant model will be able to infer a user’s “latent goals,” and will use those to pose follow-up questions to users.


If you ask your new Alexa how long it takes to cook an over-easy egg, Alexa will tell you that it takes roughly 1 1/2 minutes—and then ask you if you’d like to set a timer.

According to Amazon, the update involves algorithmic tweaks and a deep learning model that will allow the device to evolve based on its relationship to the user. If you’re asking Alexa about how to cook eggs every morning and always opting to set the timer, Alexa’s discovery model will use active learning to improve its predictions and more accurately conclude whether or not you want to know when those 1 1/2 minutes are up.


“Amazon’s goal for Alexa is that customers should find interacting with her as natural as interacting with another human being,” Amazon wrote in a blog post. “While [apps] may experience different results, our early metrics show that latent goal [inference] has increased customer engagement with some developers’ apps.”

Amazon seems to acknowledge that latent goal discovery comes with the potential to be very, very annoying. In early prototypes, when users requested “recipes for chicken,” for example, Alexa would reportedly follow up by asking, “Do you want me to play chicken sounds?”

In order to mitigate the potential for unwanted suggestions, Amazon has implemented a deep-learning-based trigger model that factors in “several aspects of the dialogue context, such as the text of the customer’s current session with Alexa and whether the customer has engaged with Alexa’s multi-skill suggestions in the past.”
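
Amazon hasn’t published that trigger model, but the decision it makes can be illustrated with a toy logistic score over a few dialogue-context features. Everything below (the feature names, weights, and 0.5 threshold) is invented for illustration; the real system is a deep network trained on live interaction data.

    import math

    def trigger_probability(features, weights, bias=-1.0):
        """Logistic score: how likely a follow-up suggestion is to be welcome."""
        z = bias + sum(weights[name] * value for name, value in features.items())
        return 1.0 / (1.0 + math.exp(-z))

    # Invented context features and weights; Amazon's model learns these from real interactions.
    weights = {
        "request_mentions_duration": 1.8,      # e.g. "how long does it take to..."
        "suggestion_topically_related": 2.0,   # a timer is related to a cooking-time question
        "accepted_past_suggestions": 1.2,
        "dismissed_last_suggestion": -2.5,
    }

    context = {
        "request_mentions_duration": 1.0,
        "suggestion_topically_related": 1.0,
        "accepted_past_suggestions": 1.0,
        "dismissed_last_suggestion": 0.0,
    }

    if trigger_probability(context, weights) > 0.5:
        print("By the way, would you like me to set a timer for 1 1/2 minutes?")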

Latent goal inference comes on the heels of Amazon launching Alexa Conversations—a series of deep neural networks aimed at making it easier for developers to integrate a natural conversational experience into custom apps—and is currently available in English in the U.S.


This AI Can Tell if You Have Covid-19 Just by Listening to Your Cough


Photo: Mario Tama (Getty Images)

It feels like whenever I cough these days it triggers a mini-panic attack that I promptly try to quash with a steady stream of chamomile tea. Thankfully, researchers at the Massachusetts Institute of Technology have figured out a way to gauge whether a person has covid-19 just from the sound of their cough, so I may soon get to put my inner hypochondriac to rest.


The tool uses neural networks that can detect the subtle changes in a person’s cough that indicate whether they’re infected, even if they don’t have any other symptoms. Asymptomatic people infected with covid-19 are a vector for the virus that’s particularly tricky to manage, in part because they’re less likely to get tested because, duh, why would they if they’re feeling fine, right? Thus, carriers could infect others without even realizing it.

But even asymptomatic carriers have one tell that shows they’re infected, MIT researchers found. It’s all in the cough.


The difference between a healthy person’s cough and the cough of someone infected with the virus is so slight that it’s imperceptible to the human ear. So the team developed an AI to detect these minute differences using tens of thousands of recorded samples of coughs and spoken words. And it’s been ridiculously accurate in early tests, recognizing 98.5% of coughs from people with confirmed covid-19 cases, and 100% of coughs from asymptomatic people who had nonetheless tested positive.

Here’s how it works. One neural network gauges sounds associated with vocal cord strength, while another detects cues related to a person’s emotional state, such as frustration, which can produce a “flat affect.” A third network listens for subtle changes in lung and respiratory performance. The team then combined all three models and overlaid them with an algorithm to detect muscular degradation.
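
The team hasn’t released code, but the three-branch design described above maps naturally onto a small PyTorch skeleton: three sub-networks each embed the cough recording’s spectrogram for one biomarker, and a shared head fuses them into a single score. The layer sizes and dummy branches below are placeholders, not the MIT team’s actual architecture.

    import torch
    import torch.nn as nn

    class CoughScreener(nn.Module):
        """Three biomarker branches plus a shared head that fuses their embeddings."""
        def __init__(self, vocal_branch, sentiment_branch, respiratory_branch):
            super().__init__()
            self.branches = nn.ModuleList([vocal_branch, sentiment_branch, respiratory_branch])
            self.head = nn.Sequential(nn.Linear(3 * 128, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, spectrogram):
            feats = [branch(spectrogram) for branch in self.branches]  # each: (batch, 128)
            return torch.sigmoid(self.head(torch.cat(feats, dim=-1)))  # covid-likelihood score

    # Dummy stand-in branches keep the example self-contained; the real ones would be
    # pretrained audio networks tuned to vocal cord strength, sentiment, and respiration.
    def dummy_branch():
        return nn.Sequential(nn.Flatten(), nn.LazyLinear(128), nn.ReLU())

    model = CoughScreener(dummy_branch(), dummy_branch(), dummy_branch())
    score = model(torch.randn(1, 1, 64, 100))  # fake (batch, channel, mel bins, frames) spectrogram
    print(float(score))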

Doctors have known for years that a patient’s cough can reveal important clues about their health. Even before pandemic times (God, remember those?), research groups trained AI to detect other diseases like pneumonia and asthma just from the sound.

The research isn’t without its limits, though. The MIT scientists warned that, even with the level of accuracy achieved so far, people shouldn’t use this AI as a substitute for getting tested for covid-19. They also noted that it wasn’t built to diagnose people who are actively exhibiting covid-19 symptoms.


However, the technology could still play a vital role as a screening tool for the virus. The team is reportedly developing a free “user-friendly” app that can be used as a convenient prescreening tool for individuals who aren’t showing any symptoms but worry they might be infected.

Researchers at Carnegie Mellon University have been working on a similar app called the COVID Voice Detector that, as the name implies, would be able to determine whether someone has covid-19 just by the sound of their voice. Pretty soon, you may only have to cough or speak into your phone to figure out if it’s safe to hang out with people. Or maybe not even that:

“Pandemics could be a thing of the past if pre-screening tools are always on in the background and constantly improved,” the researchers wrote, hinting at a kind of biological Minority Report scenario that I have no doubt would be a privacy nightmare.


Yeah, that’s going to be a “no” from me. I might be fine with coughing into my phone, but I already have enough covid-related anxiety without the health police swarming me like those yellow-suited dudes from Monsters, Inc. whenever I cough.

An AI Analysis of 500,000 Studies Shows How We Can End World Hunger

An Indian farmer dries harvested rice from a paddy field in Assam.
Photo: Biju Boro/AFP (Getty Images)

Ending hunger is one of the top priorities of the United Nations this decade. Yet the world appears to be backsliding, with an uptick of 60 million people experiencing hunger in the last five years to an estimated 690 million worldwide.


To help turn this trend around, a team of 70 researchers published a landmark series of eight studies in Nature Food, Nature Plants, and Nature Sustainability on Monday. The scientists turned to machine learning to comb 500,000 studies and white papers chronicling the world’s food system. The results show that there are routes to addressing world hunger this decade, but also that there are huge gaps in knowledge we need to fill to ensure those routes are equitable and don’t destroy the biosphere.

Despite the explosion of research, intractable problems like world hunger remain and are even growing worse in some cases. This is partly because new information is outstripping our ability to actually turn it into knowledge and wisdom. The great acceleration began in the 1700s and has gone into overdrive in the internet era; research shows scientific citations now double roughly every decade, compared with a doubling rate of roughly once a century back in the 1700s. Using machine learning to analyze this rising mountain of information is one key way to make sense of it all.


Researchers with Ceres2030, a group of climate, social, and agricultural scientists and economists, are working to answer the question of how to meet the goal of ending hunger this decade. It’s one of the United Nations’ Sustainable Development Goals, a lofty set of ideals the world has so far failed to make meaningful progress on. To help right the ship, the team at Ceres2030 enlisted artificial intelligence to see what the research shows has actually been effective, a job normally done through literature reviews: a painstaking process that can take months or even years to complete.

But after pulling together a series of mostly off-the-shelf algorithms and training them on what to look for, the team unleashed them on 500,000 pieces of literature about agricultural practices and development interventions meant to improve yields or reduce hunger. It took the machine learning just a week to pare the dataset down to the studies that are actually useful.
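
For a sense of what “mostly off-the-shelf algorithms” can look like in this kind of systematic-review triage, here is a hypothetical scikit-learn sketch: a TF-IDF text classifier trained on a few hand-labeled abstracts, then used to rank the rest of the corpus by predicted relevance. The tiny dataset and labels are placeholders; Ceres2030’s actual pipeline was far more elaborate.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # A handful of hand-labeled abstracts (1 = relevant to hunger/agriculture interventions).
    labeled_abstracts = [
        "Drip irrigation raised smallholder maize yields in semi-arid regions.",
        "Mobile phone weather alerts helped farmers time fertilizer application.",
        "A review of retail mergers and acquisitions in North American markets.",
        "Quarterly earnings analysis for mid-cap software companies.",
    ]
    labels = [1, 1, 0, 0]

    screener = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
    screener.fit(labeled_abstracts, labels)

    # Rank the (placeholder) unlabeled corpus by predicted relevance and keep the top of the pile.
    corpus = [
        "Livestock investment diversified income for water-stressed smallholder farms.",
        "A survey of luxury hotel occupancy rates in coastal resorts.",
    ]
    scores = screener.predict_proba(corpus)[:, 1]
    for score, abstract in sorted(zip(scores, corpus), reverse=True):
        print(round(float(score), 2), abstract)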

Feeding in the data actually revealed a weakness in how research is classified. White papers and policy briefs—or what the scientists call “gray literature”—are often stashed on agency websites built in the dark ages of web development that “lack even basic features to select and download multiple citations,” according to the study. That alone points to the need to clean up how research is published online so that the information coming out is at least accessible, let alone useful.

The results, along with another analysis done by the UN Food and Agricultural Organization and German Center for Development Research, show that the world needs to kick in just $14 billion per year this decade to end hunger, double the current levels. For comparison, $14 billion is roughly 2% of what the U.S. spends on the military every year.


“The world produces enough food to feed everyone. So it’s unacceptable that 690 million people are undernourished, 2 billion don’t have regular access to sufficient amounts of safe, nutritious food, and 3 billion people cannot afford healthy diets,” Maximo Torero, the chief economist at FAO, said in a statement. “If rich countries double their aid commitments and help poor countries to prioritize, properly target and scale up cost effective interventions on agricultural R&D, technology, innovation, education, social protection and on trade facilitation, we can end hunger by 2030.”

The machine learning analysis shows where that money could be targeted to get the most out of aid. For example, the findings show that more than three-quarters of smallholder farms are located in water-scarce areas, which are likely to become even more water-stressed as the planet heats up. To help farmers cope, the machine learning analysis of the literature pointed to the value of investing in livestock and improving access to mobile phone data networks. The former can help improve productivity, while the latter can help farmers get weather forecasts and time fertilizer application between rains to minimize runoff and waste.


Here, however, is where the human touch comes in. The researchers also found that while the machine learning analysis pointed to the benefits of these two interventions as targeted ways to reduce resource overuse and provide a layer of diversity in income, there were gaps. Many of the studies dredged up by artificial intelligence failed to include key variables such as gender and, until the past decade, few looked at the environmental impacts. In a world where women make up 43% of farmers and agricultural laborers, but bear disproportionate burdens when it comes to work and the amount of land they own or work, looking at interventions that can specifically help women is of utmost importance to ending hunger as well as meeting other Sustainable Development Goals like ending poverty (the first goal) and reaching gender equality (the fifth goal).

The analysis also shows that many previous studies have largely focused on crop yields rather than improving human well-being, which is a much more holistic—and I’d argue, more important—metric of success. Few studies have taken nutrition into account, a metric crop yield completely misses, or looked at how to prepare farmers for future climate change. Those areas require more research, and fast, if investments to end hunger are to be spent wisely.


Other groups have also put forward ideas for how to balance well-being and the planet through fixes to our diets, food waste, and the agriculture system, notably last year’s EAT-Lancet report. The results of all this work, and particularly the new machine learning analysis, point to how much work is left to be done and why a technocratic approach alone won’t cut it.