Google tests AI to detect fraudulent phone calls. Privacy advocates are horrified.

Some privacy advocates say they are dismayed by Google’s announcement this week that it is testing a way to scan people’s phone calls in real time for signs of financial fraud.

Google unveiled the idea Tuesday at Google I/O, its conference for software developers. Dave Burke, Google’s vice president of engineering, said the company is testing a feature that uses artificial intelligence to detect fraud-related patterns and then alert Android phone users when suspected fraud is taking place.

Burke described the idea as a security feature and gave an example. On stage, he received a mock phone call from someone posing as a bank, offering to move his savings to a new account to keep them safe. A notification flashed on Burke’s phone: “Probable Scam: Banks will never ask you to move your money to keep it safe,” with an option to end the call.

“Gemini Nano alerts me the second it detects suspicious activity,” Burke said, using the name of a Google-developed AI model. He did not specify what signals the software uses to identify a call as suspicious.

The demonstration drew applause from the in-person audience at the conference in Mountain View, Calif., but some privacy advocates said the idea threatened to open a Pandora’s box as tech companies race against one another to ship AI-enabled features for consumers. In interviews and statements online, they said there are many ways the software could be abused by private surveillance companies, government agents, stalkers or others who might want to eavesdrop on other people’s phone conversations.

Burke said on stage that the feature would not transfer data from phones, providing what he said was a layer of potential protection “so the audio processing remains completely private.”

But privacy advocates said on-device processing could still be vulnerable to intrusion by determined hackers known to break into phones, or to government officials with subpoenas demanding audio files or transcripts.

Burke did not say what kind of security controls Google would have, and Google did not respond to requests for additional information.

“J. Edgar Hoover would be envious,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, an advocacy group based in New York. Hoover, who died in 1972, was FBI director for decades and used extensive wiretapping, including of civil rights activists.

Cahn said the implications of Google’s idea were “terrifying,” especially for vulnerable people such as political dissidents or people seeking abortions.

“The phone calls we make on our devices can be one of the most personal things we do,” he said.

“It’s very easy for advertisers to track every search we do, every URL we click, but what we actually say on our devices, into the microphone, has historically not been monitored,” he said.

It is not clear when or if Google will implement the idea. Burke said on stage that the company will have more to say over the summer. Tech companies often test ideas that they never release to the public.

Google has a wide reach in the mobile phone market because it is behind the most widely used version of the Android mobile operating system. About 43 percent of mobile devices in the U.S. run Android, and about 71 percent of mobile devices worldwide do, according to analyst firm StatCounter.

“Android can help you protect yourself from the bad guys, no matter how they try to get to you,” Burke said.

Meredith Whittaker, a former Google employee, was among those who criticized the fraud-detection idea. Whittaker is now president of the Signal Foundation, a nonprofit group that supports the privacy-focused Signal messaging app.

“It’s incredibly dangerous,” Whittaker wrote on X.

“It’s a short step from detecting ‘scams’ to detecting ‘patterns often associated with seeking reproductive care’ or ‘often associated with providing LGBTQ resources’ or ‘often associated with whistleblowing by technology workers,’” she wrote.

When Google posted about the idea on X, it received hundreds of responses, including many positive ones. Some said the idea was clever, while others said they were tired of frequent phone calls from scammers.

Americans age 60 and older lost $3.4 billion last year to reported digital fraud, according to the FBI.

Tech companies sometimes oppose dragnet-style scanning of people’s data. Last year, Apple rejected a request to scan all cloud-based photos for child sexual abuse material, saying that scanning for one type of content opens the door to “mass surveillance,” Wired magazine reported.

But some tech companies are scanning vast amounts of data for insights related to targeted online advertising. Google scanned the emails of non-paying Gmail users for advertising purposes until it ended the practice in 2017 under criticism from privacy advocates.

Kristian Hammond, a professor of computer science at Northwestern University, said Google’s call-scanning idea was the result of a “feature war” in which the big players in artificial intelligence “are constantly trying to go head-to-head” with the latest feature.

“We have these micro-releases that move quickly. They are not necessary, and they are not focused on consumers,” he said.

He said the advances in AI models are legitimately exciting, but added that it is still too early to know which of the tech companies’ ideas will pan out.

“They haven’t figured out what to do with this technology yet,” he said.