FCC Outlaws Use of AI-Generated Voices in Robocalls

The Federal Communications Commission (FCC) has made it illegal to use artificial intelligence (AI)-generated voices in robocalls.

This ruling will give state attorneys general another tool to use against voice cloning scams, as they can prosecute bad actors for not only the scam but also for using AI to generate the voice in the robocall, the FCC said in a Thursday (Feb. 8) press release.

“We’re putting the fraudsters behind these robocalls on notice,” FCC Chairwoman Jessica Rosenworcel said in the release. “State Attorneys General will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.”

The FCC made the use of voice cloning technology in robocalls illegal by voting Thursday to recognize AI-generated voices as “artificial” under the Telephone Consumer Protection Act (TCPA), according to the release. The agency adopted this ruling unanimously.

The ruling ensures that telemarketers using AI-generated voices in calls are held to the same standards as those making other robocalls, meaning that they must obtain consumers’ prior express written consent before robocalling them, the release said.

Under the TCPA, the FCC can fine robocallers and can block calls from telephone carriers that facilitate these calls, per the release. The TCPA also allows consumers and organizations to bring lawsuits against robocallers. Plus, state attorneys general may have their own enforcement tools that are tied to how robocalls are defined under the TCPA.

The FCC’s ruling comes at a time when a growing number of calls are using AI-generated voices to mimic voices that are familiar to members of the public, such as family members, celebrities and political candidates, according to the release.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters,” Rosenworcel said in the release.

AI tools like voice cloning have acted as a steroid for cybercriminals, because these technologies reduce the effort required to manipulate targets, PYMNTS reported in January.

The AI-granted ability to clone voices from just snippets of audio has democratized access to cybercrimes that were previously the province of only the most sophisticated bad actors.

In July, a group of U.S. senators called on federal regulators to protect consumers from what they saw as a rash of AI-powered scams. They said in a published letter that voice cloning “adds a new, threatening dimension to these scams.”