- The Federal Communications Commission (FCC) on Thursday ruled that robocalls using voice-cloning technology are now illegal, effective immediately. The decision comes as the use of artificial intelligence (AI) to clone people's voices, particularly famous people and politicians, continues to rise.1
- The unanimous ruling cracks down on AI-generated voice cloning using the Telephone Consumer Protection Act of 1991, which restricts junk calls that use artificial and prerecorded voice messages. Under the new guidelines, the FCC has the power to fine companies that use AI voices in their calls or block the service providers that carry them.2
- Regulators heightened their focus on AI robocalls after a January message using the voice of President Joe Biden encouraged New Hampshire voters to skip the state's presidential primary. The New Hampshire Attorney General's Office is currently investigating the incident as an attempted act of voter suppression.3
- New Hampshire Attorney General John Formella revealed that two Texas companies were behind the fraudulent calls, and he warned of potential civil and criminal action at both the state and federal levels. Meanwhile, the FCC announcement said that "bad actors" who violate the law could be forced to pay more than $23K per illegal call.4
- The FCC in November launched a notice of inquiry to build a record on how to combat AI-generated calls, and in December, the agency proposed a nearly $300M fine for two men behind the notorious scam calls claiming, "we've been trying to reach you about your car's extended warranty." Thursday's ruling builds upon existing rules that enable state law enforcement to punish scammers.5
- Existing law already prohibited telemarketers from using automated dialers or artificial or prerecorded voice messages to call cell phones, and outlawed such calls to landlines without prior written consent from recipients. AI-generated calls are now classified as "artificial" under the law and are punishable by the same standards.2
- Narrative A, as provided by Daily Dot. The FCC has taken staunch and swift action against AI robocalls in a move that goes a long way toward protecting election integrity. The rise of AI has fueled deceptive deepfake videos of politicians, and just weeks ago, the technology was used to interfere with New Hampshire's Democratic primary. By clarifying and expanding on existing laws, the FCC is making it clear that bad actors will not be allowed to use novel technology to subvert democracy.
- Narrative B, as provided by The Conversation. The threat AI poses to democracy is real and imminent, and Thursday's ruling from the FCC may not be enough to prevent AI-generated disinformation from affecting elections. As the technology matures, videos and calls imitating politicians will only become more convincing and powerful. Cracking down on robocalls was a necessary step, but the FCC needs to implement more robust rules that target all forms of AI-generated misinformation. If regulation doesn't go further soon, faith in democracy could continue to erode.