Lingo Telecom's use of AI for a misleading political robocall has led to a hefty fine.
According to NBC News, the telecommunications company has agreed to pay a $1 million fine for its role in a deepfake robocall that impersonated President Joe Biden ahead of the New Hampshire Democratic primary.
Lingo Telecom, a voice service provider, distributed artificial intelligence-generated robocalls through "spoofed" phone numbers. The settlement, which includes stricter oversight protocols, marks a significant step in combating malicious deepfakes in political contexts.
Federal authorities plan to announce the settlement on Wednesday, highlighting it as a first-of-its-kind enforcement action against AI-driven impersonations. The case stems from a January incident where New Hampshire voters received fake robocalls featuring an AI-generated voice mimicking President Biden, instructing them not to vote in the Democratic primary.
The Federal Communications Commission (FCC) has taken a strong stance against the misuse of AI technology in political communications. FCC Chairwoman Jessica Rosenworcel emphasized the importance of transparency, especially when AI is involved.
Rosenworcel stated:
Every one of us deserves to know that the voice on the line is exactly who they claim to be. If AI is being used, that should be made clear to any consumer, citizen, and voter who encounters it.
This statement underscores the FCC's commitment to protecting the integrity of political communication and ensuring that deceptive technologies do not mislead voters.
While Lingo Telecom has agreed to the settlement, the individual behind the deepfake robocall faces more severe consequences. Steve Kramer, a longtime political consultant, orchestrated the call while working for a rival campaign.
Kramer is currently facing a $6 million fine from the FCC and 26 criminal counts in New Hampshire related to voter intimidation and impersonating officials. Additionally, he is the subject of a civil lawsuit brought by the League of Women Voters, which has received support from the U.S. Department of Justice.
The Department of Justice's Civil Rights Division has also weighed in. Kristen Clarke, head of the division, commented on the broader implications of such actions:
Voter intimidation, whether carried out in person or by way of deepfake robocalls, online disinformation campaigns, or other AI-fueled tactics, can stand as a real barrier for voters seeking to exercise their voice in our democracy.
New Hampshire Attorney General John Formella emphasized the significance of the FCC's action, saying it sends a strong message that election interference and deceptive technology will not be tolerated. This case may set a precedent for how similar incidents are handled in the future.
The incident has also sparked discussions about the need for clearer regulations surrounding the use of AI in political advertising and communication. As the technology continues to evolve, policymakers may need to consider new legislation to address the unique challenges posed by deepfakes and other AI-generated content.
The $1 million settlement with Lingo Telecom marks a pivotal moment in the fight against political deepfakes. By moving swiftly against both the company that distributed the calls and the individual behind their creation, federal authorities have demonstrated their commitment to protecting the integrity of elections. As AI technology continues to advance, this case may shape future policies and enforcement strategies to combat deceptive political practices.