By Laura Stefani, Michael A. Munoz & Ellis McKennie
Robocalls may have always had an artificial flavor to them, but the growing use of artificial intelligence (AI) continues to blur the line between human and machine interaction. On July 17, the Federal Communications Commission (FCC) issued a draft Notice of Proposed Rulemaking (NPRM) addressing how the Telephone Consumer Protection Act (TCPA) can be used to restrict and regulate robocalls made with AI. The NPRM will be finalized and adopted at the agency’s August 7 meeting and may be modified before then based on feedback from interested parties.
The draft NPRM comes after the FCC invited and received comments on the subject in November 2023. Specifically, the agency sought comments on “how AI technologies can be defined in the context of robocalls and robotexts” and what steps it should take to carry out its statutory obligations under the TCPA. Subsequently, as we’ve reported, the FCC took enforcement action against unlawful AI robocalls in response to increased election-year calling activity.
The FCC bases its authority to regulate AI robocalls on the plain language of the TCPA, which permits the agency to “prescribe technical and procedural standards for systems that are used to transmit any artificial or prerecorded voice message via telephone.” The FCC relies on this language, as well as the statute’s legislative history, to support its position that it has flexibility in regulating future calling technologies.
The NPRM defines an “AI-generated call” as:
[A] call that uses any technology or tool to artificially generate a voice or text using computational technology or other machine learning, including predictive algorithms, and large language models, to process natural language and produce voice or text content to communicate with a called party over an outbound telephone call.
This definition of “AI-generated calls” is critical in light of the struggle to formulate a uniform definition of AI technology. The FCC believes this definition is consistent with other state and federal definitions of AI and is tailored to reflect privacy protections under the TCPA.
The NPRM also would require callers to clearly and conspicuously disclose their use of AI in robocalls and robotexts when seeking consumers’ consent. This is in addition to the existing TCPA disclosure requirements for robocalls.
The NPRM proposes requiring a disclosure during a consumer interaction as well. The FCC “propose[s] requiring callers using AI-generated voice to, at the beginning of each call, clearly disclose to the called party that the call is using AI-generated technology.” This would create a dual disclosure system for AI-generated calls, first at the consent stage and then again during the call or text.
The FCC has followed the lead of other agencies, including the Federal Trade Commission (FTC), in its proposed regulation of AI technology. Given how quickly the technology is developing, it remains to be seen whether the proposed rule will have unintended consequences down the road, an issue interested parties should raise with the agency during the comment period.
Other important issues include the informed consent requirement, how to minimize the burdens of disclosure, the form of the disclosure at the beginning of calls (voice, special tone, or icon), and opt-out provisions. The FCC will seek comments on the NPRM for 30 days after its publication in the Federal Register.
In the meantime, the FCC has also initiated a separate proceeding, which will also be voted on at the August meeting, intended to improve the agency’s Robocall Mitigation Database.
For more insights into advertising law, bookmark our All About Advertising Law blog and subscribe to our monthly newsletter. To learn more about Venable’s Telecommunications services, click here or contact the lead author.