As the United States government actively promotes rapid innovation and adoption of artificial intelligence technologies, a critical question emerges: who will adequately protect consumers from potential harms? A recent analysis published in Lawfare on March 14, 2025, by Kevin Frazier, posits that state attorneys general (AGs) are uniquely positioned to fill this potential void and safeguard the interests of individuals in the burgeoning AI landscape.
Balancing Innovation and Consumer Protection
The article highlights a perceived imbalance in the current national strategy surrounding AI. While significant emphasis is placed on accelerating the adoption and development of AI, driven in part by the Trump administration’s stated goal of achieving “global AI dominance,” the commensurate effort to protect consumers has lagged. This tension between fostering innovation at speed and ensuring that fundamental consumer rights, such as privacy and autonomy, remain intact is a central theme of Frazier’s argument.
The author contends that the focus on rapid technological advancement risks making comprehensive federal consumer protection an afterthought. This scenario, Frazier suggests, necessitates action at other levels of government, specifically by leveraging the existing authority and capabilities of state-level legal offices.
The Strategic Position of State AGs
State attorneys general possess broad mandates to protect the residents of their respective states from illegal and harmful business practices. This wide-ranging authority makes them inherently capable of addressing novel issues arising from advanced technologies like AI. Frazier points out that state AGs have a demonstrable track record of investigating and prosecuting instances of technological misuse, indicating their capacity to adapt to new challenges presented by AI.
Their proximity to local issues and direct responsibility to state constituents further position them as responsive arbiters of consumer welfare. While federal agencies often focus on national policy and enforcement, state AGs can home in on specific harms affecting consumers within their borders, allowing for more targeted and agile responses to AI-related issues.
Leveraging Existing Legal Frameworks: UDAP Statutes
A core tenet of Frazier’s analysis is the power of the existing legal tools available to state AGs. In particular, he identifies state unfair and deceptive acts or practices (UDAP) statutes as especially effective instruments for addressing AI-related consumer harms. These statutes are critical because they are largely technology-agnostic.
Unlike laws written for specific technologies, which can quickly become outdated, UDAP statutes focus on the effect of a practice, namely whether it is unfair or deceptive and causes harm to consumers, regardless of the underlying technology used. This inherent flexibility means that AI applications, even those unforeseen when the statutes were enacted, can be evaluated based on their impact on individuals rather than their technical architecture.
Frazier argues that the application of UDAP laws allows state AGs to prevent practices causing consumer harm without necessarily impeding the broader development or adoption of beneficial AI technologies. The focus remains on the outcome for the consumer, providing a framework to address issues like deceptive AI interfaces, algorithmic bias leading to unfair outcomes, or misuse of personal data, all under the umbrella of established consumer protection principles.
State Action and Future Needs
The article provides specific examples illustrating the potential for state AG action. It cites California AG Rob Bonta on the applicability of UDAP statutes to pressing issues such as exaggerated claims about AI capabilities and digital impersonation. These are tangible problems: AI can be used to create misleading content or to falsely represent individuals or entities, directly undermining consumer trust and potentially causing harm.
Furthermore, Frazier notes that states are already taking action. Texas AG Ken Paxton, for instance, has initiated enforcement actions related to AI use, demonstrating that states are not waiting for federal guidance to address perceived misuses of the technology. These actions underscore the proactive stance state AGs can take using their current authority.
However, the analysis also acknowledges limitations. Frazier points out that while state AGs have the legal mandate, they often require more robust technical support to effectively investigate complex AI systems and practices. Understanding the intricacies of algorithms, data flows, and AI applications is crucial for successful enforcement, so ensuring that AGs have access to the necessary expertise will be a significant factor in their continued effectiveness.
Conclusion: A Crucial Role in the AI Era
Kevin Frazier’s article in Lawfare on March 14, 2025, makes a compelling case for the pivotal role of state attorneys general in protecting consumers as the federal government prioritizes AI innovation. By leveraging their broad mandates and the adaptable framework of existing UDAP statutes, state AGs are well positioned to address the consumer protection challenges posed by rapid AI advancement.
While the federal push for “global AI dominance” continues, the analysis suggests that state-level action provides a necessary safeguard to ensure that the acceleration of AI adoption does not come at the expense of consumer privacy, autonomy, and overall welfare. Supporting state AGs with the required technical expertise will be vital as they navigate this complex and rapidly evolving technological landscape.