On May 16, 2023, the U.S. Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law held a significant oversight hearing on the regulation of artificial intelligence (AI), focusing on newer generative AI models that create new content from large training datasets. Sen. Richard Blumenthal, who chaired the hearing, opened with an audio clone of his own voice delivering remarks written by ChatGPT, demonstrating the ominous risks of social engineering and identity theft. Notable witnesses included Samuel Altman, CEO of OpenAI; Christina Montgomery, chief privacy and trust officer at IBM; and Gary Marcus, professor emeritus of psychology and neural science at New York University. Each advocated for the regulation of AI in different ways.

Altman advocated for establishing a new federal agency responsible for licensing AI models against specific safety standards and monitoring certain AI capabilities. He emphasized the need for global AI standards and controls and described the safeguards OpenAI implements in the design, development, and deployment of ChatGPT. He explained that ChatGPT undergoes independent audits before deployment, along with ongoing safety monitoring and testing once in use. He also noted that OpenAI forgoes the use of personal data in ChatGPT to reduce privacy risks, while observing that how end users deploy an AI product shapes the risks and challenges it presents.

Montgomery of IBM supported a “precision regulation” approach that focuses on specific use cases and their attendant risks rather than broadly regulating the technology itself, citing the proposed EU AI Act as an example. While she highlighted the need for clear regulatory guidance for AI developers, she stopped short of advocating for a new federal agency or commission. Instead, she described AI licensure as obtaining a “license from society” and stressed the importance of transparency so that users know when they are interacting with AI, though she noted that IBM’s models are largely business-to-business rather than consumer-facing. She advocated a “reasonable care” standard for holding AI systems accountable. Montgomery also described IBM’s internal governance framework, which includes a lead AI officer and an ethics board, as well as impact assessments, transparency about data sources, and user notification when interacting with AI.

Marcus argued that the current court system is insufficient for regulating AI and called for new laws governing AI technologies along with strong agency oversight. He proposed an agency similar to the Food and Drug Administration (FDA) with authority to monitor AI and conduct safety reviews, including the power to recall AI products. Marcus also recommended increased funding for AI safety research over both the short and long term.

The senators seemed poised to regulate AI in this Congress, whether through an agency or via the courts, and expressed bipartisan concern that certain deployments and uses of AI pose dangers significant enough to require intervention. They also underscored the importance of technology and organizational governance rules, recommending that the United States take cues from the EU AI Act by assuming a strong leadership position and adopting a risk-based approach. During the hearing, there were suggestions to incorporate constitutional AI, which builds values into models up front rather than relying solely on training them to avoid harmful content.

The senators debated the necessity of a comprehensive national privacy law to provide essential data protections for AI, with proponents of such a bill on both sides of the aisle. They also discussed the potential regulation of social media platforms that currently enjoy immunity under Section 230 of the Communications Decency Act of 1996, specifically addressing harms to children. The United States finds itself at a critical juncture where the evolution of technology has outpaced the development of both regulatory frameworks and case law. As Congress grapples with addressing the risks and ensuring the trustworthiness of AI, technology companies and AI users are taking the initiative to establish internal ethical principles and standards governing the creation and deployment of artificial and augmented intelligence technologies. These internal guidelines serve as a compass for organizational conduct, mitigating the potential for legal repercussions and safeguarding against reputational harm in the absence of clear legal standards.