As Artificial Intelligence (AI) continues to transform our technological and social landscape, the first steps toward a regulatory framework are getting underway. This week, OpenAI CEO Sam Altman is set to make his public debut before Congress. Predictably, last month’s Open Letter didn’t result in a temporary pause in AI development. Now the debate over regulation moves to Congress in a critical push to establish rules for the use and development of AI technologies.
Where this ultimately lands is anybody’s guess. We don’t have many precedents for regulating digital technology in the United States. However, we have a long history of regulatory actions for food, medicine, the transportation industry, and many other areas.
A Call for Understanding AI and Its Risks
Altman’s appearance before the Senate Judiciary subcommittee on AI oversight follows the meteoric rise of ChatGPT, the accessible generative AI chatbot OpenAI released at the end of November 2022. ChatGPT’s popularity has surged, with a staggering 1.76 billion visits recorded in April alone. The numbers underscore the profound influence and potential of AI technology.
To put those numbers in perspective, ChatGPT still has a long way to go to catch up to Google. The search giant generated approximately 83.9 billion visits in April 2023 (in contrast to Microsoft’s Bing, which had 2.12 billion visits). But those are still remarkable numbers, given that no one outside of AI researchers had access to it six months ago.
Joining Altman at the Congressional hearing will be Christina Montgomery, IBM’s Vice President and Chief Privacy and Trust Officer, and Gary Marcus, Professor Emeritus at New York University. Their aim is to help legislators understand the risks associated with generative AI and explore potential mitigation strategies. Marcus, in particular, will underscore the critical need for independent scientists to be involved in these discussions.
In a TED Talk posted last week, Gary Marcus shares his thoughts on the urgent need to build reliable AI systems and his call for a global, nonprofit organization to regulate the technology for the sake of democracy.
Putting aside for a moment the challenge of creating a new global regulatory body for AI technology, it’s hard to imagine Congress working through these issues. The average age in the House of Representatives is 58.4 years, and in the Senate, 64.3 years. Many of them still seem puzzled by the workings of a smartphone, much less something like generative AI. Altman, Montgomery, and Marcus are going to need a lot of patience this week.
Will Congress Take a Bipartisan Approach?
AI regulation may be one of the few topics before Congress that is not a partisan issue. Both Democrats and Republicans seem eager to understand and address the implications of AI for society. Senate Majority Leader Chuck Schumer (D-N.Y.) has proposed a framework for AI regulation that promotes transparency and accountability. As The Hill noted,
Sen. Richard Blumenthal (D-Conn.), chairman of the Senate Judiciary subpanel holding the hearing, said in a statement that AI ‘urgently needs rules and safeguards to address its immense promise and pitfalls.’
Sen. Josh Hawley (R-Mo.), ranking member of the subcommittee, said in a statement that AI will be ‘transformative in ways we can’t even imagine,’ with implications for elections, jobs, and security. He called the hearing a ‘critical first step towards understanding what Congress should do.’
We rarely find Blumenthal and Hawley on the same side of any issue, so enjoy it while it lasts.
The Industry’s Role in Setting the Rules
Despite the urgent call for regulation, most of the government’s guidance so far seems designed to produce voluntary guidelines. Vice President Harris is engaging with leaders of major AI-developing companies, discussing potential risks and strategies, though the conversations don’t seem to be leading anywhere. This leaves the industry with the responsibility of establishing its own rules. It highlights the pressing need for collaboration between the government and the AI industry, and the delicate balance between putting necessary safeguards in place and not stifling innovation.
Can The Upcoming Hearing Generate Results?
In the past, Sam Altman has taken a stance similar to Google’s Sundar Pichai, arguing that AI needs some form of regulation. Montgomery’s insights from the industry perspective will undoubtedly play a crucial role in shaping this discussion.
UPDATE May 16, 2023: Here is Sam Altman’s full testimony before the Senate subcommittee:
Voices of Concern in the AI Community
We’ve seen growing concerns about the rapid development of AI in recent months. High-profile figures such as Elon Musk have advocated for a pause in AI development, recognizing the potential risks of unchecked advancements.
In a surprising move, Geoffrey Hinton, often called the “godfather of AI,” has resigned from his position at Google. Hinton’s decision allows him to speak more freely about his concerns over the rapid evolution of AI and the need for effective regulatory measures.
The Path Ahead For Congress And The Tech Industry
The pace of AI development is breathtaking, and the concerns voiced by leaders like Musk and Hinton underscore the importance of striking a balance between innovation and safety. Along with the Congressional hearings this week, Europe is moving rapidly toward a regulatory framework. In fact, the EU Parliament will pass legislation long before the U.S. Congress does. They’ve been working on this for years (I worked with them back in 2018) and already have legislation on the books regulating technology.
Will we see some form of collaboration between the AI community and lawmakers that results in legislation regulating AI? We’re not placing any bets on it, but we’ll keep you posted as these developments unfold. Either way, the push for regulation will dramatically shape the future of AI.
Emory Craig is a writer, speaker, and consultant specializing in virtual reality (VR) and generative AI. With a rich background in art, new media, and higher education, he is a sought-after speaker at international conferences. Emory shares unique insights on innovation and collaborates with universities, nonprofits, businesses, and international organizations to develop transformative initiatives in XR, GenAI, and digital ethics. Passionate about harnessing the potential of cutting-edge technologies, he explores the ethical ramifications of blending the real with the virtual, sparking meaningful conversations about the future of human experience in an increasingly interconnected world.