WASHINGTON — U.S. Sen. John Thune (R-S.D.) today spoke on the Senate floor about the Artificial Intelligence (AI) Research, Innovation, and Accountability Act of 2023, his bipartisan bill that would bolster the United States’ leadership and innovation in AI, improve transparency for consumers, and create common-sense safety and security guardrails for the highest-risk AI applications.
Thune’s remarks below (as prepared for delivery):
“Mr. President, artificial intelligence – or AI – has been with us for quite some time now.
“And whether it’s the chatbot providing help on Amazon, or personalized recommendations on Netflix, or the algorithms curating your social media feeds, these days most of us interact with artificial intelligence on a daily basis.
“But as the release of ChatGPT to the public last year demonstrated, artificial intelligence is about to take a giant step forward.
“The AI applications I’ve mentioned, like chatbots and personalized recommendations, are examples of so-called narrow AI – AI trained to perform specific tasks.
“But ChatGPT is an example of the next generation of AI – artificial intelligence systems set up to imitate the human brain and produce original content based on the assimilation of vast sets of data.
“Mr. President, this next generation of AI – so-called foundation models, which underpin systems like ChatGPT – offers tremendous possibilities.
“Advances in medicine. In farming. In manufacturing.
“The automation of routine tasks.
“New ways to manage infrastructure.
“Better and more resilient supply chains.
“Advances in national defense.
“The list goes on.
“But as with any sophisticated technology, this next generation of AI presents risks as well.
“And those risks are heightened by the enormous capabilities of AI and the potential for this technology to pervade every corner of our society.
“And our goal needs to be encouraging the promise of AI while putting safeguards in place to minimize potential dangers.
“The light-touch approach the United States has taken to internet regulation is a good model to follow as we approach AI regulation.
“The explosive growth of internet innovation in the United States is in large part a result of the fact that the government has not weighed down this sector of the economy with heavy-handed regulation.
“And we should maintain a similarly light touch when it comes to AI to encourage innovation and keep the United States at the forefront of the next generation of artificial intelligence.
“Leadership in AI will benefit our economy.
“And there are also serious security reasons why staying at the forefront of the AI revolution is important.
“There is no question that AI will come to play an important role in national defense, and falling behind adversaries – like the Chinese Communist Party – in this area could put our country at a serious disadvantage when it comes to our national security.
“So we need to start establishing some basic rules of the road that will allow AI innovation to flourish while at the same time minimizing the dangers it presents.
“Mr. President, the race to regulate AI has already started.
“President Biden has released a sweeping executive order that empowers multiple government agencies and departments to regulate all AI systems – even the algorithms that recommend our next movie on Netflix or remind us that we need to order more paper towels.
“And internationally, the European Union has continued to press forward with a heavy-handed regulatory regime.
“It’s time for Congress to step in and ensure that innovation in the United States continues.
“Regulating AI by executive order is not the way to go about things.
“Even if the president’s executive order on AI weren’t overly broad and heavy-handed, executive orders are by their very nature not permanent: they can be reversed or amended at any time – and stand a good chance of being so when a new administration comes into office.
“This creates uncertainty for companies, which can stunt expansion and innovation.
“The right way to approach AI regulation is to pursue a bipartisan, nationwide approach in Congress that will protect innovation while putting in place the necessary safeguards for the riskiest applications of this technology.
“To that end, shortly before Thanksgiving I introduced bipartisan AI legislation with Senator Klobuchar and several of our Commerce Committee colleagues from both parties.
“Our bill is intended to establish some basic rules of the road for artificial intelligence while protecting the ability of companies to innovate and advance this technology.
“Our bill focuses on two things – transparency for consumers, and a tiered, risk-based framework for oversight of the highest-impact applications of AI.
“On the transparency front, our bill would require any large-scale internet platform that uses generative AI to create content to clearly inform consumers of that fact.
“One of the risks presented by generative AI is the difficulty of distinguishing AI-produced content from human-produced content.
“That may not be a huge issue if the content we’re talking about is an amusing meme, but it’s a real issue if a consumer is trying to figure out whether information or an image is real or whether it’s been generated by AI.
“So requiring transparency about whether content is being produced – or partially produced – by generative AI needs to be a priority.
“The second part of our bill deals with high-impact and critical-impact AI – that is, AI applications used to make significant decisions in particularly high-risk sectors.
“Our bill establishes a two-tiered system for overseeing these applications.
“Critical-impact AI applications – like those used to make significant decisions in the operation of critical infrastructure – would be required to self-certify compliance with testing, evaluation, validation, and verification standards.
“High-impact AI applications would be subject solely to transparency reporting requirements.
“Importantly, this part of the bill is carefully tailored to apply only to AI applications making complex decisions in high-risk sectors, and is meant to respond directly and narrowly to the recent leap in the capabilities of the foundation models that power those applications.
“Mr. President, I believe that the bill Senator Klobuchar and I have introduced is the right first step when it comes to AI technology.
“Unlike the White House’s executive order, our bill doesn’t start from the assumption that artificial intelligence technology is bad and should be subject to heavy-handed government intervention.
“Nor does our legislation rush us into regulations before we have a clear idea of what aspects of this technology need to be regulated and in what way.
“Instead, our bill puts in place guardrails to mitigate the dangers posed by the highest-impact AI applications, while leaving American innovators and entrepreneurs free to move forward with innovation.
“I am grateful to Senator Klobuchar and our other co-sponsors for working with me on this bill, and we will continue to welcome ideas to further improve our legislation.
“Legislation on an issue of this magnitude calls for the deliberation of the committee process and regular order consideration, and I will work to ensure that we take it up in the Commerce Committee in the coming months.
“This bill will not be the last bill that Congress needs to consider when it comes to AI, but I believe it is the right place for us to begin.
“And I look forward to working with colleagues from both parties to get this bill through Congress and across the finish line.
“Mr. President, I yield the floor.”