Rules of the Road for Artificial Intelligence

By Sen. John Thune

December 1, 2023

The launch of a new wave of consumer-facing artificial intelligence (AI) applications over the past year has ushered in a renewed focus on AI. Amazon chatbots, Netflix recommendations, and even the directions on our phones all use AI, but recent developments demonstrate that the technology is about to take a giant step forward. While the applications most people have already interacted with are trained to perform narrow tasks, the next generation of AI is designed to produce original content and make complex decisions based on massive amounts of data.

This new technology brings with it seemingly endless possibilities. It promises potentially tremendous advances in medicine, farming, and manufacturing. It can improve everything from national defense to daily life. But, as with any sophisticated technology, this next generation of AI also presents risks. The challenge, then, is how to encourage the promise of AI while ensuring there are basic safeguards in place to minimize potential dangers.

I believe the light-touch approach the United States has taken on internet regulation is a good model to follow for AI. The explosive growth of internet innovation in our country is in large part a result of government not weighing down a new technology with heavy-handed regulation. Leadership in AI will benefit our economy and make America more competitive, so we need to be sure we’re promoting innovation while protecting consumers from the riskiest applications of AI.

To this end, I recently introduced bipartisan legislation that would establish some basic rules of the road for artificial intelligence. Our proposal focuses on two things: transparency for consumers and risk-based oversight of high-impact AI applications. On transparency, our bill would require big internet platforms to clearly inform consumers when the platform is using generative AI to create content. For AI being used to make high-impact decisions – such as those related to health care or critical infrastructure – our bill establishes an oversight framework to ensure it meets certain standards. This bill won’t be the last word on AI, but it’s the right place for Congress to start: preserving space for innovation while protecting against serious dangers and guarding against Washington’s knee-jerk impulse to overregulate.

It’s clear that a race to regulate AI has already begun. President Biden has issued a sweeping executive order that empowers multiple government agencies to regulate AI systems, and the European Union is pressing forward with a heavy-handed regulatory regime. This is the wrong approach. It risks stifling innovation just as it’s getting started, which we cannot afford to let happen. If we fall behind adversarial nations, particularly China, there will be profoundly dangerous implications for our national security and economic prosperity.

Unlike these heavy-handed approaches, the bipartisan bill I’m proposing does not assume the worst about artificial intelligence, and it doesn’t rush into sweeping regulation of all uses of AI. Instead, the bill puts guardrails in place to mitigate dangers on high-risk, high-impact AI applications, while leaving American innovators and entrepreneurs free to move forward. I look forward to continuing to work with my colleagues to get this bill across the finish line, and to ensure the United States is once again the leader in an important new technology.