
Thune Presses Experts on Companies’ Use of Technology and How It Affects What Consumers See Online

“Consumers should have the option to engage with a platform without being manipulated by algorithms powered by their own personal data – especially if those algorithms are opaque to the average user.”

June 25, 2019


 

WASHINGTON — U.S. Sen. John Thune (R-S.D.), chairman of the Commerce Committee’s Subcommittee on Communications, Technology, Innovation, and the Internet, today led a hearing entitled “Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms.” The hearing examined how algorithmic decision-making and machine learning on internet platforms influence the public.

 

During the hearing, Thune pressed witnesses on the ways companies use technology to influence outcomes and on whether algorithmic transparency or algorithmic explanation would be an appropriate policy response.

 

Thune’s opening remarks below (as prepared for delivery):

 

“Good morning. I want to thank everyone for being here today to examine the use of persuasive technologies on internet platforms.

 

“Each of our witnesses today has a great deal of expertise with respect to the use of artificial intelligence and algorithms broadly, as well as in the narrower context of engagement and persuasion, and brings a unique perspective to these matters.

 

“Your participation in this important hearing is appreciated, particularly as this Committee continues its work on crafting data privacy legislation. 

 

“I’ve convened this hearing in part to inform legislation I’m developing that would require internet platforms to give consumers the option to engage with the platform without having the experience shaped by algorithms driven by user-specific data. 

 

“Internet platforms have transformed the way we communicate and interact, and they have made incredibly positive impacts on society in ways too numerous to count. 

 

“The vast majority of content on these platforms is innocuous, and at its best, it is entertaining, educational, and beneficial to the public. 

 

“However, the powerful mechanisms behind these platforms meant to enhance engagement also have the ability – or at least the potential – to influence the thoughts and behaviors of literally billions of people.  

 

“That is one reason why there is widespread unease about the power of these platforms, and why it is important for the public to better understand how these platforms use artificial intelligence and opaque algorithms to draw inferences from the reams of data collected about us, inferences that affect behavior and influence outcomes.

 

“Without safeguards, such as real transparency, there is a risk that some internet platforms will seek to optimize engagement to benefit their own interests, and not necessarily to benefit the consumer’s interest.

 

“In 2013, former Google Executive Chairman Eric Schmidt wrote that modern technology platforms ‘are even more powerful than most people realize, and our future will be profoundly altered by their adoption and successfulness in societies everywhere.’

 

“Since that time, algorithms and artificial intelligence have rapidly become an important part of our lives, largely without us even realizing it.

 

“As online content continues to grow, large technology companies rely increasingly on AI-powered automation to select and display content that will optimize engagement.

 

“Unfortunately, the use of artificial intelligence and algorithms to optimize engagement can have an unintended – and possibly even dangerous – downside.  In April, Bloomberg reported that YouTube has spent years chasing engagement while ignoring internal calls to address toxic videos, such as vaccination conspiracies and disturbing content aimed at children. 

 

“Earlier this month, the New York Times reported that YouTube’s automated recommendation system had been recommending a video of children playing in their backyard pool to users who had watched sexually themed content.

 

“That is truly troubling, and it indicates the real risks in a system that relies on algorithms and artificial intelligence to optimize for engagement. 

 

“And these are not isolated examples.

 

“For instance, some have suggested that the so-called ‘filter bubble’ created by social media platforms like Facebook may contribute to our political polarization by encapsulating users within their own comfort zones or echo chambers.         

 

“Congress has a role to play in ensuring companies have the freedom to innovate, but in a way that keeps consumers’ interests and well-being at the forefront of that progress.

 

“While there must be a healthy dose of personal responsibility when users participate in seemingly free online services, companies should also provide greater transparency about exactly how the content we see is being filtered.

 

“Consumers should have the option to engage with a platform without being manipulated by algorithms powered by their own personal data – especially if those algorithms are opaque to the average user.  

 

“We are convening this hearing in part to examine whether algorithmic explanation and transparency are policy options Congress should be considering.

 

“Ultimately, my hope is that at this hearing today, we are able to better understand how internet platforms use algorithms, artificial intelligence, and machine learning to influence outcomes.  

 

“We have a very distinguished panel before us.

 

“Today, we are joined by Mr. Tristan Harris, co-founder of the Center for Humane Technology; Ms. Maggie Stanphill, director of user experience at Google; Dr. Stephen Wolfram, founder of Wolfram Research; and Ms. Rashida Richardson, director of policy research at the AI Now Institute.

 

“Thank you again for your participation in this hearing on this important topic.

 

“I now recognize Ranking Member Schatz for any opening remarks he may have.”

 

###