Recent Press Releases

Thune Questions Industry Experts on Big Tech’s Use of Persuasive Technology and Secret Algorithms

“With some rare exceptions, people aren’t given a choice to easily opt out of a black-box algorithm that secretly selects the content they see.”

December 9, 2021

U.S. Sen. John Thune (R-S.D.), ranking member of the Subcommittee on Communications, Media, and Broadband, today questioned industry experts on technology companies’ use of algorithms and artificial intelligence at a subcommittee hearing titled, “Disrupting Dangerous Algorithms: Addressing the Harms of Persuasive Technology.”

Yesterday, Thune questioned Adam Mosseri, the head of Instagram, about the company’s use of secret algorithms designed to keep users on its platform longer. Thune also pressed Instagram to give consumers more options, and in response, Instagram announced it would provide users with the ability to select a feed displayed in chronological order. At both hearings, Thune expressed deep concern about social media platforms’ use of persuasive technology and their ability to manipulate users’ experience online. Thune highlighted two bills he reintroduced earlier this year, the Platform Accountability and Consumer Transparency (PACT) Act and the Filter Bubble Transparency Act, which would help increase transparency for consumers and accountability for big tech.

Thune recently questioned representatives from TikTok, Snap Inc., and YouTube about social media platforms’ ability to manipulate users’ experience through algorithms. Thune also questioned Facebook whistleblower Frances Haugen about the dangers of big tech’s use of algorithms.

 

Thune’s opening remarks below (as submitted to the hearing record):

 

“Thank you, Chairman Luján.

 

“In 2019, six months before the onset of the COVID-19 pandemic, I served as chairman of this subcommittee, and convened a hearing on the use of algorithms and persuasive technology on internet platforms.  

 

“During that hearing, I noted that former Google Executive Chairman Eric Schmidt once said modern technology platforms “are even more powerful than most people realize, and our future will be profoundly altered by their adoption and successfulness in societies everywhere.”

 

“Now we are near the end of 2021, and it seems like 2019—only two years ago—was a different age.  

 

“Since that time, the pandemic has accelerated the use of technology platforms, and algorithms have become even more ubiquitous in every aspect of the technology we use each day, largely without us even realizing it.  

 

“Indeed, while the pandemic has damaged many sectors of the economy, 2021 has been a year for Big Tech’s record books, even more so than 2020.

 

“Already in June of this year, the Financial Times observed that Apple, Microsoft, Google, Amazon, and Facebook were adding a combined $51 billion of equity value a week.

 

“The current stock market value of these five companies, at over $9 trillion, is more than the value of the next 27 most valuable companies put together.

 

“As Eric Schmidt predicted, these technology companies have grown exponentially more wealthy and powerful, especially over the course of the pandemic. 

 

“So Mr. Chairman, I appreciate you convening today’s hearing to continue the conversation on how these massively wealthy and powerful big tech platforms are using persuasive technology, and how they are profoundly altering our future.

 

“There is no question that internet platforms, and especially social media platforms, have transformed the way we communicate and interact with friends, family, and the public.

 

“So much of the content we see on social media is entertaining, educational, and can be beneficial to the public.

 

“However, I, along with several of my colleagues and the public, have become increasingly concerned about the influence and power of big tech within our society and the potentially damaging effect these platforms can have on individuals.

 

“We’ve heard testimony earlier this year about social media’s damaging effects on consumers.

 

“Hearing after hearing demonstrated that users are not aware of how their data is being used by big tech to affect their behavior and influence certain outcomes.

 

“We know that a major problem with social media platforms, as well as search engines, is their use of secret algorithms – artificial intelligence developed by Silicon Valley software engineers that’s designed to shape and manipulate users’ experiences. 

 

“The powerful AI behind these platforms serves as a prediction engine, creating a unique universe of information for each user, a phenomenon that’s often referred to as the “filter bubble.”

 

“The filter bubble contributes to political polarization and social isolation. 

 

“Perhaps the most important thing to understand is that most users don’t make a conscious decision to enter the filter bubble. 

 

“This can be particularly troubling for younger users.

 

“For example, a recent Wall Street Journal investigation described in detail how TikTok’s algorithm serves up highly inappropriate videos to minors. 

 

“As a general matter, the economic incentives are aligned for big tech to keep the filter bubble in place, without users’ awareness, because platforms only make money by keeping eyeballs on their platforms as long as possible.  

 

“Without congressional intervention, platforms have very little incentive to be more transparent about the existence of the filter bubble.

 

“The days are over when you logged into your favorite social media platform and consumed content that had been posted chronologically since your previous log-in.

 

“Now platforms like Facebook, Instagram, and TikTok – and other social media platforms, as well as search engines – use algorithms to shape your newsfeed and suggest additional, seemingly never-ending content, emphasizing posts the platforms think you’ll be interested in and deemphasizing the ones they want you to scroll past.

 

“And looking to the future, at the cutting edge of the use of algorithms, machine learning, and artificial intelligence on internet platforms, is the idea of the “embodied internet,” or “metaverse,” which Mark Zuckerberg described as “the next evolution of social connection” when he changed the name of his company to Meta this past October.

 

“Algorithms’ amplification or suppression of content leads to growing dissatisfaction and suspicion that bias is built into the algorithms.

 

“Big tech platforms are certainly free to deploy algorithms that select content based on what will keep each user engaged.

 

“But the platforms should not be free to keep their users unaware of the fact that an algorithm is controlling which content each consumer sees on the platform.

 

“With some rare exceptions, people aren’t given a choice to easily opt out of a black-box algorithm that secretly selects the content they see.

 

“We’re learning more and more about what the problem is, and I have offered several proposals aimed at giving the public more transparency into these systems, and more control and accountability for consumers. 

 

“I’ve introduced the bipartisan Filter Bubble Transparency Act, which would give consumers the privacy, choice, and transparency that has been absent on these platforms for too long. 

 

“Specifically, large-scale internet platforms would be required to notify users that their platform uses secret algorithms to select the content they see, what’s often described as the “filter bubble.” 

 

“In addition, users would be given the choice to switch to a different version of the platform that is filter bubble-free.

 

“At the very least, users should have the option to engage on these platforms without being manipulated by secret algorithms.

 

“There’s also a growing bipartisan consensus that we need to shed greater light on the secretive content moderation processes social media companies use.

 

“That’s why I’ve introduced the bipartisan PACT Act, which, among other things, would require internet platforms to publish biannual transparency reports outlining material they’ve removed from their sites or chosen to deemphasize, written in plain language rather than intentionally complicated, hard-to-understand legalese. Sites would also be required to provide consumers with more due process and explanation when content is removed or otherwise moderated.

 

“It's time to make big tech more transparent and accountable, and I look forward to working with my colleagues to get these proposals across the finish line.

 

“Today, each of our witnesses has deep expertise regarding algorithms and artificial intelligence broadly, as well as in the narrower context of engagement, prediction, behavior modification, and persuasion, and brings a valuable perspective on where we are today and what we can expect in the future on these matters.

 

“Your participation in this important hearing is appreciated, particularly as this Committee works to enact meaningful legislation to hold big tech accountable.

 

“Thank you, Mr. Chairman.”