Filter bubbles and self-selected bubbles

In 2011, Eli Pariser gave a short TED Talk that introduced many Americans to the problem of filter bubbles: "Beware online 'filter bubbles.'" Pariser explains how social media sites like Facebook, Instagram, and others rely on algorithms to decide what to put in a user's feed. If a user has both conservative and liberal friends in their feed but tends to click a bit more often on the liberal friends' posts, the algorithm recognizes this and will show more and more of the liberal friends' posts, because it is designed to prioritize whatever generates engagement. 
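
To make that mechanism concrete, here is a minimal, purely hypothetical sketch of engagement-based ranking in Python. It is not any platform's actual code; the function name, the data shapes, and the click counts are invented for illustration. The only point is that a small difference in past clicks is enough to reorder the whole feed.

```python
# Hypothetical illustration of engagement-based feed ranking -- not any platform's real algorithm.
# Posts from authors the user has clicked on more often in the past are ranked higher,
# so small differences in past engagement gradually push the other side of the feed out of view.

def rank_feed(posts, click_counts):
    """Sort posts so that authors the user engages with most appear first.

    posts: list of (author, post_text) tuples
    click_counts: dict mapping author -> number of times this user clicked that author's posts
    """
    return sorted(posts, key=lambda post: click_counts.get(post[0], 0), reverse=True)


# A user who clicks liberal friends' posts slightly more often...
posts = [("conservative_friend", "Opinion piece A"), ("liberal_friend", "Opinion piece B")]
click_counts = {"liberal_friend": 12, "conservative_friend": 9}

# ...sees those posts ranked first, which invites more clicks, which widens the gap next time.
print(rank_feed(posts, click_counts))
```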

Pariser noticed this happening in his own feed and began researching the issue because he wanted to see a variety of perspectives and posts. The filter bubble phenomenon is problematic because users are more inclined to click links that confirm their preconceived ideas and beliefs (literally, the things they "like"), which slowly pushes them deeper into either a conservative or a liberal corner. (Psychologists describe this tendency as confirmation bias.)  

Now, we have learned from other studies that many factors shape individuals' political attitudes, as Dr. Richard Fletcher notes in this Reuters Institute article. Fletcher essentially argues that we are not merely hapless victims of technological algorithms; we each choose our own news sources and social media venues. That is, we self-select much of our personalization just as much as algorithms pre-select it for us. Think of the elderly relative who refuses to get their news from any source other than Fox News, or the younger relative who refuses to get their news from any source other than Reddit. 

Misinformation and disinformation, especially from AI

All of the different kinds of filters, whether imposed by humans or by technology, can lead to serious problems with the spread of misinformation or disinformation.

Multiple studies have shown that misinformation and disinformation are significant drivers of the erosion of public trust in our democracy. Too many Americans uncritically click on links and believe whatever is presented as "news" or "facts" without carefully assessing the quality and reliability of the source. 

And with the recent release of artificial intelligence chatbots like ChatGPT, disinformation researchers are sounding the alarm about the serious damage such tools can do to exacerbate the situation.

Students should be very aware that AI chatbots like ChatGPT seem very helpful, but they are also very convincing liars. They do not actually comprehend anything, so they do what researchers call "hallucinate": they make up facts and information. They make up sources. They make up direct quotes. They make up research studies that do not exist. It all looks very legitimate, but it is not. If you ask the chatbot whether it is lying or whether its sources are legitimate, it will assure you that everything is accurate. In late May 2023, an attorney learned this the hard way: he was caught submitting a legal brief full of case citations that ChatGPT had fabricated, and when he argued for leniency before the judge, he explained that he had asked ChatGPT whether the cases were real and it had said yes. 

Thus it is more important than ever to be aware of and resist the filter-bubble phenomenon and misinformation/disinformation, especially as disseminated by AI chatbots, for two major reasons: 

  1.  There is significant global research showing that political polarization often isn't as bad as we imagine.

    You can read about it here: "Political Polarization: Often Not as Bad as We Think," Columbia University Mailman School of Public Health, 2021. This article documents that people tend to "overestimate the negative feelings of their political opponents" and that people's "opinions on less reported issues are often more similar than we think." This matters greatly. 

  2. The best way to push back against all of this is to rely on the most reliable, unbiased, and authoritative sources you have access to. This includes