Would deleting social media establish peace among adversaries?

by Tyler Sarkisian

As early as Genesis 3:15, disagreement and the argumentation that follows it have been constants in human society. The belief that the world could ever achieve total agreement is deeply flawed; the closest approximations would demand stringent social control reminiscent of George Orwell's 1984, the kind of enforced conformity William H. Whyte later termed "groupthink," in which the pressure toward collective consensus produces irrational decision-making. Given the existence of social media, a world of uniformly accepted logic, free of outlandish or radical opinions, does not appear to be something humans will ever arrive at. Is it possible to promote more positive discourse through modern internet media, or is everyone who argues online doomed to perpetual hostile altercations? Assuming most users would rather improve the quality of their internet use than simply power down, how can individuals escape echo chambers, better understand and articulate their beliefs, and avoid false rationalization, not just on the internet but in everyday life?

How does an individual recognize and escape an echo chamber?

Before plotting an escape, let's start by defining an echo chamber and explaining why people are so susceptible to them. In his book Breaking the Social Media Prism, Chris Bail frequently refers to echo chambers, a concept introduced by political scientist V.O. Key in the 1960s, in the context of polarization on social media platforms. Key's original use was "to describe how repeated exposure to a single media source shaped how people vote…" (Bail). It is evident how much the echo chamber phenomenon has contributed to polarization between the political right and left, and with the many technological advances since the 1960s, echo chambers have only grown more prevalent as social media use has increased. Most people would agree that consuming media from a single source invites a skewed viewpoint. The key difference between broadcast news and social media is the far greater ability individuals have to filter the content they subscribe to. As Bail puts it, "the problem is that most people seek out information that reinforces their pre-existing views," and consequently "the more we are exposed to information from our side, the more we think our system of beliefs is just, rational, and truthful" (Bail). Many infer a potential solution: make more of an effort to seek out different perspectives, which should in turn produce a more centrist viewpoint. To test this theory of escaping filter bubbles, Bail and his colleagues conducted a study in which participants, identified beforehand as right- or left-leaning, received twenty-four tweets per day for one month from a Twitter bot sharing information and viewpoints from outside their individual echo chambers. Rather than becoming more moderate as expected, participants demonstrated the opposite: conservatives became more conservative, and liberals more liberal.

Now that it is clear why the quick fix does not work, one might ask: if mere exposure to differences is not enough to escape an echo chamber, what actually changes people's minds? To borrow from Julia Galef's The Scout Mindset, there are three reasons "Why Some People See Things Clearly and Others Don't" when they argue:
“1. We misunderstand each other’s views…
2. Bad arguments inoculate us against good arguments…
3. Our beliefs are interdependent—changing one requires changing others” (Galef).
It is remarkably rare for a single casual conversation to produce an earth-shattering change in attitude. What is far more common is that, over time, an individual begins to see cracks in the foundation of their beliefs. These persistent challenges to their interwoven beliefs typically propel them into a "state of uncertainty" (Galef). What pushes them over the edge? Generally it is not one particular event but, to use language from Thomas Kuhn's The Structure of Scientific Revolutions, the recurring presence of anomalies in a previously accepted paradigm, which causes a gradual shift toward an updated set of beliefs. There is certainly no shortage of information on the internet that challenges one's beliefs, but the language and conditions required for a paradigm shift are harder to come by.

To optimize disagreements for better outcomes, Galef offers the strategy of "listening to people you find reasonable" (Galef). One of the most intriguing examples she cites is the exchange between Jerry Taylor, an employee of a libertarian think tank, and Bob Litterman, a risk expert who runs his own investment advisory firm and previously worked for the famous investment bank Goldman Sachs. After discovering that scientists he had relied on misrepresented climate data, Taylor retreated from his former climate-skeptic position and entered a "state of uncertainty." In his argument, Litterman portrayed climate change as a "nondiversifiable risk" (Galef) that cannot be hedged against, making it logical to invest heavily in avoiding the worst outcomes altogether. After their conversation, Taylor not only became less of a skeptic; he became an activist. Most interesting is the reason he cites for such a productive interaction: "even though Litterman was on the other side of the issue, he was nevertheless someone with 'instant credibility with people like me… He is from Wall Street. He is kind of a soft libertarian'" (Galef). Had Litterman not held some of those identities and ideologies (Wall Street, soft libertarian), would Taylor have been as willing to listen? This leads to our next question, about persuasion and belief.

Should a person believe to understand or vice versa?

To build on the preceding exploration of echo chambers and filter bubbles, it helps to ask which comes first. Bail frames this as a "chicken or the egg problem," asking "…do our social media networks shape our political beliefs, or do our political beliefs determine who we connect with in the first place?" (Bail). At a higher level, it is human nature to lean heavily on one's beliefs. When anomalies arise, as they did for Taylor, does the resulting doubt undermine one's ability to rely on those beliefs? In The Gospel in a Pluralist Society, Lesslie Newbigin views the relationship between belief and doubt as somewhat sequential: "…while both believing and doubting have a necessary place in the whole enterprise of knowing, believing is primary and doubting is secondary" (Newbigin). Essentially, establishing a belief is what opens the way to deeper knowledge. He points to science, which rests on beliefs that cannot themselves be proven, such as the rationality of the universe, as an example of initial belief unlocking further knowledge. Conversely, any understanding that seems to precede belief must in fact arrive as an update to some earlier belief, however minuscule. Newbigin explains the logic as follows: "we have no other way of starting except by accepting a-critically the evidence of our senses and the guidance of the tradition represented by teachers and textbooks." The implication is that to continue learning, one must accept some stance, even if only temporarily. Then, through further experience, an individual can refine that framework, confirming the initial belief or modifying it to align with new information.

Diving deeper, how is it that people like Taylor make drastic changes to their ideologies through optimal disagreements, while others, like the participants in the Twitter bot experiment, become more polarized after exposure to contrasting viewpoints? The modes of persuasion set out by Aristotle in The Art of Rhetoric offer an ideal framework for explaining this outcome. Aristotle defines three methods of persuasion:

  • Logos – appealing to reason, or logic
  • Pathos – appealing to emotions
  • Ethos – appealing to trust and authority

To persuade someone effectively, combining two or even all three methods is optimal. In the exchange between Jerry Taylor and Bob Litterman, Litterman exemplified logos with his explanation of "nondiversifiable risk," and ethos through his tenure at a well-respected Wall Street firm and his soft-libertarian leanings, which Taylor likely shared. Taylor's willingness to listen to someone with differing beliefs in order to improve his own roadmap is not shared by most members of society. It would be remiss not to point out the pitfalls most people encounter when they try to argue on the internet with those who are not as receptive as Taylor. This leads to our final question, about being consumed by a single identity or strongly held belief.

Does consciously avoiding labels/identities aid in combatting rationalization?

To shed more light on rationalization, and why it serves people poorly in certain situations, we first need to define the concept it falls under: directionally motivated reasoning. Galef defines this as reasoning in which "our unconscious motives affect the conclusions we draw" (Galef). A prime illustration is asking questions like "can I believe this?" or "must I believe this?" rather than the question of accuracy-motivated reasoning: "is this true?" Rationalization falls into the directionally motivated bucket along with self-deception and wishful thinking, all of which a person is more likely to employ when a belief they hold to be true is challenged.

Circling back to Galef's obstacles to clarity, consider the third: "Our beliefs are interdependent—changing one requires changing others…" (Galef). Becoming too deeply rooted in an identity compounds this problem, inviting further rationalization and self-deception. Consider the age-old divide between Democrats and Republicans who align more with the party name than with any specific issue. Research has repeatedly suggested that human nature carries deeply rooted tribalistic tendencies, and social media further enables people to present themselves differently in order to "fit in" with whatever group they want to be perceived as belonging to. In an article titled "Keep Your Identity Small," Paul Graham discusses the problems he sees personal identity causing in internet argumentation, chiefly around religion and politics; he sees the two topics as highly correlated because of their distinct capacity to frame personal identity. Accordingly, Graham theorizes: "If people can't think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible." If one avoids subscribing to an overarching identity and accepts as few labels as possible, contradictory information should become far more tolerable, perhaps even informative. Of course, it is not possible to avoid all labels, since every person is an individual; the practical application of Graham's theory is to avoid affiliations one deems unnecessary, especially those that would drastically alter how others perceive them. Used effectively, these tactics can help promote more positive and fulfilling argumentation among all people.

Conclusion

A large component of the answer to our initial question depends on whether people want to improve the quality of their time online, as opposed to remaining in their echo chambers indefinitely. Given our initial assumption that a large majority of users are interested in more thoughtful engagement, improvement is certainly possible. Simple strategies, like listening to reasonable people, updating previously accepted knowledge, and adopting Galef's "scout mindset," are powerful steps in the right direction. Social media and the internet are not inherently bad; it is users who lack self-discipline who contribute most to the larger problem. Using the internet as an extension of reality rather than an escape from it is far more rewarding. Ultimately, learning to swallow one's pride, admit one may be wrong, and listen thoughtfully to people one thinks highly of matters most in solving this problem for the long term. The path to more civil encounters and a more peaceful society begins with the golden rule: treat others the way you want to be treated.


Resources

Chris Bail, Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing (Princeton: Princeton University Press, 2021).

Julia Galef, The Scout Mindset: Why Some People See Things Clearly and Others Don't (New York: Portfolio, 2021).

Paul Graham, "Keep Your Identity Small," February 2009, http://www.paulgraham.com/identity.html.

Thomas Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1997).

Lesslie Newbigin, The Gospel in a Pluralist Society (Grand Rapids: Eerdmans, 1989).
