Informing Opinions

The Dangerous Rise of Online Networks


Devin Gaffney ’10 is an affiliate at the Berkman Klein Center for Internet and Society at Harvard and has degrees in Network Science and the Social Science of the Internet. He is a data engineer in Boston. His expertise has been featured on WBUR, in The Atlantic, and at conferences throughout the country. You can read more about his study of networks by visiting

The mantra during the early days of social media was that the technology would completely upend the relationship between the governed and their governments. During the Iran election protests of 2009, so pervasive was the notion that social media was an unambiguous multiplier of civic engagement that the U.S. Department of State asked Twitter to postpone server maintenance so that dissidents could continue to use the platform to challenge the regime. Although their effect was greatly exaggerated, the Iran election protests solidified the almost universally accepted notion of social media as a civic good, and the Arab Spring cemented it. In light of such events, “connecting the world” and “fostering conversations” became the general ideology sold by Facebook, Twitter, and Instagram during their explosive growth.

Collectively the platforms grew to billions of users partly on the basis of the ideology of civic good. From the heady early days of social media through the explosive growth phase, these platforms enjoyed a position in our shared cultural imagination as transformative, intoxicating systems that could topple regimes, reorient communication pathways, and collapse social and geographic distance.

This sense of possibility was always balanced against the “bad” side of online networks. Trolling—the deliberate provocation of other Internet users—has existed for as long as the Internet itself. 4chan’s user base has long been infamous for its harassment campaigns. Before 4chan, bulletin board and message board users were known to do the same, and before that, angry listserv users were similarly guilty. These maladaptive online behaviors have never gone away.

In this era of mega-platforms, however, a new spectrum of maladaptive behaviors has cropped up. Distinct from trolling, phenomena such as fake news, platform toxicity, and Internet outrage are characterized less by individual delinquent actors and more by the collective effervescent energy of the system. The recurring common denominator across these phenomena suggests that the excessive growth and social connectedness of the last decade is a necessary condition for them to exist in the first place.

An online platform in which you are the sole user is useless. With behavioral analysis and artificial intelligence baked in as platform features, these social systems have become increasingly capable of accruing new users. Added to the mix are features that maximize the utility extracted from existing users by raising their exit costs. Dialing up the head count and social connectivity as much as possible increases total screen time, and thus advertising time.

By designing and pushing for such maximizations, platforms have foisted onto society the external costs of this excessive growth and connectedness. As platforms have grown to nation-state proportions, the value of hijacking them has grown accordingly. With densely connected platforms, the natural ability of social friction to moderate diverse ideologies has collapsed. As the platforms have expanded, they have brought with them conditions ripe for abuse.

“Fake news”—really just bad-faith content advertising—is the clearest growth-driven issue. When the entire population of the United States is on the platform, even a marginal chance of hijacking the attention space is a worthy prize. Hijacking is exponentially more effective with automated microtargeting campaigns, and the ability to prevent bad-faith actors is hindered when platforms are economically incentivized to look away while the actors themselves are incentivized to take extreme measures to hide their true aims. The bad-faith behavior engaged in by Cambridge Analytica is probably the single most visible exemplar of politically motivated hijacking, but the story of Macedonian teens generating fake news for purely monetary gain in the aftermath of the election applies just as well.


Somewhere between an issue of growth and an issue of connectedness lies the rise of online toxicity and radicalization. Particularly since the 2014 Gamergate campaign—a coordinated, targeted harassment campaign against women in the video game industry—large-scale toxic communities have become a familiar sight. While radicalization efforts and toxic communities have always existed online, Data & Society’s widely lauded “Media Manipulation and Disinformation Online” report makes clear that many such movements are in a growth phase. Although their specific orientation may differ from community to community, it is now relatively commonplace to find large subcultures within platforms centered on racist, misogynistic, and anti-Semitic sentiments and on the maintenance and growth of those sentiments. To their credit, platforms have begun to moderate these communities more substantially.

As online networks have grown, the probability that a quorum of like-minded individuals is already present on a platform has grown too—larger platforms afford larger chances that there is a community for every interest. Because online networks have reduced points of social friction, the ease of getting “connected” to these communities has also rapidly increased. When this is paired with recommendation systems that anticipate the next likely points of valuable social engagement, the result is what researchers such as Becca Lewis have identified as complex and thorough radicalization pipelines on platforms such as YouTube. In short, the vast scale of online platforms ensures that large toxic communities can be established and actively maintained with impunity. The degree to which platforms have optimized for connectedness, in turn, ensures that these communities continue to thrive.

Internet outrage is a phenomenon that can mostly be blamed on the extreme connectedness of platforms. Helen Nissenbaum’s research on contextual integrity makes an important point about how individuals perceive their speech acts online: offline, we moderate our behavior according to our environment, and we account for our context when judging whether a behavior is appropriate. Online, these latent environmental cues are absent, and individuals may “read” their context incorrectly. Further, unlike offline behavior, our online behavior is recorded permanently in the space where it occurred rather than experienced as an ephemeral, time-locked event. As a result, whenever we talk online, our words can be lifted out of the initial context the author imagined for them.

In an era of global social systems such as Twitter, where the social distance and friction of engagement between two complete strangers is the same as between close kin, the problem of context collapse has become a chronic point of abuse—on any given day it is easy to find one person’s words re-contextualized for a new audience to become enraged over. While in some cases this outrage may be genuinely warranted, the daily cycle of collective shaming over decontextualized comments has become a core feature of Twitter’s culture and a clear sign of dysfunction.

These three types of behaviors—fake news, radicalization, outrage—are not some set of binary switches that suddenly flipped on in 2014, 2015, or 2016. Instead, as the networks have grown, the incentives for abuse have grown as well. As platforms have optimized for connectedness, they have negligently optimized for the growth of mob-like communities connecting around noxious yet identity-defining goals.

As platforms have become overly dense through “People you may know” features, they have tied people together who may have been better off being mediated through social distance.

Taken together, these dynamics mean we have unwittingly stumbled into a society where the dominant mode of information exchange is prone to the misfires of an advertising system, and those misfires threaten the maintenance of civic life.

In the early days, when online networks seemed an unambiguous good, it made intuitive sense to extract more civic good from the platforms by growing our networks online and creating as many opportunities as possible for conversations between people. In the abstract, this platitudinal approach to civic life seems obvious. But bigger is not always better. Research from an old advisor of mine, David Lazer, helps illustrate why.

In David Lazer and Allan Friedman’s 2007 work on networked cooperation, the authors sought to understand the effect of communication pathways on team effectiveness. At one extreme, team members work in complete isolation to solve a complex task. At the other, the task is solved by committee, with all individuals in constant interaction. In the isolated extreme, a high diversity of solutions emerges, but every solution suffers from its author’s inability to brainstorm and workshop ideas with others. In the committee extreme, groupthink and myriad latent social pressures may stifle novel approaches. Their research pointed to a middle ground. In their study, they state that “an inefficient network maintains diversity in the system and is thus better for exploration than an efficient network, supporting a more thorough search for solutions in the long run.” In other words, too little connectivity and too much connectivity both yield suboptimal outcomes—finding the right mix of independence and connectedness is key.
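Lazer and Friedman reached this conclusion through agent-based simulation. The following Python toy is a rough sketch of the same intuition, not their actual model: agents search a rugged landscape by either imitating their best-scoring neighbor or mutating their own candidate solution, once on an “efficient” fully connected network and once on an “inefficient” ring. All parameter values and the landscape construction are illustrative assumptions.

```python
import random

# Toy sketch of the Lazer-Friedman intuition (NOT their actual model).
# Agents either imitate their best neighbor or mutate their own solution.
# All parameters below are illustrative assumptions.
N_AGENTS, N_BITS, ROUNDS, SEED = 20, 12, 30, 7

def fitness(bits):
    """Maximally rugged toy landscape: each bitstring gets an
    independent, deterministic pseudo-random score in [0, 1)."""
    key = SEED * (1 << N_BITS) + int("".join(map(str, bits)), 2)
    return random.Random(key).random()

def simulate(neighbors, rng):
    """Run one search; return per-round solution diversity and best score."""
    agents = [tuple(rng.randint(0, 1) for _ in range(N_BITS))
              for _ in range(N_AGENTS)]
    diversity, best = [], []
    for _ in range(ROUNDS):
        scores = [fitness(a) for a in agents]
        nxt = []
        for i, sol in enumerate(agents):
            nb = max(neighbors[i], key=lambda j: scores[j])
            if scores[nb] > scores[i]:          # exploit: copy best neighbor
                nxt.append(agents[nb])
            else:                               # explore: flip a random bit
                k = rng.randrange(N_BITS)
                cand = sol[:k] + (1 - sol[k],) + sol[k + 1:]
                nxt.append(cand if fitness(cand) > scores[i] else sol)
        agents = nxt
        diversity.append(len(set(agents)))
        best.append(max(fitness(a) for a in agents))
    return diversity, best

# An "efficient" network (everyone talks to everyone) ...
complete = {i: [j for j in range(N_AGENTS) if j != i] for i in range(N_AGENTS)}
# ... versus an "inefficient" one (a ring: two neighbors each).
ring = {i: [(i - 1) % N_AGENTS, (i + 1) % N_AGENTS] for i in range(N_AGENTS)}

div_c, best_c = simulate(complete, random.Random(1))
div_r, best_r = simulate(ring, random.Random(1))
print("round-1 diversity  complete:", div_c[0], " ring:", div_r[0])
print("final best score   complete: %.3f  ring: %.3f" % (best_c[-1], best_r[-1]))
```

In this setup the fully connected network tends to collapse to a handful of solutions almost immediately, since every agent can see and copy the single global best, while the ring preserves a diversity of candidates for many more rounds—the “inefficient” structure’s advantage for exploration.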

Before the 2009 Iran election protests, it was easy to argue that previously isolated individuals, harnessing their collective energy through social media, would be able to achieve more than they could on their own. In our contemporary environment, we have swung to the opposite extreme: the systems have grown to a point where the incentives for abuse are ever present, and the social maze has become so interwoven as to be more stifling than liberating. As for next steps? Platforms, if they were ever in any realistic position to solve these problems, clearly won’t slow their drive for growth and connectedness until it proves to be even more of an existential threat than it already is. In light of that, maybe we are better served by doing our part ourselves: clicking “unfriend,” and knowing when to take a breather.