There’s no easy way to put it: The internet is having a midlife crisis. From the pandemic-scale spread of misinformation and hate speech, to Supreme Court cases weighing the responsibility of search engines and social media platforms for monitoring user content, to the significant gaps in access that make up the “digital divide,” the internet’s promise of open connectivity and free-flowing information faces real and mounting challenges. So, how do we address those challenges, and who is responsible?
Ahead of the conference, Blake Reid, clinical professor of law specializing in technology policy and telecom and disability law, offers a deeper look at how the state of the internet came to be, and how possible solutions could alter the internet we know today.
How did the commercial internet come to be?
Tracing the internet’s full history is complicated, but the commercial internet that we know today started to flourish in the early-to-mid 1990s. One way to think about the rise of the commercial internet is to look at the breakup of AT&T. AT&T was not a phone company; it was the phone company. Anything you wanted to do on the phone network, you had to ask AT&T for permission.
The rise of the commercial internet was, in part, a response to that—the design of a network that shifted power away from the network’s operators, allowing anyone to build applications without having to ask first. We sometimes call this the notion of “permissionless innovation.”
What did permissionless innovation provide?
On the one hand, permissionless innovation is arguably responsible for a lot of society-changing phenomena that didn't exist 25 years ago: You can search for any content via ubiquitous search engines, have a wide array of products shipped to your door from nearly any manufacturer via e-commerce platforms and communicate with anyone in the world through social media and messaging platforms.
However, we weren’t prepared for how to deal with the unintended consequences of unleashing these applications on the world.
Where did it go wrong?
One thing that many early internet pioneers missed was the possibility that power would accumulate in the hands of the internet’s new crop of application providers. Where we used to be concerned about the concentration of power in the phone company, which provided the network, we now see a similar concentration of power in the companies that use the network.
At the same time, we are starting to see eroding trust in these companies because of data breaches, privacy violations, accessibility issues, the proliferation of hate speech and a wide range of other issues. We also see eroding trust in the government, at least in the U.S., to be able to step in and do something about it.
The technology has changed radically, but the concentration of power, and the problems it poses for competition, protecting consumers and users, healthy discourse and the future of democracy, has led us back to the same set of problems we had before the internet, or arguably worse ones.
What role has social media played?
On the internet, attention is a commodity. And social networks are built on keeping people on the platform by directing them to content that captures their attention, which can in turn be monetized. That gets the flywheel going—spurring people to post more engaging content to get other people to pay attention to it and engage. And, it turns out, one good way to do that is with content that makes people angry and disagreeable via conspiracy theories, dis- and misinformation and other kinds of controversial and salacious content.
How could the government step in?
It's a hard question. We want to leverage technology for social good and social benefit, but whom do we trust to set the rules and steer things in the right direction? Just as trust in internet companies has eroded, distrust of government intervention and political dysfunction make public oversight a challenge.
Supreme Court cases questioning the protection of internet platforms under Section 230 of the Communications Act may give the government more latitude to step in. But even then, the First Amendment substantially limits the extent to which the government can step in and regulate platforms, at least in the U.S.
And what about companies, what could they do?
One debate we’ll be hosting at the Silicon Flatirons annual conference is whether you would rather have five tech giants or 500 startup companies. While giant tech companies cause a lot of problems, they do some things well and have designed some important systems that are hard to replicate at smaller scale.
The question is: Do we double down on a world where we try to regulate those companies and come up with ways to constrain the problematic aspects of their power, or do we try to enable a world where competition and innovation sweep those companies away, and we enter a new era for how the underlying communication, commerce and democracy happen on these platforms?
There are no easy answers to these questions, and I expect our debaters will have a lot of compelling arguments in both directions.
This sounds daunting. Any silver lining?
I don't think there's a strong case that the internet as a whole harms people so severely that they would be better off without internet access. There are certainly substantial social harms and unintended consequences of the internet, many of which some people thought internet-based technology would solve, not make worse.
But the internet has become a cornerstone of modern society for better and for worse, and many believe it should be ubiquitously available for everyone. We need to find ways to address and overcome these challenges to make it a safe and equitable place for everyone.