SAN FRANCISCO — Almost half a century after the first message crashed the communication link between the computer science department at UCLA and the Stanford Research Institute, the Internet stands at a tipping point.
So how did it get there?
The first time I heard about the worldwide network was back in 1986, from an instructor in an undergraduate elective course I took called Introduction to COBOL Programming.
The instructor had been a systems analyst in the U.S. Air Force, and it was there that he’d learned about a research project begun in the late 1960s by the Advanced Research Projects Agency, a skunkworks unit of the U.S. Department of Defense.
The goal of that project was to build a redundant, decentralized data network that could survive multiple points of failure in the communication infrastructure over which it ran.
The network was first used by computer science academics to send one another code and e-mail — and soon drew the interest of military leaders worried about an enemy nuclear missile strike — but it wasn’t long before a much wider swath of people realized it could do much more.
One day during that elective course, as I was waiting for one of my programming projects to print out from a machine that was about the size and shape of a small kitchen stove, my instructor said something interesting.
“The universities have the Internet now, but eventually it will be controlled by AT&T,” the late Paul Hewitt said then, as we sat near a corner of a sealed room on the top floor of the St. Mary’s University academic library.
That conversation took place four years after a federal antitrust settlement had broken up AT&T’s monopoly on American telephone service, creating seven regional U.S. rivals, dubbed the Baby Bells.
And it was a few years before Tim Berners-Lee, Marc Andreessen and others developed key software breakthroughs that gave birth to a commercial, consumer World Wide Web running over the Internet.
Now, after a wave of telecom consolidation at the turn of the 21st century, only two of those original seven Baby Bells remain, in the form of Verizon and the reconstituted AT&T.
Along with a handful of giant cable and satellite providers, fewer than a dozen companies control the overwhelming majority of U.S. Web traffic.
In the fourth quarter of last year, digital entertainment delivered to those companies’ U.S. high-speed Internet subscribers carried roughly as many TV-style commercials as the pieces of content they appeared next to.
Moreover, Web-based video ads — and the TV shows, live events and movies they are paired with — are growing in lockstep at roughly 30% a year.
In other words, a network that began as a way for computer scientists to communicate is already half-commercialized, as Hewitt predicted to me 28 years ago.
With both business and consumers willing to pay for a broad array of products and services, the Internet has become the world’s first global medium for delivering news and entertainment.
Not surprising, then, that it’s started to look a lot like television — a medium that in the U.S. is overwhelmingly commercial (save for PBS and local public access channels, home of the original video bloggers).
Soon, it will likely be far more so.
A U.S. federal appeals court ruling in January, which struck down rules governing how Internet providers may manage and price Web traffic, has already begun spurring a new wave of telecom consolidation, such as Comcast’s $45 billion bid for rival Time Warner Cable.
The Internet, already half-commercialized, has just been further deregulated.
Given its history and current data traffic trends, if there are going to be public spaces on the Internet of the future, online consumers may have to work hard indeed to find them.
John Shinal has covered tech and financial markets for 15 years at Bloomberg, BusinessWeek, the San Francisco Chronicle, Dow Jones MarketWatch, Wall Street Journal Digital Network and others. Follow him on Twitter: @johnshinal.