We just beat back a misguided attempt to break the Internet on the basis of a retrograde conception that equated the Internet with circuit-switched telephony. But there is no debate that the Internet is under strain. We’ve been working with UN ESCAP, among others, to address some of the problems. But the more fundamental questions of moving massive amounts of data from multiple devices are being addressed in the universities that begat the Internet. These efforts, not ETNO’s proposals to tax OTT players that are now seeping into European policy, are where the solutions lie.
The Internet was designed in the 1960s to dispatch data to fixed addresses of static PCs connected to a single network, but today it connects a riot of diverse gadgets that can zip from place to place and connect to many different networks.
As the underlying networks have been reworked to make way for new technologies, some serious inefficiencies and security problems have arisen (see “The Internet is Broken”). “Nobody really expects the network to crash when you add one more device,” says Peter Steenkiste, a computer scientist at Carnegie Mellon University. “But I do have a sense this is more of a creeping problem of complexity.”
Over the past year, fundamentally new network designs have taken shape and are being tested at universities around the United States under the National Science Foundation’s Future Internet Architectures Project, launched in 2010. One key idea is that networks should be able to obtain data from the nearest location—not seek it from some specific data center at a fixed address.
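To make that idea concrete, here is a minimal sketch, in Python, of the “fetch data by name, not by server address” principle behind content-centric designs such as Named Data Networking. Everything in it is hypothetical and for illustration only; the class names, the hop-count distance metric, and the cache layout are assumptions, not any real NDN or FIA API.

```python
# Toy model: a request names the content it wants, and the network answers it
# from whichever node already holds a copy and is closest to the requester.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    distance: int                               # hops from the requesting client (assumed metric)
    cache: dict = field(default_factory=dict)   # content name -> data

class ContentNetwork:
    """Resolves a request for named data to the nearest node holding a copy."""
    def __init__(self, nodes):
        self.nodes = nodes

    def fetch(self, content_name):
        # Host-based IP would route to one fixed server address; here we instead
        # pick whichever node caches the named content and has the lowest distance.
        holders = [n for n in self.nodes if content_name in n.cache]
        if not holders:
            return None
        nearest = min(holders, key=lambda n: n.distance)
        return nearest.name, nearest.cache[content_name]

# Usage: the same video segment is cached at a distant data center and at a nearby campus cache.
origin = Node("origin-datacenter", distance=12, cache={"/videos/lecture1/seg0": b"..."})
campus = Node("campus-cache", distance=2, cache={"/videos/lecture1/seg0": b"..."})
net = ContentNetwork([origin, campus])
print(net.fetch("/videos/lecture1/seg0"))       # served from "campus-cache", the nearer copy
```

The design point is that the requester never names a machine at all; it names the data, and any node with a valid copy can satisfy the request.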