Internet Core Protocols: The Definitive Guide
Foreword
Originally, the plan was to have Jon Postel write the foreword to this book, but he passed away unexpectedly while the book was in production. Vint Cerf stepped in and wrote the following foreword in his stead:
The Internet began as a research effort to link different kinds of packet-switched networks in such a way that the computers attached to each network did not need to know anything about the nature or existence of any network other than the one(s) to which they were directly connected. What emerged was a layered design that used encapsulation to carry end-to-end "Internet" packets from the source host, through intermediate networks and gateways, to the destination host. The first Internet incorporated three wide- and medium-area networks: the ARPANET, the Atlantic Packet Satellite network (SATNET), and a ground-mobile Packet Radio network (PRNET). It eventually also included the first 3 Mb/s Ethernet, developed at Xerox PARC in 1973.
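The encapsulation idea described above can be sketched in a few lines of Python. The header tags below are illustrative placeholders, not real wire formats; the point is only that each layer wraps the payload it receives in its own header, and the receiving side unwraps them in reverse order:

```python
# Toy illustration of protocol-layer encapsulation. Tags such as b"ETH|"
# stand in for real headers (Ethernet, IP, TCP); actual headers are
# binary structures, not text labels.

def encapsulate(payload: bytes) -> bytes:
    tcp_segment = b"TCP|" + payload        # transport header + application data
    ip_packet = b"IP|" + tcp_segment       # internet header + segment
    eth_frame = b"ETH|" + ip_packet        # link-layer header + packet
    return eth_frame

def decapsulate(frame: bytes) -> bytes:
    # Each layer strips only its own header and hands the rest upward.
    for tag in (b"ETH|", b"IP|", b"TCP|"):
        if not frame.startswith(tag):
            raise ValueError(f"expected header {tag!r}")
        frame = frame[len(tag):]
    return frame

message = b"hello, internet"
assert decapsulate(encapsulate(message)) == message
```

Because intermediate gateways look only at the outer headers, the inner payload can cross networks whose technologies the end hosts know nothing about.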
Now, some twenty-five years after the first designs, there are hundreds of thousands of end-user networks attached to the Internet, serving an estimated 45 million computers and 150 million users. Moreover, the original speeds of the trunking circuits in the constituent networks have increased from thousands of bits per second to billions of bits per second, with trillions of bits per second lurking in laboratory demonstrations.
As the Internet has grown, its complexity and the number of people who depend on it have both increased substantially, but the number of people with a detailed understanding of the protocols and systems that make the Internet work represents a declining fraction of the total population of users, or even of the operators of such networks.
Worse still, the number of protocols and services in use on those networks has also increased from a handful to hundreds. While a single super-administrator could once manage the routers, domain name servers, mail servers, and other resources on the network, we are now faced with so much specialization that it is impossible for any one person to follow everything. At many of the larger firms, entire departments do nothing but manage the network routers, while other groups manage the dial-up servers, and still others manage the web and mail systems, domain name systems, and news groups.
This is a serious problem. Large corporations can afford to hire specialists who understand their respective parts of the overall picture, but most companies can't afford an army of specialists, and have to make do with a handful of network engineers who have to know "whatever's necessary." Furthermore, debugging and analyzing Internet problems defies specialization. Problems often arise because of the interactions between different parts of the network. If e-mail isn't being delivered, is the problem with the mail server itself? Or has something gone wrong with routing, the domain name system, or with the low-level protocols that map Ethernet addresses to Internet addresses? It may be unrealistic to expect one person to diagnose problems in all of these areas (plus a dozen more) but many network operators face this challenge daily.
When problems do occur, administrators have a variety of tools available for debugging. These include packet analyzers, which can show you the raw contents of the network traffic but won't tell you what that traffic means. Another resource is the vendor's own documentation, although more often than not it is based on the same misreading of the specs as the problematic software. A last resort is for the administrator to prowl through the protocol's technical specifications to determine where the problem really lies. But when it's 4 a.m. and the web server in Chicago keeps dropping its connection to the database server in Atlanta, these specifications are of limited use. They were written largely as strict definitions of the behavior that should occur, and generally do not describe the ways in which the protocols might fail.
That's why these books were written. Throughout this series, Eric Hall takes you behind the scenes to discover the function and rationale behind the protocols used on IP networks, offering thorough examinations of the theory behind how things are supposed to work. Hall backs up the tutorial-oriented discussion with packet captures from real-world monitoring tools, providing an indispensable reference for when you need to know what a particular field in a specific packet is supposed to look like. He also discusses the common symptoms that appear when things break, providing detailed clues and discussions of the most common interoperability problems.
This three-way combination of tutorial, reference, and debugging guide essentially makes these books all-inclusive "owner's manuals" for IP-based networks. They are attractive volumes for any network manager who works with Internet technologies, particularly as the Internet continues to go through the growing pains of near-exponential growth. Even though more than 44 million devices are connected now, all indications point to nearly a billion devices being on-line by 2006, including IP-enabled sensors, garage door openers, video recorders, IP telephones, and all manner of other office and home appliances. And of course, many of those devices will need new protocols... The Net is going to get a lot more complicated.
The research networks we linked long ago have given way to networks being adapted for inter-planetary distances (in which a different form of "the speed problem" emerges). Already planned is an Internet-enabled Mars base station, together with a set of interplanetary gateways that will link these networks back to Terra Firma. The NASA Mars missions begun last year will continue well into the second decade of the next millennium. A part of the plan for these explorations includes the formation of a network of Internets: an interplanetary Internet. Perhaps someday it will be the lifeline of communication for explorers and colonists to our neighboring planets, the moon, and the satellites of the larger planets in the outer solar system.
Back here on Earth, however, there will be plenty to occupy our attention as the Internet continues its relentless growth. We will need the help of books like the ones in this series to analyze problems arising on the Internet we already have, as well as the ones planned for the future.
Internet Core Protocols: The Definitive Guide, O'Reilly & Associates, 2000