---------------------------------------------

NOTE: This article is an archived copy for portfolio purposes only, and may refer to obsolete products or technologies. Old articles are not maintained for continued relevance and accuracy.
August 25, 2004

21st Century TCP/IP

The Internet's success as a global networking platform is due partly to its maturity and stability: TCP/IP is ubiquitous and well-understood, making it a solid foundation for new networked applications. But stability doesn't mean stagnation. Internet technology continues to evolve, with new protocols under development and existing technologies being continuously tweaked to deal with the demands of tomorrow's applications.

Q&A With Fred Baker

Fred Baker is a former chairman of the IETF and author of more than 30 RFCs. He's currently a member of the Internet Architecture Board (IAB) and a Cisco Fellow, responsible for setting the company's technical direction.

Q: What's the outlook for IPv6 deployment? Is it needed, or will IPv4 suffice for the foreseeable future?

A: I had a discussion last week with a school system that has a real operational need to run videoconferencing services between 10,000 schools. It uses IPv4 today, but is deploying IPv6 to fix the problems of trying to use IPv4 in such a large and distributed environment. There are real needs out there, but the folks who have real needs for IPv6 tend to get drowned out by folks who say there's no real requirement.

Q: Terabit networks can transfer a maximum-sized window of data in approximately 32ms, which is close to end-to-end latency on many distributed LANs. Are we starting to bump up against the theoretical limits of TCP window sizing?

A: Not for a decade. Generally speaking, discussions of networks and routers that have the word "terabit" in them are talking about the sum of the speeds of the links in the network or on the device, not the speed that can be achieved by a single data flow.

The issues aren't with windowing. They're with Selective Acknowledgment (SACK) and predictability. We need procedures that allow us to drop a packet and not constipate the data flow. The RFC 2581 and Fast AQM Scalable TCP (FAST)/Vegas implementations both have some interesting properties here. FAST tunes to the knee of the performance curve (maximizing throughput while minimizing jitter) rather than the cliff (maximizing throughput while minimizing what it assumes to be congestive loss).
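For a rough sense of where figures like the 32ms in the question come from, the arithmetic below (a back-of-the-envelope sketch in Python, assuming a single 1Tbit/s flow) shows how quickly a terabit path drains TCP's window and sequence-number limits.

    # Back-of-the-envelope timing of TCP's limits on an assumed 1Tbit/s flow.
    LINK_BPS = 1e12  # one terabit per second

    limits = {
        "64KB window (no scaling)": 2**16,   # classic TCP window limit
        "1GB window (RFC 1323 max)": 2**30,  # largest window with window scaling
        "4GB sequence space": 2**32,         # full 32-bit sequence-number space
    }

    for label, nbytes in limits.items():
        ms = nbytes * 8 / LINK_BPS * 1000
        print(f"{label:27s} drains in {ms:8.3f} ms")

    # The full sequence space drains in roughly 34 ms, the same order of
    # magnitude as the 32 ms figure cited in the question above.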

Q: The transitional path to alternative transports is problematic on many levels. Do you favor going this route?

A: For many applications, I would favor turning to SCTP. Think of this as being a TCP that manages transactions, and so is useful for transaction applications in which we currently use UDP. But stack support is still an issue.

We're working with a customer in the movie business that sends daily footage to a preproduction center, reviews it in the evening, and shoots again the next day. This requires nine-digit link speeds or better, and effective use of them. That customer is reportedly using a product that improves TCP performance in proprietary ways. Users of satellite connectivity frequently do the same thing to work around the delays. There's a current grant program from the Defense Advanced Research Projects Agency (DARPA) studying how to deliver highly predictable TCP or TCP-like performance in military satellite networks.

Q: What's your thinking about the AIMD algorithm research work that's been going on? Will we be moving toward replacement protocols?

A: I think AIMD will go the way of all the earth, at least eventually. It may be replaced by FAST. If you want to have predictable rates, you're going to have to measure the capacity available in an Internet path and set the effective window accordingly.

Innovation can be seen throughout the TCP/IP stack, often as a result of improved hardware capability and emerging market opportunities. For example, the price of gigabit networking technology has dropped to a point where it's routinely installed on new workstations, while powerful handheld computers and pervasive wireless networks allow networked applications to move beyond the LAN. VoIP and iSCSI can require changes to the underlying network, as telephony and storage have very different requirements than traditional data traffic. The broad adoption of Internet technologies both within every organization and to the furthest reaches of the planet is forcing end users to rethink issues such as internationalization, security, and spam.

However, end users have relatively little input into the workings of the IETF, the standards body responsible for the core Internet protocols. Although the IETF is transparent and encourages participation by individuals, standards are driven by vendors, and a standard must be adopted across multiple product lines before it can be successfully used by the masses. The most important standards over the next two years will be those that are supported by the IETF, implemented by vendors, and ultimately adopted by users.

There are currently 130 active working groups within the IETF, cumulatively developing hundreds of standards-track specifications. Alongside these are thousands of independent proposals submitted by individuals, vendors, and consortiums. The ones receiving the most attention are those that seek to solve the Transport-layer problems caused by real-time applications and high-speed networks, and those that handle security, naming services, or collaboration.

A successful standard involves more than just an RFC that solves an important problem. It requires significant commitment of intellectual, political, and monetary capital, and the only parties that can justify this are vendors that believe that the standard will help them gain a competitive advantage.

Every user has a wish list of technologies that the IETF should be developing, such as platform-independent backup protocols, distributed database access standards, and even trivial services such as network gaming. But vendors are unwilling to devote sufficient resources to problems that don't have a perceived impact on their bottom lines.

At the same time, some standards have been widely adopted by the vendor community but subsequently rejected by end users, either because they didn't satisfy a real need or because another technology provided a better solution. In IETF politics, customers are the executive branch, exercising veto power but little more.

IPv6: Addressing a Problem?

IPv6 illustrates the gap between engineering and deployment. The standard itself is mostly complete, and enough products and services are available for it to be deployed by almost any organization. However, IPv4 still satisfies most networking demands, so very few organizations have any real need for IPv6.

IPv4's continued popularity has meant that IPv6 isn't viable in some important applications. It wasn't until July of this year that the Internet Corporation for Assigned Names and Numbers (ICANN) announced support for IPv6 queries to the DNS root servers. The result has been fewer products to choose from and more difficult administration. Living the IPv6 lifestyle carries a much higher cost than sticking with IPv4.

IPv6's main selling point has been made relatively moot by external events. It was designed in part to ease a predicted shortage of IPv4 address space, but that crisis was averted by the use of NAT and stricter delegation policies. Without a shortage of IPv4 addresses, IPv6 has no single compelling benefit that can offset its higher costs.

Vendors are still working on the technology, but they aren't hyping it, nor have they switched over to it on their internal networks. Cisco Systems, Microsoft, and HP all like to demonstrate technology leadership, but even they can't justify the switch.

IPv6 will eventually reach critical mass, but it may take a decade or more. New networks and Internet-enabled devices are accelerating the increase in demand for IP addresses, which will put further pressure on IPv4.

If ISPs and small and home office appliances started to support IPv6 seamlessly—even making it the default—its adoption would be certain. However, such a switchover would also require seamless conversion of legacy IPv4 traffic. As things currently stand, there's no compelling reason to adopt the technology.

Transport Protocols

Of all the major technology areas under the IETF's purview, the transport protocols are under the most duress, squeezed by both next-generation network topologies and emerging applications. As a result, some existing transport protocols are undergoing radical reconstructive surgery, while others are forking into optimized variants, each attempting to address a specific problem.

Q&A With Eric Rescorla

Eric Rescorla has been active in the IETF since 1995 and is the author of numerous security-related RFCs dealing with TLS, IPSec, and S/MIME. He's currently a member of the Internet Architecture Board (IAB) and principal engineer of RTFM, a security consulting firm.

Q: More and more users are deploying dynamic VPNs using SSL, TLS, or SSH tunneling agents rather than full-scale IPSec to the desktop. Where is this trend going?

A: I think it's likely to continue. It's not as if IPSec is suddenly going to get more suitable for VPNs than it is now.

I think what's interesting here is that people seem to want a technology that's less generic than an IPSec VPN. After all, there are a variety of IPSec VPN products on the market already, so I suspect that the issue here isn't so much the protocol but that people want finer control. This isn't at all what most security people, including me, anticipated. We should have done a bit more requirements gathering.

Q: The PKIX Working Group has also been going on forever. While it's still working on very important issues, a generic system that multiple applications can tap into still isn't available.

A: I think that at one point people believed there would be a single integrated PKI, or at least a PKI format, that every application would use. Creating that has been a lot more difficult than anyone expected. Applications started to come out of the woodwork, and there seems to have been a desire to do a really principled job, including supporting a lot of very high-security applications. Both of these factors lead to a very general, complex system that's hard for implementers to wrap their heads around.

If all you want is to prevent SMTP domain forgery, it's tempting to cook up a simple certificate format rather than muck around with PKIX. I'm not sure that this is necessarily bad. Obviously, common tooling would be nice, but neither of the typical arguments for this (reducing engineering effort or promoting interoperability) really applies here. It's more effort to do PKIX than a custom thing, and DomainKeys isn't going to interoperate with most other PKIX implementations anyway.

That said, it often pays to package your private keying material in an X.509 wrapper since that does let you reuse commodity tools. I just don't think it's a showstopper.
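To illustrate that last point, here's a minimal sketch (using the third-party pyca/cryptography package, with a placeholder host name) of wrapping a freshly generated private key in a self-signed X.509 certificate so that commodity TLS tooling can consume it.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate a key and wrap it in a self-signed X.509 certificate.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "mail.example.com")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer and subject are the same
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )
    print(cert.subject)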

Q: Do you think the major architectural pieces of network security will ever come together into a cohesive whole? We've got the pieces we need, but there are more holes than solutions at this point.

A: People talk as if all network security mechanisms should be top-to-bottom-integrated, but I'm not sure that's possible or desirable. To take an example outside of computers, look at transportation networks. There are lots of architectural pieces (planes, trains, trucks, cars, and so on), but you can't really say they've "come together" or that they're integrated. Instead, what we have are specific examples of vertical integration where there's a particularly high payoff (container shipping, for instance, or FedEx), but that doesn't mean you can somehow drive to the airport, park your car on the runway, and climb right onto a plane. I would expect the same pattern to emerge with networks.

That said, there are some obvious holes in our security story, but I'm not sure that they're necessarily in the communications arena. The big "network security" problems we face are mostly the result of the fact that the network software we're using is riddled with holes. That, and not the fact that we don't have transparent end-to-end encryption, is why we have worms and Distributed Denial of Service (DDoS) attacks. Those problems are starting to get some real attention, but unfortunately they're conceptually a lot harder and messier than the traditional communications security problems. That doesn't mean communications security isn't worth doing, but we shouldn't fool ourselves into believing that it's the big problem.

Q: We basically have two different Application-layer encryption mechanisms: Some applications use SSL/TLS, while others use SSH. At some point, will administrators want to consolidate administration?

A: The administrator doesn't care whether the protocol is SSL or SSH. What matters is what he has to do to install and maintain it. In the case of Telnet, the only advantage of SSL over SSH would be that you could use your SSL certificates instead of your SSH key. But because SSH keys are so easy to configure, that wouldn't confer a big advantage, especially as most SSH servers probably don't even have SSL certificates.

So, no, I don't think this trend is particularly damaging or that SSH is going away.

For example, the Stream Control Transmission Protocol (SCTP) was originally introduced to provide a reliable transport for Signaling System 7 (SS7) telephony switching networks, but has since found uses in many other applications. It was necessary because SS7 requires support for framed messages and multihomed endpoints that can survive the loss of a specific path or network address, neither of which TCP can provide. (TCP doesn't preserve Application-layer message boundaries, and TCP sessions must be killed when an endpoint address becomes unreachable.)
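As a rough illustration of the difference, the sketch below (assuming a Linux host with kernel SCTP support and a Python build that exposes socket.IPPROTO_SCTP; the port is a placeholder) receives one framed message over an SCTP one-to-many socket rather than reading from a byte stream.

    import socket

    # SOCK_SEQPACKET selects SCTP's one-to-many style: each send() from a peer
    # is delivered as a discrete, framed message rather than a stream fragment.
    server = socket.socket(socket.AF_INET, socket.SOCK_SEQPACKET, socket.IPPROTO_SCTP)
    server.bind(("0.0.0.0", 9999))   # placeholder port
    server.listen(5)

    data, peer = server.recvfrom(4096)   # one whole application message
    print(f"received a {len(data)}-byte message from {peer[0]}")
    server.close()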

SCTP has since been adopted as a transport for the Session Initiation Protocol (SIP), sometimes seen as a replacement for SS7. It's also being considered for iSCSI because, like SS7, the technology requires a transport that can manage multiple sessions of framed messages.

In addition to repurposing existing protocols, the IETF is also designing new ones. The Datagram Congestion Control Protocol (DCCP) is intended to replace UDP, adding congestion-awareness and control mechanisms. These will enable the transport to throttle down its data rate whenever congestion is detected—unlike UDP, which simply keeps blasting packets out and making congestion worse.

SCTP and DCCP both represent important lines of work, but their widespread use will eventually depend on applications having access to them. This in turn requires that they be supported by off-the-shelf OSs and APIs. There's no indication that this will happen anytime soon, as the problems that SCTP and DCCP solve aren't yet widespread enough to prompt OS developers to include them.

One problem with TCP that's received a lot of attention is its Additive Increase, Multiplicative Decrease (AIMD) algorithm, which governs the data rate of a TCP connection: the sending machine slowly increases the amount of network bandwidth it consumes, then cuts its utilization rate in half whenever any kind of packet loss is detected.

The algorithm helps to ensure fairness on networks in which packet loss is a result of congestion, but overly conservative gearing can make startup times and network-related errors painful. This is particularly true with burst-oriented applications such as iSCSI, where the endpoints may be quiet for extended periods of time, but suddenly need all the bandwidth at once when activated.
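A toy model makes the behavior concrete. The sketch below (illustrative Python, not a real TCP stack) grows the congestion window by one segment per round trip and halves it on loss, which shows why a long-idle, burst-oriented sender can take many round trips to reach full speed again.

    # Toy AIMD model: additive increase of one segment per RTT,
    # multiplicative decrease (halving) whenever loss is detected.
    def aimd_step(cwnd_segments: float, loss_detected: bool) -> float:
        if loss_detected:
            return max(cwnd_segments / 2.0, 1.0)   # multiplicative decrease
        return cwnd_segments + 1.0                 # additive increase

    cwnd = 1.0
    events = [False] * 20 + [True] + [False] * 10  # one loss after 20 clean RTTs
    for rtt, loss in enumerate(events, start=1):
        cwnd = aimd_step(cwnd, loss)
        print(f"RTT {rtt:2d}: cwnd = {cwnd:5.1f} segments{'  <- loss' if loss else ''}")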

Several organizations are working on technologies that promise to improve TCP's architecture, though most of these are still experimental and unusable with common hardware and software. Some problems can't be easily solved and may require new protocols or forks in TCP. The IETF hasn't yet begun any formal process for identifying and evaluating these, so widespread support for TCP fixes is probably a decade away. We'll probably need them sooner than that, so proprietary protocols may be the only short-term solution.

Decoding Security

Internet security receives a lot of attention from users and developers alike. But while a lot of energy is expended, there's little visible movement. Although some of the core technologies have been in development for several years, they aren't seeing broad-based implementation.

For example, some portions of the IPSec technology family are still in active development, even though it's already widely deployed. In particular, the Internet Key Exchange (IKE) protocol, used to negotiate keys and security associations, is currently being rebuilt with an eye toward simplification and easier deployment. Rather than wait, large parts of the encryption market have decided to deploy dynamic VPNs based on Transport Layer Security (TLS) or Secure Shell (SSH). Because these encrypt traffic for a single Transport-layer port rather than every port on a node, they're simpler to build and operate than IPSec.
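The contrast is easy to see in code. The sketch below (standard-library Python with placeholder host and port) wraps a single TCP connection in TLS; everything else on the machine remains untouched, whereas an IPSec tunnel would protect all IP traffic between the two nodes.

    import socket
    import ssl

    # TLS protects exactly one connection to one port; nothing else on the host.
    context = ssl.create_default_context()
    with socket.create_connection(("www.example.com", 443)) as raw:   # placeholders
        with context.wrap_socket(raw, server_hostname="www.example.com") as tls:
            print("negotiated", tls.version(), "using", tls.cipher()[0])
            tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
            print(tls.recv(120))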

Based on Netscape's SSL, TLS is already used in critical services such as HTTP and e-mail. TLS' ability to provide both VPN and application-specific encryption makes it particularly compelling. However, there are some holes in the TLS story, resulting in opportunities for competing technologies such as SSH.

In particular, TLS support for applications such as Telnet and FTP was slow in coming and still isn't widely available from vendors in a standardized form. SSH fills this gap by providing encrypted terminal emulation and file transfer services in a way that's easy to deploy, and users have flocked to it accordingly. Like TLS, SSH began as a proprietary technology, but has now been adopted by the IETF, with standards-track proposals for both generalized encryption and point-to-point VPNs.

TLS and SSH provide similar functionality, but for different applications: TLS for the Web and e-mail, and SSH for terminal emulation and file transfer. Nobody wants to manage two different encryption systems forever, however. If the interests of the user base are to be satisfied, these technologies need to be made interoperable, a feat still several years away. Here, IPSec may have an advantage: Because it handles all IP traffic, it can be cheaper and easier than deploying multiple services.

Many security technologies rely on PKI, a technology that could be simplified by standardizing a way to locate and retrieve X.509 certificates across the Internet. The IETF's PKIX Working Group has been working on this for many years, but has made little progress toward an actual "infrastructure" that's widely accessible by Internet users.

The absence of a standard infrastructure has led some applications and services to define their own implementations. For example, Yahoo's DomainKeys anti-spam technology requires that application-specific public keys be stored in DNS. An Internet-wide PKI would require a globally distributed directory service, but this hasn't even been proposed yet.
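The DomainKeys approach is easy to sketch with an ordinary DNS query. The example below uses the third-party dnspython package (version 2 or later) and placeholder selector and domain names to fetch a signing domain's public key from a TXT record.

    import dns.resolver   # third-party "dnspython" package (2.x API)

    # DomainKeys publishes the signing key at <selector>._domainkey.<domain>.
    selector, domain = "mail", "example.com"   # placeholders
    answers = dns.resolver.resolve(f"{selector}._domainkey.{domain}", "TXT")
    for record in answers:
        # A typical record looks like "k=rsa; p=<base64-encoded public key>"
        print(b"".join(record.strings).decode())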

Internet Naming Services

The Internet's growth has led to a lot of activity around naming services. These now go beyond DNS and its related services to cover secure and ad hoc networks.

The increased popularity of wireless devices allows the formation of dynamic TCP/IP networks. Devices need to have network names of some kind, but temporary ad hoc networks have no authoritative domain. The naming service developed for these environments is called Linklocal Multicast Name Resolution (LLMNR). Based on the same message format as DNS, it uses multicast lookups instead of the DNS hierarchy. It's similar in concept to the multicast approach behind the Rendezvous networking service that Apple uses in OS X, and is expected to be standardized within the next year.
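A name lookup in this scheme looks almost exactly like a DNS query, except that it's multicast to the local link. The sketch below (standard-library Python; the host name queried is a placeholder, and 224.0.0.252 port 5355 is the address the LLMNR drafts specify for IPv4) builds a DNS-format question and waits briefly for any on-link responder.

    import socket
    import struct

    def build_query(hostname: str) -> bytes:
        """Build a DNS-format query for an A record (LLMNR reuses this format)."""
        header = struct.pack("!HHHHHH", 0x1234, 0, 1, 0, 0, 0)  # ID, flags, 1 question
        qname = b"".join(
            bytes([len(label)]) + label.encode() for label in hostname.split(".")
        ) + b"\x00"
        return header + qname + struct.pack("!HH", 1, 1)        # QTYPE=A, QCLASS=IN

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(build_query("someprinter"), ("224.0.0.252", 5355))  # placeholder name
    try:
        reply, responder = sock.recvfrom(1500)
        print(f"{len(reply)}-byte LLMNR response from {responder[0]}")
    except socket.timeout:
        print("no response on this link")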

DNS itself has long been vulnerable to forgery: False domain name data can be inserted into a victim machine fairly easily, redirecting the victim to hosts under the control of an attacker. The DNS Security (DNSSEC) protocol is intended to fix this, providing a chain of keys that can be used to verify a DNS record's authenticity. This technology is absolutely crucial, as DNS vulnerabilities are used to gather host and network information that can later be used for attacks.

Q&A With Eric Allman

Eric Allman developed Sendmail, the open-source server that handles most of the Internet's e-mail. He's now CTO at Sendmail, which sells a commercial version of the product.

Q: You've been trialing some proposed anti-spam technologies (Sender ID, DomainKeys, and so on). How effective have those technologies been?

A: Since the technologies aren't yet widely deployed, it's too early to say how effective they are. I can say there are several interoperating versions of DomainKeys, and that our implementation has quite good performance. Many skeptics have claimed that cryptographic approaches are just too slow to be useful, and that appears to not be true.

Q: One problem with brand-new distributed services is that it's hard to realize the network effect. How long before these technologies are going to be useful for administrators?

A: Protocols are a difficult issue in that you really do need two to tango (and preferably more than two). I'm guessing that you'll see a rather bizarre tendency here for a while, with large domains with strong brands willing to publish sender identification data. Conversely, smaller companies that are being inundated with spam and service providers that are losing customers because of spam will be more willing to validate against that data. Consider that AOL already publishes SPF/Sender ID records and that Microsoft has announced that as of October MSN and Hotmail will examine and use Sender ID records. These big players will create a lot of pressure to adopt the new solutions.

The problem is that authentication by itself isn't enough: Spammers could just authenticate as themselves. We'll need accreditation and reputation services (for example, the ability to look up an authenticated domain name to see if we want to trust it or not) to make this happen. Several groups are working on these already.

Q: Much of the IETF's activity has a lot of intellectual property baggage. Are we going to run into submarine patent issues, where specifications are published for marketing purposes with hidden royalty or licensing terms?

A: If I had to pick one potential showstopper, this is it. Personally, I believe that the individuals I've worked with on these specifications all seem to understand that we're on a sinking boat and that now isn't the right time to circle for competitive advantage. But a lot of the players in the anti-spam game these days are large companies, and large companies tend to have large legal departments.

It sometimes isn't clear that the lawyers understand the problem. I think it's tragic that we may find ourselves stuck with a technically inferior solution because of people who don't understand that their ankles are already wet. We techies don't fully understand the issues, either. We're making decisions on the basis of the possibility of stealth patents that may or may not actually exist. Ultimately, I can't see how anyone wins this game.

Q: Spam can already be filtered. What's left for the IETF to do here, and what do you want to see it do?

A: Filtering isn't good enough; I still get enough spam to make sorting a problem for me. Spammers continue to get better about tricking our tools, so the problem isn't, to my mind, solved yet.

Q: Are we at risk of describing services that everybody publishes but nobody validates because the cost-to-benefit ratio is too low to bother with?

A: There's always a risk, but traditional commercial enterprises care about productivity, and ISPs care about help desk overhead and customer satisfaction.

However, DNSSEC has also been under active development since the early 1990s. A new set of RFCs is likely to be published within the next year, but nobody knows whether these will satisfy either users or domain registration bodies. Even after they're published, it will take another five years before OSs and applications fully support them.

The IETF is also modernizing the Whois service, currently vulnerable to spam address harvesters and often inaccurate. The new service, known as the Internet Registry Information Service (IRIS), will use XML to describe and reference domain name delegation information. XML should allow improved programmatic access to the delegation data, necessary for operational support devices such as anti-spam tools and log-file analyzers. The IRIS specification is nearly complete, but it will probably be a year or more before registries have actual IRIS servers up and running.

Collaboration Tools

Although the IETF is involved with numerous application protocols and services, some of the heaviest work is in the area of collaboration tools: e-mail, VoIP, and IM.

Internet e-mail is one of the IETF's most successful technologies, with most of the necessary work completed years ago. However, broader trends have exposed a need for improvement. In particular, a significant amount of effort is going into developing tools that can help fight spam (see "E-mail Authentication Via Sender ID," Technology Roadmap, page 50).

Most current work is taking place within the IETF's research body, not the engineering wing that produces the actual standards. One such effort is MTA Authorization Records in DNS (MARID), a standard for publishing the authorized senders of a domain. Although this effort is principally geared toward eliminating forgeries, some percentage of spam and worm traffic will also be eliminated because many of those messages use forged sender addresses.

The hard truth is that the IETF alone can't fix the e-mail system. At best, it can standardize protocols for tasks such as transmitting spam-score data among filtering tools. Though some people dream of a replacement for SMTP itself, no technology can block spam while still allowing strangers to communicate with each other.

In the area of VoIP, SIP is emerging as a clear winner. OSs, applications, and phones are increasingly choosing it over the ITU's H.323 technology as a way to set up real-time communications sessions. However, there's still a significant amount of work going into SIP, particularly in the area of call-control services. Basic telephony features such as hold and transfer aren't yet standardized, meaning that most VoIP implementations are limited to a single vendor.

IM is the murkiest area of all. Most implementations are still proprietary, and the IETF has sanctioned two different technologies as possible standards. SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE) is designed to interoperate with SIP-based VoIP systems. This has earned it the support of all the VoIP vendors, as well as IBM and Microsoft, making it the preferred choice for many internal corporate networks. On the wider Internet, the Extensible Messaging and Presence Protocol (XMPP) has a large installed base. Based on the open-source Jabber software, it's tailored specifically for IM. At this point, it's still too early to declare a winner between the two.

As the Internet expands, so does the effect of non-IETF standards bodies on IETF standards. For example, much VoIP work depends on ITU telephony specifications, and the practical technology needed to implement standards depends on the IEEE and vendor consortiums.

But the most important actors in the parade aren't vendors or standards bodies. They're the customers—the network managers who carry wallet-sized vetoes. Unfortunately, this arrangement, while providing balance, isn't necessarily efficient: Vendors have to spend more time and money on development, while users have to wait longer for deployable technology. To shorten the development cycle, customers need to move beyond a simple veto of vendors' actions and play an active role in the technology's development.

-- 30 --
Copyright © 2010-2017 Eric A. Hall.
Portions copyright © 2004 CMP Media, Inc. Used with permission.
---------------------------------------------