100 Gigabit Ethernet: Impractical and Unnecessary, But Coming Anyway
Last week, an article in InfoWorld reported that the IEEE was beginning to lay the groundwork for standardizing 100 Gigabit Ethernet networks. While this is an interesting development, and is sure to advance networking science and industry, it's totally unneeded from my perspective.
For one thing, the set of systems that could actually use a 100 Gb/s link is extremely small, since it's nearly impossible for normal systems to generate that much TCP/IP traffic. For example, the latest revisions of the PCI-X specification have a theoretical ceiling of 4.3 GB/s (approximately 34.4 Gb/s), while 16x PCI-E has a theoretical ceiling of about 4 GB/s (approximately 32 Gb/s). In both cases, that's only around a third of what such a link could carry.
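To make the mismatch concrete, here's a quick back-of-the-envelope calculation. This is just a sketch using the theoretical ceilings quoted above; real-world bus throughput is lower still.

```python
# Back-of-the-envelope: can a host's I/O bus even feed a 100 Gb/s link?
# Figures are the theoretical ceilings quoted above; real throughput is lower.

LINK_GBPS = 100.0  # 100 Gb/s Ethernet

# Bus bandwidths in GB/s, converted to Gb/s (1 byte = 8 bits)
buses_gbps = {
    "PCI-X (4.3 GB/s)": 4.3 * 8,    # ~34.4 Gb/s
    "16x PCI-E (4 GB/s)": 4.0 * 8,  # 32 Gb/s
}

for name, gbps in buses_gbps.items():
    print(f"{name}: {gbps:.1f} Gb/s, fills {gbps / LINK_GBPS:.0%} of the link")
```

Even at its theoretical maximum, neither bus can fill much more than a third of a 100 Gb/s pipe.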
Meanwhile, TCP's reliability mechanisms rely on a 32-bit sequence number to keep track of data, which means the protocol can only distinguish about 4 GB (2^32 bytes) of data in flight at any given time. But a 100 Gb/s network can send all of that data (plus packet overhead) in roughly a third of a second, meaning the sender would essentially have to dump the entire payload onto the wire and then immediately stop sending so that it could wait for an acknowledgment from the recipient. If the link spans any significant distance (and therefore has any significant latency), it could take longer for the data to reach the far end of the network than it took to generate the traffic.
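The arithmetic behind that claim is easy to sketch. This is illustrative only; it ignores header overhead, window scaling, and the other machinery a real TCP stack brings to bear.

```python
# Sketch: how quickly a 100 Gb/s sender burns through TCP's 32-bit sequence
# space, and how much data must be "in flight" to keep a long-haul pipe full.
# Illustrative only: ignores header overhead and window scaling.

SEQ_SPACE = 2**32          # bytes distinguishable by 32-bit sequence numbers
LINK_BPS = 100 * 10**9     # 100 Gb/s

wrap_time = SEQ_SPACE * 8 / LINK_BPS
print(f"Entire sequence space sent in {wrap_time:.2f} s")  # ~0.34 s

# Bandwidth-delay product: bytes that must be unacknowledged at once
# to keep the link busy across a given round-trip time.
for rtt_ms in (10, 50, 100):
    in_flight = LINK_BPS / 8 * (rtt_ms / 1000)
    print(f"RTT {rtt_ms:>3} ms: {in_flight / 2**30:.2f} GiB in flight "
          f"({in_flight / SEQ_SPACE:.0%} of the sequence space)")
```

At a transcontinental round-trip time of 100 ms, keeping the pipe full means nearly 30% of the entire sequence space is unacknowledged at once, and the space itself wraps almost three times per second.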
All told, 100 Gb/s networks simply are not going to be viable for host-to-host or even site-to-host connections until the current generation of hardware and software is replaced. The only potential use for this kind of network with present-day technology is going to be site-to-site interconnects. But frankly, there isn't much demand for that kind of usage scenario at the moment either, and what little demand there is can more easily be satisfied by running multiple 10 Gb/s networks in parallel.
Here's a little story that may help to illustrate the point. Back in the early-to-mid 90s, when I was running Network Computing's test labs, I did some research on upcoming high-speed networking technology. At the time, the only people who were really pushing the boundaries were physics labs that needed to exchange huge quantities of data, and those guys were working with OC-3 links (155 Mb/s). But it was easy to see that hardware architectures and the exponential growth of the Internet would likely lead to commercialization of the technology, and sure enough, speeds up to about 10 Gb/s can now be found on desktop networks and carrier networks alike. Today, though, there isn't much of this happening with 100 Gb/s networks—even the cutting-edge people are pretty well satisfied with 10 Gb/s links, and where more is needed they just lay more pipe. Indeed, there was a recent demonstration of 100 Gb/s networking in a pure research environment, and even that used multiple 10 Gb/s pipes chained together in parallel. So even what little demand we do see for this level of throughput is basically able to get by with multiple strands of fiber, and doesn't require a single link that is capable of 100 Gb/s speeds by itself.
Now, this doesn't mean that I think it's a waste of time to pursue this kind of technology. Indeed, research in this area is likely to yield numerous benefits across all areas of data networking (somebody may even come up with a viable solution to the TCP/IP sequence-number wrap-around problem). But it's definitely not needed, and given the limitations of present-day technology, it's probably not a practical pursuit either. It's science for the sake of science, which is a noble endeavor in its own right, as long as everybody is aware that there won't be mass-market products on the shelves anytime soon (heck, we aren't seeing mass-market 10 Gb/s interfaces yet—it's all special-order gear still—and that has been standardized for a while now).
I also don't see why the IEEE needs to get involved at this point. Simply put, it's far too soon to worry about standardization, given that we are just barely able to do research work in this space. We can't use it, we have workable alternatives, so what's the rush, exactly? Worse, beginning the standards process too soon (say, in the next three or four years) is only likely to suppress the research and development that we need more than anything else. It would probably be better to wait for multiple independent technologies to be developed to the working-prototype stage, and then use the standards process to hammer out a common specification.