Back in the early 1980s, when I first began working on standards committees for fiber optics, the work was done under the auspices of the Electronic Industries Association (EIA), before it was turned over to the Telecommunications Industry Association (TIA). The participants were the expected big players such as AT&T and Corning, many other fiber optic manufacturers, and some government and military representatives.
Practically every discussion about writing a new standard began with the same arguments—technology was moving so fast that standards couldn’t keep up, and writing standards at that time would stifle development. Compromise was difficult, and some standards took many years to write. Many of them still exist in virtually the same form today.
Now we don’t see those arguments as much because fiber optic technology is fairly stable, so committees focus more on applications. Thus, we have the TIA committees writing standards for public buildings, hospitals, educational facilities and data centers.
The problem is that these are cabling standards for networking applications. TIA committees have a history of writing cabling standards that have not been particularly useful in the real world. For example, Cat 6 was written to create a cable for cheaper gigabit (Gb) Ethernet, but the Ethernet vendors never paid any attention to it because they knew the secret to making electronics cheap was simply high volume. Cat 6 became an orphan cable.
Cabling application standards also tend to lag behind the networking applications themselves. Wireless became the de facto standard for premises communication while copper and fiber cabling were off battling for control of desktop connections. Today, standardized copper or fiber cabling is mainly used to connect Wi-Fi wireless access points and cellular distributed antenna systems inside the building.
No application moves as fast as data centers. Astronomical internet use has relentlessly driven expansion of the communications backbone and the speed and capacity of data centers. Even the term “data center” has become confusing, being applied to small computer rooms for enterprise networks as well as the hyper-scale data centers operated by companies such as Alphabet, Amazon, Facebook, Microsoft, Apple, Equinix and others.
Hyper-scale data centers can have more than 100,000 switched links. Growth in data usage can require doubling capacity every 18–24 months, remarkably similar to Moore’s Law and its prediction that the capacity of microprocessor chips doubles about every two years.
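To put that in perspective, doubling every 18 months compounds to roughly a tenfold increase in capacity over five years.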
Hyper-scale data centers are driving technology and standards. Several years ago, these data center operators realized the architecture of the servers and switches they were using was not optimal and that they could drop an entire layer of switching, reducing complexity, power consumption and cost. Facebook took this a step further, designing its own switches and servers. It also posted the designs online, inviting others to use or improve on them.
Facebook ended up starting the Open Compute Project (OCP) as a consortium to design and use open-source products. All of the hyper-scale data center companies joined it. Left behind was the IEEE, the standards group for Ethernet, Wi-Fi and most other networking hardware, where standards development takes years. Since the OCP was aimed at hyper-scale data centers, Microsoft and its newly acquired LinkedIn started a similar consortium for smaller-scale data centers.
All of this is having a major effect on cabling. The hyper-scale data centers are focused on 100-Gb networks today and 200/400/1,000-Gb as soon as possible. The others will follow as soon as they can. The only practical way to achieve such speeds is to migrate to single-mode fiber with wavelength-division multiplexing as used in telecom long-distance links. Single-mode fiber is the only solution that allows upgrades in speeds without replacing the cabling.
The OCP also convinced single-mode transceiver vendors to offer data center products at a fraction of the cost of typical telecom products, making single-mode an affordable choice.
Meanwhile, cabling standards committees are still focused on multimode fiber for data centers. Multimode parallel optics requires four to 10 times the number of fibers to reach the same link speeds. It also needs multifiber MPO connectors that are only available factory-assembled. The new wideband multimode fiber—OM5—allows wavelength-division multiplexing over two fibers, but recent inquiries indicate wideband transceivers are neither readily available nor cheaper than single-mode.
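Consider, for example, a 100-Gb link: multimode parallel optics (100GBASE-SR4) uses eight fibers, four transmitting and four receiving, while a single-mode transceiver using wavelength-division multiplexing, such as 100G CWDM4, carries the same traffic over an ordinary two-fiber duplex connection.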
All of this reinforces what I learned long ago as a supplier of fiber optic test equipment. It’s vital to understand the needs of end-users. A single organization such as Facebook can change a market overnight while a standards committee spends years arguing over details of last-generation applications.
About The Author
HAYES is a VDV writer and educator and the president of the Fiber Optic Association. Find him at www.JimHayes.com.