Last month, we discussed how long-distance dark fiber is used to connect data centers. Everywhere you look, you read about new data centers being built by Google or Facebook or Amazon or some other company with a strange, unpronounceable name. It’s just an indication of how fast Internet usage is increasing.


What is happening inside these data centers may interest contractors because it involves thousands of cables. Data centers have servers, which are the computers that process requests for data from the Internet; storage, where the data is kept; and switches, which connect the servers and the storage. All of those components must be tied together by a cabling infrastructure.


The five basic types of data center cabling are unshielded twisted-pair (UTP) copper Cat 6a (and maybe someday the proposed Cat 8); coaxial; active optical cables, which are multimode fiber with permanently attached transceivers; multimode fiber; and single-mode fiber.
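To put those choices in perspective, here is a small sketch (in Python) pairing each medium with the approximate reach commonly cited for 10G links. The figures are ballpark values for comparison only; actual limits depend on the specific standard and cable grade.

```python
# Approximate reach commonly cited for 10G links over each medium.
# These are ballpark comparison figures, not guaranteed link specifications.

typical_10g_reach_m = {
    "Cat 6a UTP (10GBASE-T)": 100,
    "Coax/twinax direct-attach cable": 7,
    "Active optical cable (sold in fixed lengths)": 100,
    "Multimode fiber (OM3, 10GBASE-SR)": 300,
    "Single-mode fiber (10GBASE-LR)": 10_000,
}

for medium, reach in sorted(typical_10g_reach_m.items(), key=lambda kv: kv[1]):
    print(f"{medium:45s} ~{reach:,} m")
```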


Speed is the top concern for connections in a data center. Power usage and cooling are secondary concerns. Cost? Not so much; just get the job done.


Mostly because of the market size, there have been many articles, white papers, application notes, web pages and webinars about data center cabling. Standards have even been written. I must have read hundreds of papers recently for background information as I developed the Fiber Optic Association (FOA) Data Center Certification and training curriculum. But when I talked to people who were designing, installing or training workers for data centers, I found that many of the papers I read did not correspond to what was actually being built.


If you read all of those articles and standards, you would think that data centers are built like the local area networks (LANs) of a decade ago using standard structured cabling architectures. In reality, data center architecture and cabling are very specialized and optimized for speed and low power consumption.


Let’s start with architecture. Data centers are built with an architecture that simplifies switch/server/storage connections and provides the fastest switching and data transfer possible. Data center architects talk about “fat tree” or “spine-leaf” architecture. Equipment is arranged in top-of-rack (TOR) or end-of-row (EOR) configurations.
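To make the spine-leaf idea concrete, here is a minimal sketch (in Python, with made-up rack and port counts) of how the two classes of cabling scale: every leaf (TOR) switch connects to every spine switch, so fabric links grow as leaves times spines, while server links stay inside each rack.

```python
# Minimal sketch of cable counts in a two-tier spine-leaf (fat tree) fabric.
# All rack and port counts below are illustrative assumptions, not a real design.

def spine_leaf_cables(leaf_switches: int, spine_switches: int, servers_per_leaf: int):
    """Return (server-to-leaf links, leaf-to-spine links) for a full-mesh fabric."""
    server_links = leaf_switches * servers_per_leaf   # short in-rack cables in a TOR layout
    fabric_links = leaf_switches * spine_switches     # every leaf uplinks to every spine
    return server_links, fabric_links

if __name__ == "__main__":
    # Example: 20 racks (one leaf switch each), 4 spines, 40 servers per rack -- assumed numbers
    server_links, fabric_links = spine_leaf_cables(20, 4, 40)
    print(f"Server-to-leaf cables: {server_links}")   # 800 short in-rack cables
    print(f"Leaf-to-spine cables:  {fabric_links}")   # 80 longer fabric runs
```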


A TOR system puts the servers in a rack with a switch on top of the rack. Inside the rack, the servers connect to the switch over very short cables, no longer than the height of the rack. Those cables will usually be Cat 6a UTP, CX coaxial or duplex multimode fiber for 10-gigabit (10G) links. As the connections migrate to 40G and 100G, things get more complex, as I will discuss later. Each TOR rack connects to another layer of switches, and similar connections link those switches to storage units in storage area networks (SANs). These higher-level connections usually seem to be single-mode fiber, chosen for its higher speeds and longer distances.


An EOR system puts all of the servers in racks, each connected to a patch panel at the top of the rack. That patch panel connects to a rack at the end of the row that holds nothing but switches, also connected to a patch panel at the top of that rack. Masses of cables, one for each server/switch connection, run in cable trays above the racks to connect each server rack to the switch rack at the end of the row.
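A quick back-of-the-envelope comparison shows why those trays fill up. The numbers below are assumptions for illustration; the point is that EOR puts every server/switch link in the tray, while TOR leaves most of the cabling inside the rack.

```python
# Rough comparison of overhead-tray cable counts per row for TOR vs. EOR.
# Rack, server and uplink counts are illustrative assumptions only.

servers_per_rack = 40
racks_per_row = 10
uplinks_per_tor_switch = 4   # assumed uplinks from each TOR switch to the next switch layer

# TOR: server cables stay inside the rack; only the switch uplinks leave it.
tor_tray_cables = racks_per_row * uplinks_per_tor_switch

# EOR: every server/switch cable runs through the tray to the end-of-row switch rack.
eor_tray_cables = racks_per_row * servers_per_rack

print(f"TOR tray cables per row: {tor_tray_cables}")   # 40
print(f"EOR tray cables per row: {eor_tray_cables}")   # 400
```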


EOR architecture usually allows the cables to be Cat 6a for speeds up to 10G, but the cable trays above the rows are filled with masses of cables; fiber takes up far less space. Cat 6a transceivers also consume much more power than fiber connections. Above 10G, this architecture needs fiber.


Data centers looking at 40G or 100G (practically all of them) know fiber is the only solution. Most articles about data centers in the fiber and cabling business have focused on multimode fiber using parallel optics and array connectors. That approach uses masses of multimode fibers, a dozen for 40G and two dozen for 100G, with many of them left unused, and it seems to have met considerable user resistance. The only possible salvation for multimode fiber is a Cisco-led development of wavelength-division multiplexing that would allow running up to 100G over just two multimode fibers, but only if it catches on.
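To see where that resistance comes from, here is a quick tally of how parallel optics uses fibers, based on the common SR4 and SR10 lane arrangements; treat the exact counts as typical rather than universal.

```python
# Fiber usage per link for common multimode parallel-optics schemes.
# Lane counts follow the usual 40G SR4 and 100G SR10 arrangements; treat as typical values.

links = {
    # name: (fibers in the MPO array cable, transmit lanes, receive lanes)
    "40G parallel (SR4-style, 12-fiber MPO)":  (12, 4, 4),
    "100G parallel (SR10-style, 24-fiber MPO)": (24, 10, 10),
}

for name, (cable_fibers, tx, rx) in links.items():
    lit = tx + rx
    print(f"{name}: {lit} fibers lit, {cable_fibers - lit} dark out of {cable_fibers}")
```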


Data center users have mostly gone straight to single-mode fiber, knowing that the slightly higher immediate cost means virtually infinite bandwidth for future upgrades. Contractors installing this cabling are using traditional termination techniques such as fused-on pigtails or the new fusion-spliced prepolished/splice connectors to get the best possible performance for the single-mode cabling.


The FOA’s online self-study website, Fiber U (fiberu.org), has a free data center cabling self-study program that expands on this topic. For the online version, we also have a video on data centers available at the FOA’s YouTube channel.