Data Centers On Dark Fiber

Connecting data centers is one of the most common uses for dark fiber, and it’s all due to growing data needs. You have probably seen graphs of the Internet’s growth and heard claims about how much data moves along it. It’s not gigabytes or terabytes anymore; today it’s petabytes and exabytes, and it won’t be long before it’s zettabytes or yottabytes (those are some big numbers).


Moving data to a data center closer to users is one tactic for data management that prevents overloading the Internet backbone. For example, someone doing a Google search from Washington, D.C., doesn’t have to connect all the way to Google headquarters in the San Francisco Bay Area. Instead, the user connects to a closer Google data center in North Carolina.


Streaming video services, such as Netflix, are even better examples. Netflix traffic accounts for roughly half of North America’s downstream Internet bandwidth during peak evening hours. If all of that data were stored in a single data center, download traffic would swamp both the data center and the Internet. However, Netflix has built regional data centers and also co-located smaller data centers in Internet service provider (ISP) facilities. Those data centers are filled with the most popular movies, so the ISP can deliver them directly to the customer without using the Internet backbone at all.


Data centers hold a lot of data, that’s for certain. Most of us have 1-terabyte (TB) drives in our computers and perhaps even bigger external drives for backup. A 1TB drive will hold about 200 full-length movies, so fifty of those drives will hold 10,000 movies, probably adequate for storing the most popular titles at most local ISPs.
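If you want to check that math, here is a minimal sketch in Python; the 5 gigabytes-per-movie figure is an assumption that lines up with the 200-movies-per-terabyte estimate above.

```python
# Back-of-the-envelope storage math. Assumes roughly 5 GB per full-length
# HD movie, consistent with "about 200 movies per 1 TB drive" above.
GB_PER_MOVIE = 5
DRIVE_TB = 1
DRIVES = 50

movies_per_drive = (DRIVE_TB * 1000) // GB_PER_MOVIE   # about 200 movies per drive
total_movies = movies_per_drive * DRIVES                # about 10,000 movies

print(f"One {DRIVE_TB} TB drive holds about {movies_per_drive} movies")
print(f"{DRIVES} such drives hold about {total_movies:,} movies")
```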


This network methodology means the Internet backbone is only needed to “mirror” data centers, so they all have the same data available for delivery through ISPs. In fact, it’s smarter than that; data analysis identifies regional preferences, allowing timely updates and planning for massive demand during big sporting or entertainment events. Once you start thinking about this, the term “big data” makes more sense.


All of this requires “big pipes,” reminding us of the late Sen. Ted Stevens’ infamous description of the Internet as a “series of tubes.” A single house can consume 10–20 megabits per second (Mbps) on average if there are two or three video streams being viewed at once. Unlike web browsing or searches, which are intermittent users of bandwidth, video requires a continuous stream of data at speeds of greater than 5 Mbps for a high-definition TV signal, according to Netflix.


Some simple math shows that 200 houses each streaming one video at a time would require more than 1 Gbps, and 2,000 homes would require more than 10 Gbps. You can estimate the statistical usage of all of that bandwidth, but when everybody is streaming video at once, you had better have enough bandwidth available.
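That simple math can be written down as a quick Python sketch; the function name and the worst-case simultaneous-viewing assumption are mine, not an industry formula.

```python
# Rough bandwidth budget for an ISP serving streaming video.
# Assumes 5 Mbps per HD stream (the Netflix figure cited above) and
# worst-case simultaneous viewing; real-world statistical usage is lower.
MBPS_PER_HD_STREAM = 5

def required_gbps(homes, streams_per_home=1):
    """Peak bandwidth needed if every home streams at once, in Gbps."""
    return homes * streams_per_home * MBPS_PER_HD_STREAM / 1000

print(required_gbps(200))      # 1.0 Gbps for 200 homes, one stream each
print(required_gbps(2000))     # 10.0 Gbps for 2,000 homes
print(required_gbps(200, 3))   # 3.0 Gbps if each of 200 homes runs 3 streams
```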


If you are a reasonably sized ISP, you probably have one or more 10-gigabit connections with carriers. Those connections will be on single-mode fiber and depend on the dark fiber we have been discussing these past few issues. Carriers are continually upgrading their backbones to higher speeds, with 100-gigabit being the norm these days. They also use wavelength-division multiplexing to put multiple 100-gigabit signals on the same pair of fibers, up to 128 in some systems, for a total of almost 13 terabits per second over a fiber pair. Now you can see why those dark fibers need to be tested carefully to see if they can support the needed data links.
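The arithmetic behind that “almost 13 terabits” figure is simple multiplication; here is a quick sketch using the numbers above.

```python
# Aggregate capacity of a wavelength-division multiplexing system on one
# fiber pair, using the figures above (up to 128 wavelengths at 100 Gbps each).
WAVELENGTHS = 128
GBPS_PER_WAVELENGTH = 100

total_gbps = WAVELENGTHS * GBPS_PER_WAVELENGTH
print(f"{total_gbps} Gbps, or about {total_gbps / 1000:.1f} Tbps per fiber pair")
# -> 12800 Gbps, or about 12.8 Tbps
```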


What goes on inside data centers might interest electrical contractors even more. Big data centers have rows of storage arrays, amounting to thousands of drives, that hold the data. But the action in a data center is finding that data and moving it to those requesting it. The “brains” of a data center are the servers that keep track of the data and manage the input/output requests.


Servers connect to switches that allow them to communicate with other servers, the outside world and other switches that connect to the data storage, not much different from a corporate local area network except for the scale. In a typical data center, switches are arranged in tiers: switches that talk to servers on one side and higher-level switches on the other. Those higher-level switches talk to storage-area-network switches that complete the data center network.


Any consideration of those servers and switches leads to one conclusion—data centers have a lot of connections. Most connections are Ethernet, although Fibre Channel has been used in many storage networks. Fibre Channel over Ethernet is becoming popular, making everything just Ethernet. As for cabling, there are standards calling for Cat 6a, coaxial, multimode fiber and single-mode fiber. That’s a good subject to cover next month.

Editor's Note: This is the final installment of a four-part series on dark fiber.

About the Author

Jim Hayes

Fiber Optics Columnist and Contributing Editor

Jim Hayes is a VDV writer and trainer and the president of The Fiber Optic Association. Find him at www.JimHayes.com.
