Recently, I heard a report on National Public Radio about a crisis in office computer networks. Employees were watching the NCAA “March Madness” basketball games online, hogging bandwidth and slowing corporate networks to a crawl. This is an example of why network bandwidth, and eventually
the cable plant itself, is so important.

Networks, usually called local area networks (LANs), are shared resources. If an office has hundreds of workers connected to the network, sending e-mails, attaching files, viewing Web sites and accessing the company databases, all that traffic passes through various parts of the network, shown in the figure [on opposite page]. The typical worker in a cubicle has a PC and perhaps a voice over Internet protocol (VoIP) phone that connect to and communicate over the network using either wired or wireless connections. Desktop computers and VoIP phones generally will be connected by cables, while laptops now generally use wireless connections.

These devices send and receive data over their connections through the network. They are directly connected to hubs or switches that consolidate signals, manage traffic and connect to the network backbone. The network backbone must carry the data from all users to and from the corporate servers and provide a connection to the Internet. In the computer room, the backbone connects to switches that attach to the company data servers and, usually, several mass storage devices. Internet connections are through devices that provide a firewall to reduce the risk of hackers attacking the network or sending in viruses, worms or other “malware” (bad software).

The backbone must carry all the data traffic from every user, so if there are 100 users, the traffic on the backbone will be 100 times higher than the average for any individual user. Thus, the backbone speed should be much higher than the individual connections. Larger numbers of users require faster backbones, and it has always been the backbone bandwidth requirement that has driven network development to higher speeds.
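That scaling is simple enough to sketch in a few lines. The code below is only a back-of-the-envelope illustration; the user counts and per-user rates are assumed numbers, not figures from this article:

```python
# Back-of-the-envelope backbone sizing: aggregate traffic grows
# linearly with the number of users. (Illustrative numbers only.)

def backbone_mbps(users, avg_user_mbps):
    """Aggregate backbone load if every user averages avg_user_mbps."""
    return users * avg_user_mbps

# 100 users averaging 1 Mbps each need 100 Mbps of backbone capacity
# just to keep up with the average -- peaks will run higher.
print(backbone_mbps(100, 1.0))    # 100.0
print(backbone_mbps(1000, 1.0))   # 1000.0 -- gigabit territory
```

The takeaway matches the article's point: add a zero to the user count and you add a zero to the backbone requirement.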

Sending e-mail or accessing customer data on a server does not require much bandwidth. Downloading pages from the Web may require more bandwidth than transferring most internal files. Engineering and graphics work tends to use much more, as those types of files can be very large, as you probably know from downloading and e-mailing pictures from your digital camera.
But individual files, no matter how large, do not load up networks anywhere near as much as streaming video from the Web. The need for high bandwidth is caused by the continual updating of the picture, as many as 15 or 30 times per second. If you have viewed streaming video at home, you know that the viewing quality (noise and jumpiness especially) depends directly on the bandwidth of your broadband connection. Unless your connection speed is 5 megabits per second or more, the video quality, even in small viewing windows, will not be very good. Imagine what happens on a corporate backbone when everyone is downloading video.

But data protocols have other rules that affect traffic. If an error in transmission is detected, the data must be retransmitted. If many errors occur, as can happen when the cabling is bad or wireless interference is encountered, the amount of retransmitted data on the backbone can climb quickly, and the useful data transfer rate will drop accordingly.
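The cost of those retransmissions can be sketched with a simplified model. Assume a fraction of frames arrives corrupted and each errored frame is resent until it gets through; the link capacity carrying useful data then shrinks by that same fraction. The function name and error rates here are illustrative assumptions, not measurements:

```python
# Effective throughput ("goodput") when errored frames must be resent.
# Simplified model: if a fraction error_rate of frames is corrupted and
# each must be retransmitted until received, only (1 - error_rate) of
# the link capacity carries useful data. Ignores ACK overhead/timeouts.

def goodput_mbps(link_mbps, error_rate):
    return link_mbps * (1.0 - error_rate)

print(goodput_mbps(100, 0.0))    # 100.0 -- clean cabling
print(goodput_mbps(100, 0.25))   # 75.0  -- a quarter of frames resent
```

Even a modest error rate from bad cabling eats a visible slice of the backbone, which is why infrastructure quality matters so much for throughput.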

Thus, having a network infrastructure (cabling and wireless access points) that transmits data properly is very important for overall network throughput. Likewise, identifying big users of bandwidth (such as all those people streaming the basketball games) is important. It’s sometimes necessary to limit their network usage in order to keep traffic moving on the network.
Today, all corporate networks are based on Ethernet, developed in 1973 at Xerox’s Palo Alto Research Center when the company was looking for a way to create a “paperless office.” Ethernet originally had a data rate of 2.94 megabits—millions of bits—per second (Mbps). At that time, only coaxial cable was capable of transmitting data this fast. Ethernet grew to 5 Mbps, then 10 Mbps, where its widespread usage began.

The acceptance of Ethernet was helped by the development of transmission over two pairs of special unshielded twisted pair wiring (UTP) similar to phone wire. Cabling standards groups, recognizing the rising importance of fiber optics to telecommunications, later added standards for fiber cabling, initially used for longer backbone links or in adverse electrical environments.
The development of Ethernet to higher speeds was driven mainly by growth in the number of network users. Having a hundred or a thousand users on a network means the backbone must provide 100 or 1,000 times as much bandwidth to prevent backups. Network equipment manufacturers have continually developed Ethernet, multiplying speeds by 10 about every five years. Users adopt the faster speeds on their backbones first, migrating faster connections to the desktop only as PC manufacturers offer faster interfaces free on every PC.

As Ethernet network speeds increased from the original 10 million bits per second (10 Mbps) to 100 million bits per second (100 Mbps) to 1 billion bits (gigabit) per second (1 Gbps), optical fiber was easily able to carry the higher bit rate signals over the longer distances needed for backbones, so fiber became the medium of choice for the backbone. To use copper cable, it was necessary to develop new UTP cables with higher performance, initially Category 5 at 100 Mbps, which became the mainstay of the structured cabling business for almost a decade. In fact, Cat 5 became the universal term for copper cabling for networks and is still the way most people today refer to UTP cabling.

A further bump up to 1 Gbps was again more difficult for copper than for fiber, which could handle the speed with ease at distances even longer than structured cabling standards specify. The new transmission method required all four pairs to carry 250 Mbps each, simultaneously in both directions. That required a whole new set of cabling specifications and a new, higher bandwidth cable called “enhanced Cat 5” (Cat 5e).
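The four-pair arithmetic is worth spelling out, since it explains why gigabit Ethernet needed better cable rather than faster signaling on one pair. This is just the aggregation described above, written as a trivial check:

```python
# Gigabit Ethernet over UTP spreads the data stream across all four
# pairs in the cable, each carrying 250 Mbps (in both directions at
# once), rather than pushing 1,000 Mbps down a single pair.
PAIRS = 4
MBPS_PER_PAIR = 250

print(PAIRS * MBPS_PER_PAIR)   # 1000 Mbps, i.e. 1 Gbps
```

Running every pair at 250 Mbps simultaneously is exactly what pushed crosstalk and bandwidth demands beyond ordinary Cat 5 and led to the Cat 5e specification.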

Fiber manufacturers did research and development of their own, taking advantage of the new, inexpensive VCSEL laser sources used for gigabit Ethernet. Revisiting an earlier multimode design with a 50 micron core, one optimized for lasers back when phone systems still used multimode fiber, manufacturers came up with “laser optimized” 50 micron fibers that could go even longer distances, maintaining fiber’s superiority over copper.

Only a few years later, the Ethernet standards group developed a 10 Gbps version for faster backbones. Again, fiber easily yielded several options for transmitting Ethernet data at this speed, but again the copper industry was up to the challenge. It just ran into some roadblocks imposed by the laws of physics. The copper solution took about three years longer and required a new type of UTP cable (“augmented Cat 6,” or Cat 6a) and some very powerful transceivers with two drawbacks: signal delay (latency) caused by extensive digital signal processing, and very high power consumption.

I would never dare to say these obstacles to copper use at 10 Gbps cannot be overcome. The copper types are highly motivated to never let fiber beat them in the marketplace. But who needs 10 Gbps on copper anyway? Since most of the backbone cabling has already gone to fiber, where do the copper manufacturers expect to find a market for the products they have spent so much money to develop?

Nobody believes 10 Gbps is needed at the desktop. Even if companies allowed employees to stream basketball games to their computers, that link needs only 10 Mbps Ethernet. Estimates of the home broadband needed to handle two high-definition video channels, music, Internet surfing and phones come to less than 40 Mbps, well below the 100 Mbps or 1 Gbps already available.
For now, 10 Gbps is limited to backbones and the computer room, where connections to servers at that speed will reduce latency. But the latency inherent in 10 Gbps copper transceivers is itself a problem, as are their power consumption and the heat they generate. Some large data storage users have already standardized on an obscure 10 Gbps parallel coax cable link for short connections and on fiber for all others.

Some long-time advocates of optical fiber have supported using fiber to the desktop. But end-users, seeing the free UTP Ethernet port on their computers, balk at the cost of a fiber-to-copper media converter and want to connect to a matching copper port in the wall. Still, fiber to the desk has found a few converts.

But while fiber has not proven a viable competitor to copper for the desktop connection, something has. Users now want mobility and have turned to wireless fidelity (Wi-Fi) to get it. They want to carry their computer to meetings, home and wherever they travel. Mobility appeals to users, except those who have big data needs, such as engineering and graphics types. Current Wi-Fi standards provide more than enough bandwidth for most users. And Wi-Fi provided the best incentive for widespread adoption: It was free in the laptop the user purchased and free at most locations where they wanted to use it.

Is this the end of UTP cabling? I doubt it, as half the corporate PC purchases are still desktops, although laptop usage is growing. Will there be a copper version of the next generations of Ethernet at 40 or 100 Gbps? It’s highly likely, because copper advocates will refuse to concede the technology. But if you currently design and install corporate networks, you are probably already seeing a change in cabling architecture—fewer UTP desktop connections and more Wi-Fi access points all carried over a fiber optic backbone.

HAYES is a VDV writer and educator and the president of The Fiber Optic Association. Find him at www.jimhayes.com.