Imagine what would happen if you were working on a construction project and had to make a measurement. You used your tape measure and got a length based on the markings on the tape. A member of your crew showed up with a laser ranger and made his measurement. Another worker brought over a yardstick, while his buddy just stepped off the length. The tape, yardstick and laser ranger measurements should have been pretty close, but the stepped-off distance was, well, a bit less precise than the others, and would surely be disregarded.
Exactly the same thing goes on in fiber optics every day. Almost any time two people make the same measurement, there will be differences, especially when testing the multimode fiber commonly used in premises cabling. The key is to understand these differences, what causes them, and how to explain them to your customers. That means understanding standard measurement methods, variations in test conditions, and instrument or setup differences.
To begin with, there are two ways to measure the loss of a fiber optic cable plant: insertion loss, using a source and power meter or optical loss test set (OLTS), or backscatter measurement, using an optical time domain reflectometer (OTDR).
In all, four ways are listed in various international standards from the Telecommunications Industry Association (TIA) and International Electrotechnical Commission (IEC) to test installed fiber optic cable plants. Three of them are insertion loss measurements using test sources and power meters (OLTS) to make the measurement, while the fourth uses an OTDR.
The source/meter method measures insertion loss as shown in Figure 1. Insertion loss approximates the way the actual network uses the cable plant, with a transmitter (source) on one end and a receiver (power meter) on the other. One would expect the loss to be similar to the actual loss seen by the network, which is exactly what we need to know. The OTDR, however, is an indirect method (see Figure 2), using backscattered light to infer the loss in the cable plant, which can deviate substantially from insertion loss tests. Most fiber optic technicians are familiar with both methods but may not understand why they give different measured losses.
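Since the meter reads optical power in dBm, the insertion loss is simply the reference reading minus the reading taken through the cable under test. A minimal sketch, using hypothetical meter readings:

```python
# Insertion loss from source/meter readings (values are hypothetical).
p_reference_dbm = -20.0   # meter reading when setting the 0 dB reference
p_received_dbm = -21.6    # meter reading through the cable under test

# Because dBm is already logarithmic, loss in dB is a simple subtraction.
insertion_loss_db = p_reference_dbm - p_received_dbm
print(f"Insertion loss: {insertion_loss_db:.1f} dB")  # 1.6 dB
```

The same subtraction applies regardless of which reference method was used; what changes is how much loss was already "hidden" inside the reference reading.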
To further confuse the issue, there are three methods of measuring insertion loss. The difference among the three methods lies in how the reference for 0 dB, or no loss, is defined. All three tests end up with the same test setup shown in Figure 1, but the reference power can be set with one, two or three reference cables, as detailed in the TIA standard OFSTP-14. As discussed in my column on reference cables in the July issue of Electrical Contractor, the one-cable method is required by U.S. standards (TIA-568-B.3), while the three-cable method is used by international standards (ISO 11801).
The essential difference among the methods is how many connections are included when setting the reference. The one-cable method includes none, as the power meter is connected directly to the output connector of the reference cable. The two-cable method includes one connection, between the launch and receive cables, while the three-cable method includes two connections, one on either end of the third reference cable, which is then replaced by the cable under test. As one would expect, the measured losses differ, each reduced by the loss of the connection(s) included in the reference measurement.
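The effect of the reference method can be sketched with a little arithmetic. The fiber and connector losses below are hypothetical; the point is only that every connection included when setting the 0 dB reference is subtracted from the reported loss:

```python
# Sketch of how the reference method affects reported loss.
# Hypothetical values: 0.5 dB of fiber attenuation and 0.3 dB per
# mated connector pair at each end of the cable under test.
fiber_loss_db = 0.5
connector_loss_db = 0.3  # loss per mated connection (assumed)

# Loss the network actually sees: fiber plus both end connections.
true_insertion_loss = fiber_loss_db + 2 * connector_loss_db

# Each connection already included in the 0 dB reference is
# effectively subtracted from the reading.
connections_in_reference = {"one-cable": 0, "two-cable": 1, "three-cable": 2}

for method, n in connections_in_reference.items():
    reported = true_insertion_loss - n * connector_loss_db
    print(f"{method}: {reported:.1f} dB")
```

With these assumed numbers, the one-cable reference reports 1.1 dB, the two-cable 0.8 dB, and the three-cable 0.5 dB, matching the pattern described above: more connections in the reference means lower reported loss.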
The OTDR uses a completely different method, sending a high-powered pulse down the cable and analyzing the light the fiber scatters back to the instrument. The OTDR can create a very useful picture of the cable plant, pinpointing features like connectors and splices and measuring the location of each. But the way it infers loss is very different from the direct insertion loss test, and the two deviate in an unpredictable fashion. Over the last 20 years, I have been personally involved in many tests and discussions of how to correlate OTDR and OLTS measurements, with no success.
Just how much does the loss of a cable plant change with the different methods? Above is data from a simulated 520-meter multimode cable plant tested all four ways at 850 nanometers, using the same meter and light emitting diode (LED) source but several different pairs of launch cables to check the reproducibility of the results.
Note how the loss of our test cable plant reflects the differences we described above. The one cable reference method has higher loss than the other methods, but it also has much lower measurement uncertainty. The two and three cable reference methods have less loss, because we have subtracted the connector loss included when we set the reference for 0 dB loss. The uncertainty is higher because of the greater variance when setting the loss reference.
The OTDR measurement is always lower than the three insertion loss methods. First, it does not include the loss of the connectors on both ends of the cable, only the end connected to the OTDR launch cable. Adding a cable on the far end would test the other connector, but it would negate the OTDR's advantage of testing from only one end of the cable.
The majority of the difference is caused by the backscatter measurement technique and the fact that an OTDR uses a laser source, an issue we will discuss shortly. Backscatter is not always constant along the length of the fiber or between two different fibers, causing OTDR measurements to vary, especially at connections between two different fibers. One can reduce backscatter errors by testing in both directions and averaging, but few techs ever take the time to do that.
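The bidirectional averaging mentioned above is simple arithmetic: at a joint between fibers with different backscatter coefficients, the OTDR over-reads the loss in one direction and under-reads it in the other, and the average of the two readings is close to the true loss. A sketch with hypothetical readings:

```python
# Averaging bidirectional OTDR readings to cancel backscatter error.
# Readings are hypothetical; note that a joint can even show apparent
# gain (a "gainer") in one direction due to backscatter differences.
loss_a_to_b_db = 0.35   # OTDR reading testing from end A (assumed)
loss_b_to_a_db = -0.05  # OTDR reading testing from end B (assumed)

# The backscatter error has opposite sign in each direction,
# so averaging the two readings cancels it.
averaged_loss_db = (loss_a_to_b_db + loss_b_to_a_db) / 2
print(f"Averaged loss: {averaged_loss_db:.2f} dB")  # 0.15 dB
```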
The measured value can also differ due to differences in the reference cables or test sources. Each time you make a measurement, you are including the loss of the connectors on the cable under test mated to connectors on the reference cables. Even if the reference cables are high quality and thoroughly cleaned before each use, different reference cables are highly likely to give different loss readings; differences as large as 0.2 dB are not uncommon.
The difference between a laser and an LED source is caused by the way light is launched into the fiber. A laser has a tight beam, while an LED has a wide one. The figure shows that the light from the LED fills more of the core of a multimode fiber. The light traveling near the outside of the core follows a longer path in the glass, suffers more attenuation in the fiber, and causes higher loss in connectors and splices. Therefore, a measurement with an LED source will show significantly more loss than one with a laser. Even two LEDs with different output power patterns can show significant differences in measured loss, so most standards call for a source with specific launch characteristics.
We mentioned the differences caused by source power output, but other measurement errors can cause problems. Sources may require significant warm-up time or not be stable over the time it takes to make a measurement. It is easy to test a source. Just connect it to a meter with a reference cable and turn everything on. Watch the power level change over time. If the source takes several minutes to stabilize, put a label on it warning the operator to allow it to warm up before testing. If it never stabilizes (to about 0.1 dB maximum variation), perhaps it needs to be returned to the manufacturer for repair. If you have to use a source that is not stable, recheck your zero reference often to reduce the magnitude of the problem.
The most important issue in fiber optic testing is to have agreement between installer and user as to how measurements are to be made. Remember that all standards require insertion loss testing-not OTDR testing-for installation acceptance. Just make certain that everyone agrees how that test is to be done before it becomes a problem. EC
HAYES is a VDV writer and trainer and the president of The Fiber Optic Association. Find him at www.JimHayes.com.