When someone asks what the best power quality meter is, the answer depends on who is responding and the basis for their response. It is similar to asking for the best baseball player, musician or dessert. So many factors go into the answer,
starting with what constitutes a power quality meter, monitor or instrument.
Market research groups put the global PQ meter market at around $1 billion, growing at 5%–7% annually. However, drilling down into the reports shows this figure includes smart meters, electric utility revenue meters with PQ capabilities and many other instrument types.
Companies such as AEMC, Dranetz, Electro Industries, Elspec, Extech, Fluke, Megger, SATEC, Schneider Electric and many more are likely to show up in most market reports and search engines.
For the purpose of this article, we will confine ourselves to the type of instrument whose primary purpose is to measure or monitor the quality of the electrical supply as it affects the equipment powered by it, for either troubleshooting or continual performance measurements.
Further categorizations help narrow down what we will examine in more detail. Handheld, single-phase meters such as digital volt meters or clamp-on power meters make up approximately a third of the market. Like smart meters, those aren’t part of this discussion, though they are useful tools in many power quality applications.
This leaves us with instruments that typically have eight measuring channels (four voltage, four current) that measure the waveforms of the voltages and currents, compute power quality parameters and categorize changes as sags, swells, harmonics, transients, etc. Nearly all these instruments have communication capabilities to transfer data and information to software programs for further analysis and reporting. We will do a comparison of these.
The specifications in the marketing materials for this set of instruments usually have information on the following: sampling rate, harmonics to the nnnth, transients, zillions of parameters per cycle, triggered/not triggered capture and—my favorite—standards: designed to/conforms with/certified to (by whom). What each category means to the end-user, and how to separate useful specifications from marketing wizardry, is not easy for those who don’t actually design these instruments.
Sampling rate
The voltage and current waveforms need to be converted from real-world analog values to digital values for the instrument’s processors to do their magic. The Nyquist sampling theorem, a mathematical construct, says the sampling rate must be at least twice the highest frequency of interest in the waveform.
For a 60-hertz (Hz) waveform, sampling at 120 Hz would only work if the waveform has no components other than 60 Hz. That is a very unlikely scenario with today’s electronic loads. Typically, waveforms are sampled at 128/256/512 samples per cycle, which equates to 7.7 kilohertz (kHz), 15.4 kHz or 30.7 kHz at 60 Hz. This allows instruments with proper anti-aliasing filters to see up to the 63rd, 127th or 255th harmonic, respectively, in a 60 Hz system. Be careful to check whether the sampling rate is stated per channel or aggregated across all channels. If the latter, the real detection bandwidth is eight times lower than implied.
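To make the arithmetic concrete, here is a minimal sketch in Python (using the common samples-per-cycle figures cited above) of how samples per cycle translate into sampling rate and the highest theoretically resolvable harmonic:

```python
# Samples per cycle -> effective sampling rate and Nyquist-limited harmonic
FUNDAMENTAL_HZ = 60  # nominal power frequency

for samples_per_cycle in (128, 256, 512):
    fs_hz = samples_per_cycle * FUNDAMENTAL_HZ   # effective sampling rate
    nyquist_harmonic = samples_per_cycle // 2    # Nyquist limit, in harmonic numbers
    print(f"{samples_per_cycle} samples/cycle = {fs_hz / 1000:.1f} kHz, "
          f"usable harmonics up to No. {nyquist_harmonic - 1}")
```

If the quoted rate turns out to be aggregated across all eight channels, divide by eight before doing this math.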
Harmonics to the nnnth
There were several decades in PQ instrument development when the 50th harmonic was more than enough bandwidth to accurately characterize the harmonic spectrum in the voltage and current. Typical systems had significant 3rd, 5th, 7th, 9th, 11th and 13th harmonic content. Most data collected showed very few nonzero values above the 25th harmonic.
Then power converters began switching at higher frequencies. Signals started showing up above the 50th harmonic, a common upper limit specified in many harmonic standards. For harmonics up to the 9 kHz bandwidth, make sure the instrument measures according to the IEEE 519 or IEC 61000-4-7 standards and is certified as such.
Without such certification, measurements may be highly suspect. For example, without the proper sampling rate and anti-aliasing filters, signals above the Nyquist frequency can “fold back” and show up as lower harmonic values instead. A signal at the 231st harmonic (about 13.9 kHz on a 60 Hz system) in an instrument sampling 256 samples per cycle with an inadequate filter will show up as the 25th harmonic instead.
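A minimal NumPy sketch (illustrative numbers, not any particular instrument) demonstrates the fold-back: sampled at 256 samples per cycle with no anti-aliasing filter, a 231st harmonic is numerically indistinguishable from a 25th:

```python
import numpy as np

SAMPLES_PER_CYCLE = 256               # Nyquist limit = 128th harmonic
n = np.arange(SAMPLES_PER_CYCLE)      # sample indices over one fundamental cycle

# A real signal at the 231st harmonic, well above the Nyquist limit
x = np.sin(2 * np.pi * 231 * n / SAMPLES_PER_CYCLE)

# With no analog anti-aliasing filter ahead of the converter, the
# spectrum reports the energy at bin 256 - 231 = 25
spectrum = np.abs(np.fft.rfft(x))
print(spectrum.argmax())              # -> 25
```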
There is a new class of harmonics, called supraharmonics, covering the 9–150 kHz range and resulting from very high-speed converters. They aren’t that common yet, and the standards are still working out how to measure them consistently and accurately.
The data gathered through sampling and the harmonic spectrum is used to calculate the rms values that determine and characterize sags, swells, interruptions and the voltage fluctuations that result in light flicker. A significant amount of mathematical crunching goes into the flicker parameters alone.
Billions and billions of parameters per cycle
It seems that with each revision of the power quality standards, new parameters are defined, though the basic characteristics of PQ phenomena as categorized in IEEE 1159 and IEC 61000-4-30 have remained essentially the same for several decades, except for the addition of rapid voltage change. The effort to distill what a trained human brain can discern from the voltage and current waveforms into numerical values continues to lengthen the list of calculations done by PQ instruments. Some are computed over one cycle, such as the rms of voltage and current, but the calculation typically restarts every half cycle, and in some instruments with every sample.
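As one concrete case, here is a minimal sketch of a one-cycle rms recomputed every half cycle, in the spirit of the Urms(1/2) measurement in IEC 61000-4-30 (the samples-per-cycle figure is assumed):

```python
import numpy as np

def rms_half_cycle(v, samples_per_cycle=256):
    """One-cycle rms values, recomputed every half cycle."""
    half = samples_per_cycle // 2
    starts = range(0, len(v) - samples_per_cycle + 1, half)
    return np.array([np.sqrt(np.mean(v[s:s + samples_per_cycle] ** 2))
                     for s in starts])

# Example: a clean 120 V rms, 60 Hz waveform spanning 10 cycles
t = np.arange(10 * 256)
v = 120 * np.sqrt(2) * np.sin(2 * np.pi * t / 256)
print(rms_half_cycle(v)[:3])  # ~[120. 120. 120.]
```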
Some parameters have changed because the electrical environment made their original formulas inaccurate. For example, power factor computed as the cosine of the angle between voltage and current, along with volt-amperes-reactive (var), assumes a single frequency, so the old power triangle doesn’t work when harmonics are present.
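A short sketch of the difference, using assumed waveforms: the true power factor (real power divided by apparent power) diverges from the displacement power factor (the cosine term) as soon as harmonic current appears:

```python
import numpy as np

SAMPLES = 256
w = 2 * np.pi * np.arange(SAMPLES) / SAMPLES

v = np.sqrt(2) * 120 * np.sin(w)                       # clean 120 V fundamental
i = np.sqrt(2) * (10 * np.sin(w) + 3 * np.sin(5 * w))  # current with a 5th harmonic

p = np.mean(v * i)                                        # real power, watts
s = np.sqrt(np.mean(v ** 2)) * np.sqrt(np.mean(i ** 2))  # apparent power, VA

print(f"true PF = {p / s:.3f}")      # ~0.958, dragged down by the harmonic
print("displacement PF = 1.000")     # fundamentals are exactly in phase here
```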
To comply with the flicker standards, a half-dozen parameters are required to be displayable, not just Pst and Plt. Unbalance (or imbalance) has two different calculation methods, one of which involves sequence components, which few nonutility engineers understand. All these parameters take up processing power and memory, as well as programming time if high/low limits are to be set for them. More isn’t better if you don’t understand or need them.
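For the curious, here is a sketch of both unbalance calculations on assumed voltage phasors: the magnitude-only method used in NEMA-style definitions versus the sequence-component ratio used in the IEC method:

```python
import numpy as np

# Assumed phase voltage phasors (magnitude in volts, angle in degrees)
va = 277 * np.exp(1j * np.deg2rad(0))
vb = 274 * np.exp(1j * np.deg2rad(-121))
vc = 282 * np.exp(1j * np.deg2rad(120))

# Magnitude method: maximum deviation from the average magnitude
mags = np.abs([va, vb, vc])
nema = np.max(np.abs(mags - mags.mean())) / mags.mean()

# Sequence-component method: |negative sequence| / |positive sequence|
a = np.exp(2j * np.pi / 3)               # the 120-degree rotation operator
v_pos = (va + a * vb + a ** 2 * vc) / 3  # positive-sequence component
v_neg = (va + a ** 2 * vb + a * vc) / 3  # negative-sequence component
iec = np.abs(v_neg) / np.abs(v_pos)

print(f"magnitude method: {nema:.2%}, sequence method: {iec:.2%}")
```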
Triggered/not triggered capture
The instrument manufacturers have two philosophies about capturing waveforms and rms values for events. Some claim to not need any trigger limits to be set, as “everything is saved.”
Most manufacturers provide setup programs to determine the trigger level and how much data to save from before and after the limit is crossed. For example, the voltage sag limit may be set to 90% of nominal, with 10 cycles of pre-event waveforms, 30 pre-event rms values, 60 cycles of post-event waveforms and 120 post-event rms values saved when the limit is crossed in either direction. The pros and cons of each approach can be read in the manufacturers’ literature.
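In code form, a minimal sketch of that triggered capture on a stream of half-cycle rms values (waveform buffers are omitted for brevity; the names and counts are assumptions mirroring the example above):

```python
from collections import deque
from itertools import islice

NOMINAL = 120.0
SAG_LIMIT = 0.90 * NOMINAL      # trigger when rms drops below 90% of nominal
PRE_RMS, POST_RMS = 30, 120     # rms values kept before and after the trigger

def monitor(rms_stream):
    """Yield one event record (pre- plus post-trigger rms values) per sag."""
    it = iter(rms_stream)
    pre = deque(maxlen=PRE_RMS)             # circular pre-event buffer
    for rms in it:
        if rms < SAG_LIMIT:                 # limit crossed: capture and save
            post = [rms, *islice(it, POST_RMS - 1)]
            yield [*pre, *post]
            pre.clear()
        else:
            pre.append(rms)
```

The deque acts like the instrument’s circular memory: once full, the oldest pre-event value is silently discarded, which is exactly the overwriting behavior described next.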
Just remember that no memory is infinite. If you conduct a two-month monitoring program during the summer thunderstorm season, you may not end up with all the detail you want, as the data is either compressed or overwritten.
Transients
A transient is clearly defined in the power quality standards, and there are sufficient characterizations that a complex waveform can be described in just a few words, such as a power factor capacitor oscillatory transient. However, some marketing literature disregards the standards and calls anything transitory, such as a sag, a transient. The key is to look for the sampling rate specified for transient capture. Instruments that capture according to the standards will usually have a second set of hardware that samples at a much higher rate (1 megahertz [MHz] or more) than the sampling engine used to determine the rms and harmonic values.
Since this produces lots of data in a cycle, these engines will save a much shorter duration of data. That isn’t a bad thing, as a very short event (such as a 10-microsecond impulsive transient) doesn’t need a dozen cycles of data to understand what happened.
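Back-of-the-envelope arithmetic (all figures assumed) shows why:

```python
FS = 1_000_000          # assumed transient-engine sampling rate: 1 MS/s
BYTES_PER_SAMPLE = 2    # assumed 16-bit converter
CHANNELS = 8

cycle_samples = FS // 60                      # ~16,666 samples per 60 Hz cycle
cycle_bytes = cycle_samples * BYTES_PER_SAMPLE * CHANNELS
print(f"{cycle_bytes / 1024:.0f} KiB per cycle across 8 channels")   # ~260 KiB

impulse_samples = int(10e-6 * FS)             # a 10-microsecond impulse
print(f"{impulse_samples} samples cover the impulse itself")         # just 10
```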
Standards: Designed to/conforms with/certified to
My personal favorite is the trickery of specmanship. Standards-making committees strive to produce documents so that two different instruments, subjected to the same voltage and current signals, would produce the same data, information and answers to the specified or required accuracy.
Initially, the standards fell short because there was too much interpretation possible by the manufacturer, test lab and user. Some of the definition standards now have companion standards that describe the tests required to prove compliance is met.
If you really want indisputable results, such as in legal cases, the instrument must be certified by an independent laboratory that is itself accredited to perform the required tests.
While there are many more differentiating features of PQ instruments, these half dozen are a good starting point for comparing instruments and deciphering the what, when, why and how of specmanship.
Header Image: Dranetz Technologies
About The Author
BINGHAM, a contributing editor for power quality, can be reached at 908.499.5321.