The commercial construction market, in general, remains anemic, with one exception: data centers. Not only are we all buying more data-transmitting smartphones, tablets, web-connected televisions (and, yes, even PCs), we also are moving data from our own hard drives to remote “cloud” servers. Data may seem like a more virtual than physical commodity, but all those ones and zeroes need to be stored somewhere. As a new generation of larger, faster data centers comes online, energy efficiency is becoming a critical design goal.
At their largest, these new facilities are among the biggest U.S. construction projects currently underway. In fact, the National Security Agency’s $1.1 billion Utah Data Center is the most expensive building project now in planning, according to McGraw-Hill Dodge Analytics. Building 2 of Facebook’s Prineville, Ore., center ranks fourth on that list, with a $200 million price tag.
Data center owners also are packing more computing power per square foot into these facilities. Where the load of an individual server rack might have maxed out at 4 kilowatts (kW) a few years ago, owners now might be able to squeeze slimmer blade servers totaling 9 kW or more into the same space.
“The power-density requirements are much greater,” said Dan Cohee, vice president at Santa Fe Springs, Calif.-based PDE Total Energy Solutions, an electrical contracting firm with a specialty in data center projects. He’s seeing extensive three-phase power requirements in the same footprint that drew just one-third the power 10 years ago.
So, why, you might ask, do we need all these new data centers, especially when those servers are progressively able to do more in less space?
It’s a good question. Server performance, in fact, seems to be following the timeline established in 1965 by Intel co-founder Gordon Moore, who postulated that the computing power of individual integrated circuits would double every two years. This prediction has since become a rule of thumb, known as “Moore’s Law,” for business planners and tech enthusiasts around the world.
The fact that more efficient data processing has increased demand for data centers rather than decreased it can be explained by a second hypothesis put forth a century before Moore made his prophetic statement. In 1865, British economist William Jevons noted that technological progress that improved the efficiency with which a resource is used tends to increase, rather than decrease, demand for that resource.
“In an elastic economy, being more efficient does not mean we’ll consume less,” said Mark Monroe, executive director of The Green Grid, an industry group focused on boosting the efficiency of data center operations. In fact, he said, as data gathering, processing and storage have become less expensive, our appetite for more is only growing stronger. “The demand is increasing, in some cases, faster than Moore’s Law.”
And today’s drive for even more efficient facilities could push that demand higher, if what’s become known as Jevons’ Paradox continues to hold true. Improving the energy efficiency of watt-hogging data centers has become a key operational goal for owners. For businesses operating their own data centers, higher efficiency means lower overhead. For co-location companies that specialize in leasing server space to others, every kilowatt-hour saved means a slightly higher profit margin in what is becoming a highly competitive market. As a result, developers and designers are looking at new ways to run both the servers that store and process data and the cooling equipment needed to keep the servers online and functional.
The Green Grid, with member companies that include leading computer and building-technology manufacturers, along with data center owners and developers, has developed metrics to help owners measure their facilities’ performance. Power usage effectiveness (PUE) is a ratio calculated by dividing a facility’s total power usage by the power used only by computing devices. An ideal result would be 1.0—that is, all the power used by the facility would be going to energize the data center’s servers. Respondents to a January 2012 survey of $1 billion-plus corporations by Digital Realty Trust, a leading data center developer, reported an average PUE of 2.8, meaning there’s plenty of room for improvement.
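As a back-of-the-envelope sketch of that metric, the following Python snippet shows how PUE is computed; the load figures are hypothetical, chosen simply to land on the survey’s 2.8 average:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a facility drawing 1,400 kW overall,
# of which servers and other computing gear account for 500 kW.
print(pue(1400, 500))  # 2.8 -- matches the survey average cited above
```

A PUE of 2.8 means that for every watt reaching a server, another 1.8 W goes to cooling, power distribution and other overhead.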
For Green Grid proponents, designing more efficient data centers means taking a holistic approach.
“We try to treat the data center as a whole ecosystem,” Monroe said.
This means considering the path that power, along with water and other resources, travels through the entire facility. With this in mind, many in the data center industry are beginning to question some long-held assumptions regarding cooling requirements, power distribution and backup-power needs.
“People are looking at their level of redundancy much more critically now and asking, ‘What do I really need?’” Monroe said, citing one example of the ways companies are rethinking their data operations.
Uninterruptible-at-all-costs has long been a goal for facility owners, requiring multiple, independent power-distribution paths, among other safeguards. These backups might earn a data center a Tier III rating from the Uptime Institute, indicating a facility’s ability to maintain 99.982 percent availability.
Meeting such stringent requirements, though, is becoming more expensive. As power density continues to rise, owners are beginning to be more judicious in their reliability aspirations. Monroe cited eBay as one company that recently audited its data operations and discovered that 80 percent could run at Tier I (99.671 percent availability) or Tier II (99.741 percent availability).
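To make those tier percentages concrete, here is a small sketch (our arithmetic, not the Uptime Institute’s published tables) converting an availability figure into expected downtime per year:

```python
HOURS_PER_YEAR = 8760  # 365 days; ignores leap years

def annual_downtime_hours(availability_pct: float) -> float:
    """Convert an availability percentage into expected hours of downtime per year."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for tier, pct in [("Tier I", 99.671), ("Tier II", 99.741), ("Tier III", 99.982)]:
    print(f"{tier}: {annual_downtime_hours(pct):.1f} hours of downtime per year")
# Tier I: ~28.8 hours, Tier II: ~22.7 hours, Tier III: ~1.6 hours
```

The jump from Tier II to Tier III buys roughly 21 fewer hours of downtime a year, which helps explain why owners are asking whether every workload needs that premium.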
Owners and designers also are second-guessing the priority long placed on cooling, especially as the increase in power density makes added waste heat an increasingly expensive problem. In 2008, Microsoft engineers reported operating five servers in a large, metal-framed tent outdoors in Seattle for seven months with no failures. The experiment, now legendary in data center design circles, was one of the earliest examples of a move toward natural cooling. Similarly, eBay now is running servers at a Phoenix facility in rooftop containers that can reach 115°F. Much of the cooling load is carried by a unique hot-water circulating loop.
Even more traditional data center designs feature innovative cooling systems located closer to the actual cooling load: the server racks, Cohee said. Chilled-water loops might circulate directly to a server rack rather than feeding room-level air handlers. And, he said, even mainstream operators are turning to outside air when conditions are cool and dry enough to simply “open the windows.”
Similarly, Cohee said, designers also are rethinking power distribution to the servers. Getting power to server racks and then to individual servers traditionally has involved several transformations from alternating current (AC) to direct current (DC), with efficiency losses at each of those conversion points. Cohee said more of his company’s clients are bringing higher-voltage power closer to the end-use appliance, up to 480V AC in some cases.
“They transform it at the rack itself,” he said.
Hansel Bradley is a construction executive with Minneapolis-based Mortenson Construction, a leading general construction firm with a specialty in mission-critical facilities.
“We have those discussions all the time,” Bradley said.
In addition to reducing transmission losses, bringing 480V cabling directly to the server room or rack can cut material and labor costs.
“It reduces the infrastructure quantity along the way,” he said.
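The arithmetic behind those savings compounds multiplicatively: a power path’s overall efficiency is the product of each conversion stage’s efficiency, so eliminating a stage pays off on every watt delivered. A minimal sketch follows; the stage efficiencies are illustrative assumptions, not measurements from PDE or Mortenson projects:

```python
from functools import reduce

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a power path: the product of each stage's efficiency."""
    return reduce(lambda a, b: a * b, stage_efficiencies, 1.0)

# Illustrative (assumed) efficiencies for a legacy path:
# double-conversion UPS, PDU transformer, then the server's own power supply.
legacy = chain_efficiency([0.92, 0.97, 0.90])

# Fewer stages when 480V AC is carried to the rack and transformed there.
rack_level = chain_efficiency([0.96, 0.94])

print(f"legacy path: {legacy:.1%}, rack-level conversion: {rack_level:.1%}")
# legacy path: 80.3%, rack-level conversion: 90.2%
```

In this hypothetical, trimming one conversion stage recovers roughly 10 points of end-to-end efficiency before a single server setting changes.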
Other Mortenson clients are taking the power-and-cooling-to-the-rack approach to its logical conclusion, opting for modular solutions that incorporate server racks, along with their associated power-distribution units and cooling, into an almost plug-and-play design. Co-location companies that provide outsourced data center services for multiple customers under one roof have been using a modular approach for years, Bradley said, but now others are seeing its value.
“I see it pushing down to users who have not historically embraced it,” he said. “It’s smart engineering.”
The Green Grid’s Monroe agreed, calling modularization “one of the big transformations in the industry that’s been needed for a long time.” To him, the advantage lies in the “right-sizing” of cooling and server capacity to actual demand.
“We can build in smaller increments in power and cooling than we could before,” he said, noting that, in the past, operators might have cooled a half-empty facility that hadn’t yet been built out to capacity. Modules help eliminate such waste. “They keep the infrastructure more closely loaded to its optimal points.”
ROSS is a freelance writer located in Brewster, Mass. He can be reached at [email protected].