From Mainframe to Digital

By Jennifer Leah Stong-Michas | Jun 15, 2013
Data center construction is on the rise, and the energy these facilities demand is growing with them.

Data centers first appeared in the 1970s as mainframe computing environments, and their design continues to evolve in pursuit of efficient, reliable operation. The vast amount of information flowing in and out of a data center means every phase of design must be treated as an opportunity to ensure the facility will function as needed. The interior design can come after the exterior has been completed, but tackling those challenges earlier is advantageous to contractors and integrators.

While data centers house a multitude of specialized systems and components, they also depend on traditional systems, such as power and cooling, that must be hardy enough to support everything else contained within.

“When you go to build a data center, you must do the design first and you need to start with the cooling,” said Bill Montgomery, founder of Premier Solutions.

Pete Sacco, founder of PTS Data Center Solutions, agreed, describing the key criteria as follows:

  1. What is the total load being designed for? This must be defined for both the day-one load and the future maximum load.
  2. What availability is the site expected to achieve? This is often, though not always, defined in terms of the Uptime Institute’s Data Center Rating Guideline.
  3. What is the density at which the critical load will be deployed?

“Because load densities are widely varying in a typical data center, we prefer to discuss density in terms of watts per rack/cabinet as opposed to watts per square foot of raised floor,” Sacco said.
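
As a rough illustration of why Sacco frames density per rack rather than per square foot, the sketch below uses entirely hypothetical numbers (neither expert supplied these figures) to show how a floor-area average can hide high-density zones:

```python
# Hypothetical example: the same 300 kW IT load expressed two ways.
# None of these numbers come from the article; they only illustrate
# why watts per rack is the more actionable density metric.

total_it_load_w = 300_000    # total critical (IT) load in watts
raised_floor_sqft = 5_000    # raised-floor area in square feet
racks = 60                   # number of racks/cabinets deployed

watts_per_sqft = total_it_load_w / raised_floor_sqft
watts_per_rack = total_it_load_w / racks

print(f"Floor average: {watts_per_sqft:.0f} W per square foot")
print(f"Rack average:  {watts_per_rack / 1000:.1f} kW per rack")

# A per-square-foot average hides hot spots: if a third of the racks
# carry two-thirds of the load, their real density is twice the average.
hot_racks = racks // 3
hot_load_w = total_it_load_w * 2 / 3
print(f"Hot-zone density: {hot_load_w / hot_racks / 1000:.1f} kW per rack")
```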

Keeping cool

General power consumption is an inherent data center concern. With so much equipment both drawing power and generating heat, cooling reigns as the dominant headache facing data center designers, and it becomes apparent early in the design process.

“You get into how many servers will reside in a cabinet, how many cabinets will be in a room, is the UPS going to be large enough to handle what is being put in. Then you start having to deal with the issues of loads and cooling,” Montgomery said.

“There are two major pieces to data center cooling system design,” Sacco said. “The first is the heat-removal piece. This is sometimes referred to as the air side of the cooling system. This piece refers to the CRAC or CRAH units deployed inside the computer room that are responsible for removing the heat generated by the IT [equipment] load. The second is the heat-rejection piece. This piece is responsible for dissipating the heat into the ambient environment. There are many options for accomplishing this, but the most common are direct expansion [DX], or compressed refrigerant approaches, such as air-cooled condensers, glycol-based dry coolers, and water-based cooling towers, or via a myriad of chilled water approaches. Each approach has its own unique CAPEX and OPEX considerations which must be aligned with the project’s requirements, budget and schedule.”

Options also exist for handling the heat generated within a data center, though no single option is the perfect fit for every facility. Every data center is unique, and each design involves different variables and components.

“There are a few ways to cool a data center. The traditional way is room-based, where cooling sits along the perimeter of the room. Then you have row-based cooling that chills by row, which is closer to the heat source. Those are the two most common types these days,” Montgomery said. “You first need to design the room or data center and determine if it will be room- or row-based cooling. After that you can design the rest of the infrastructure around that. There is a higher upfront cost associated with row-based; however, the [return on investment] over time supports that cost. In a data center, you have huge losses of cooling in a room-based cooling setup, huge as in about 60 percent loss by the time the cooling hits the servers, which is where the heat is being generated.”
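
As a minimal sketch of the trade-off Montgomery describes, the code below applies his roughly 60 percent loss figure for room-based cooling; the row-based loss figure and the heat load are assumptions made only for illustration:

```python
# Rough comparison of room-based vs. row-based cooling effectiveness.
# The ~60 percent loss for room-based cooling is Montgomery's figure;
# the row-based loss and the heat load are assumed for illustration.

heat_load_kw = 100       # heat generated by the IT equipment (assumed)
room_based_loss = 0.60   # cooling lost before it reaches the servers (Montgomery)
row_based_loss = 0.10    # assumed: close-coupled cooling loses far less

def required_cooling(load_kw: float, loss_fraction: float) -> float:
    """Capacity needed so the full heat load is still covered after losses."""
    return load_kw / (1 - loss_fraction)

print(f"Room-based: {required_cooling(heat_load_kw, room_based_loss):.0f} kW of cooling capacity")
print(f"Row-based:  {required_cooling(heat_load_kw, row_based_loss):.0f} kW of cooling capacity")
```

The gap between those two capacity figures is where the return on investment for row-based cooling comes from.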

“PTS commonly designs and deploys data centers that use a common heat-rejection approach, but with multiple heat-removal methodologies to accommodate the load density deployed in a given region of the data center, such as using perimeter CRAC units for a general 5 kW per rack/cabinet cooling density, but then additionally deploying in-row, or close-coupled, cooling solutions, or even in-rack cooling solutions to handle high- and ultra-high-density zones,” Sacco said.

Montgomery said that many of the cooling choices are dependent upon the servers being used in the data center.

“The whole cooling choice and data center design really is a pyramid of qualifiers. You need to take into account the types of servers, their location, the type of power available; it all comes into play,” he said.

The placement of servers within the racks also affects room design, depending on whether the servers are mounted top to bottom or bottom to top. That determines where most of the heat will flow and, in turn, which areas must remain cooled for optimal performance and operations. The number of servers being used must then be taken into consideration.

The use of high-density servers also raises the stakes for the data center overall in terms of heat generation. The heat each server puts off is dictated by the kilowatts it dissipates, compounded by how many of those servers sit in each rack.

“Anything over 16 kW is considered high density. What you generally see being designed is more of a medium-density rack, which runs at around 6–10 kW per rack,” Montgomery said.
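
A short sketch of how per-rack heat load compounds, using Montgomery’s 6–10 kW and 16 kW thresholds; the server counts and wattages are hypothetical:

```python
# Per-rack heat load is servers per rack times watts per server.
# Density thresholds follow Montgomery: roughly 6-10 kW is medium density,
# anything over 16 kW is high density. Server figures are assumptions.

def rack_density_kw(servers_per_rack: int, watts_per_server: float) -> float:
    return servers_per_rack * watts_per_server / 1000

def classify(kw: float) -> str:
    if kw > 16:
        return "high density"
    if kw >= 6:
        return "medium density"
    return "low density"

for servers, watts in [(20, 350), (30, 300), (42, 500)]:
    kw = rack_density_kw(servers, watts)
    print(f"{servers} servers x {watts} W = {kw:.1f} kW per rack -> {classify(kw)}")
```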

After the cooling has been designed, power is the next system to be tackled.

“You start having to make decisions and choices such as what UPS will be put in and whether or not you are going with a more traditional approach of N plus 2, which is using both a UPS and a generator to support the main feeds,” he said.

Sacco said the same critical design criteria considerations apply to the power systems design. “In general, the two major pieces of data center power design are protection and distribution. For protection, decisions must be made on short-term protection (UPS) and long-term protection (emergency backup power). For distribution, appropriate distribution must be designed in accordance with the required loads. In all cases, the power systems configuration will vary greatly depending on the levels of capacity, redundancy, concurrent maintainability, and fault tolerance desired within the system.”

“Each server cabinet is dual power, A and B, which means two power sources. The A feed goes to the UPS and the B feed then would go to a generator or other backup power source. The point of this is so that there is no single point of failure for any rack or any area in the data center,” Montgomery said.

Various design options exist, and the decision is based on location and what is readily available.

“You could have two UPSs feeding each cabinet if you wanted, or you could use utility or street power if available and reliable. But with utility power you get into situations that come up, such as power outages during hot months and spiking rates during certain months or even times of the day,” Montgomery said, adding that, in California, electricity is billed using a four-tier system and the kilowatt-hour charge can jump to as much as three times the normal rate during peak periods. “Because of those issues with utility-supplied power, UPSs tend to win out during the data center design phase for most people,” he said.
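
The sketch below illustrates the kind of peak-rate math behind that concern. The up-to-three-times multiplier comes from Montgomery; the base rate, load, and peak window are assumptions for illustration only:

```python
# Illustrates how peak-period utility rates inflate a data center's energy bill.
# The "up to 3x" peak multiplier is Montgomery's; the base rate, load, and
# peak-hour split are assumptions made only for this example.

it_load_kw = 500          # continuous facility load (assumed)
base_rate = 0.12          # dollars per kWh off-peak (assumed)
peak_multiplier = 3       # peak rate can run ~3x normal, per Montgomery
peak_hours = 6            # assumed daily peak window
off_peak_hours = 24 - peak_hours

tiered_cost = (it_load_kw * off_peak_hours * base_rate
               + it_load_kw * peak_hours * base_rate * peak_multiplier)
flat_cost = it_load_kw * 24 * base_rate

print(f"Daily cost with peak pricing: ${tiered_cost:,.0f}")
print(f"Daily cost at a flat rate:    ${flat_cost:,.0f}")
```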

However, when using UPSs for both power and backup power, Montgomery said that designing their load usage and balancing is just as important.

“You should never design to give one UPS any more than 50 percent usage because, if the other UPS fails and the first is past 50 percent usage, it cannot handle the extra load. You need to always make sure to calculate things properly and always re-run those calculations when new components are added or removed. Growth and runtime are big issues in a data center and you need to take those into consideration and always be changing things as need be,” he said.
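
Montgomery’s 50 percent rule can be expressed as a simple check that gets re-run whenever loads are added or removed. The sketch below assumes a dual-UPS arrangement with hypothetical capacities and loads:

```python
# Checks Montgomery's rule for a dual-UPS design: neither unit should carry
# more than 50 percent of its capacity, so the surviving UPS can absorb the
# full load if its partner fails. Capacities and loads are hypothetical.

def check_dual_ups(capacity_kw: float, load_a_kw: float, load_b_kw: float) -> None:
    for name, load in (("UPS A", load_a_kw), ("UPS B", load_b_kw)):
        pct = load / capacity_kw * 100
        status = "OK" if pct <= 50 else "over 50% -- cannot be backed up"
        print(f"{name}: {load:.0f} kW of {capacity_kw:.0f} kW ({pct:.1f}%) -> {status}")
    total = load_a_kw + load_b_kw
    verdict = "fits" if total <= capacity_kw else "does NOT fit"
    print(f"Combined load of {total:.0f} kW {verdict} on a single surviving UPS\n")

check_dual_ups(capacity_kw=200, load_a_kw=90, load_b_kw=85)    # within the rule
check_dual_ups(capacity_kw=200, load_a_kw=120, load_b_kw=95)   # violates the rule
```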

The cost of downtime

Many in the industry cannot stress enough the importance of designing a data center for maximum redundancy to prevent downtime, which translates into costly losses that the data center owner and operator must absorb.

“Take eBay. Through their data center they run $2,300 per minute in transactions. That can be costly if that data center goes down,” Montgomery said.
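
Using the $2,300-per-minute figure cited for eBay, a quick back-of-the-envelope calculation (outage durations assumed for illustration) shows how fast that adds up:

```python
# Back-of-the-envelope downtime cost, using the $2,300-per-minute transaction
# figure cited for eBay in the article. Outage durations are assumed.

revenue_per_minute = 2_300

for minutes in (5, 60, 8 * 60):  # five minutes, one hour, an eight-hour shift
    print(f"{minutes:4d} minutes of downtime ~= ${minutes * revenue_per_minute:,} in transactions")
```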

Because of the costs of both running a data center and losing run time, contractors assisting in data center design should first determine what type of facility the customer operates: a cost center or a profit center. That determination shapes the master plan for how the center will be designed as a whole.

A cost-based data center, such as one operated by Northrop Grumman or Boeing, supports the daily operations of the business as a whole, though the data center itself does not directly generate revenue. Such centers support other business functions, including accounting, human resources, accounts payable and accounts receivable. They affect business operations but not direct revenue streams.

Profit-based data centers, such as those run by eBay, Amazon or Google, generate revenue in addition to supporting business operations. Because these data centers are essential to the health of the business itself, downtime is not an option; when a profit center goes down, revenue stops. These centers are therefore mandated to be operational 100 percent of the time, with no single point of failure in the design.

The interdependence between systems is critical in a data center, especially when it comes to maintaining continual uptime and operations.

“If the power goes out, then the cooling goes out,” Montgomery said. “Then if the cooling goes out, the heat within the data center rises and it then becomes very hard to cool things down again even when the power comes back on. Because of all of the redundancy built into the design, it is actually very rare for an entire data center to go down all at once.”

Contractors need to understand that data centers rely on proper design and system installation. That, coupled with continual maintenance and monitoring of the facility as a whole, is crucial to optimal performance. Understanding how data centers have evolved, and how today’s interconnected and integrated infrastructure helps keep them operating properly, is one of the keys to success.

“Data centers have been around since the ’70s,” Montgomery said. “They started with mainframes and then moved to server-based during the late ’80s and early ’90s. Mainframe was the dominant computing model until ‘distributed’ processing began, where servers were spread out among national companies’ sites; then consolidation took place to better manage servers. Digital/IP-based means of remote access further transformed data centers. Digital came about in IT around the mid to late ’90s and it changed our lives and the lives of IT. Thus, IT can now monitor and manage their operations and environments such as cooling, temperature, humidity, and power in real time and access the status anywhere. This digital IT component is very important in and after design, as many sites are dynamic and grow.”

About The Author

Jennifer Leah Stong-Michas is a freelance writer who lives in central Pennsylvania.
