Many electrical contractors build, renovate, and maintain corporate and government data centers. They need to understand the new metrics for managing a data center, and they need to ensure that those in charge understand these metrics as well, so the data center can deliver maximum performance without wasting power.

With more people switching to new edge technology (e.g., smartphones and tablets), the demands on data centers have changed. Power consumption is a critical issue, and more people need to understand how it relates to information throughput to end users.

I am currently working on a detailed white paper to shed more light on this issue and broaden the understanding of facility managers and contractors.

Questions that need to be asked and answered

Do the metrics you currently use to measure data center performance compare power consumption with gigabyte throughput? (Power consumption versus actual information distribution.)

If not, you are missing a key element in determining whether your data center is efficient when it comes to the use of power versus true information output to the end users. This is a metric not commonly used by the traditional facilities manager. It should be, as it opens up a new way of assessing costs and overall service performance.

The key questions that anyone paying for a data center should be asking and getting answers for are:

  • How effective is the data center you own and manage?
  • How effective is the third party that manages your data center (if you have contracted it out to a service bureau)?
  • Is everyone managing and maintaining this function using the right yardsticks?

Once you start asking these questions, you might find that the data center(s) you have invested money in are not as “energy efficient” as you thought when it comes to the actual transport and distribution of information.

Think about buying a car. One of the key measurements you look at is miles per gallon (mpg). When comparing cars, you would choose the one with better gas mileage; 30 mpg is much better than 20 mpg.

You would also look at the price differences. Is it worth paying more for that “extra” efficiency? What do you gain or lose when you do?

When it comes to data centers, are we even calculating that type of efficiency: gigabytes per kilowatt (GPK)?

Applying some of the quality concepts

When you look at data centers, wouldn’t you want to know how much information you are sending out, versus how much energy your data centers are consuming while distributing it to the end-user base?

Wouldn’t delivering 10,000 gigabytes an hour on 65 kilowatts be better than delivering the same 10,000 gigabytes an hour on 97 kilowatts? Are there ways to cut energy without cutting throughput performance? Do you need to add power to gain more throughput?
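The comparison above can be worked out directly. A minimal sketch, assuming GPK is simply the gigabytes moved in an hour divided by the kilowatts drawn over that hour (the function name and units are my own, not an industry standard):

```python
def gpk(gigabytes_per_hour, kilowatts):
    """Gigabytes delivered per kilowatt-hour consumed (assumed definition)."""
    return gigabytes_per_hour / kilowatts

# The two scenarios from the text: same throughput, different power draw.
efficient = gpk(10_000, 65)    # ~153.8 GB per kWh
inefficient = gpk(10_000, 97)  # ~103.1 GB per kWh
print(f"{efficient:.1f} vs. {inefficient:.1f} GB per kWh")
```

By this yardstick, the 65-kW facility moves roughly 50 percent more information per unit of energy than the 97-kW one, even though a power-only metric would show both delivering identical throughput.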

With the anticipated growth in the number of devices as we move further into the Internet of Things (IoT), we need to understand whether we have a path to more throughput to handle going from 10 billion wireless devices to 30-75 billion devices by 2020, which is not that far away. (This is the year many see as the goal for implementing 5G networks.)

We need to start developing and applying better metrics which measure actual information throughput to better manage data centers as all facets of business and society depend more on the transport and availability of information.

Typical metrics used today focus on one facet of total energy management. We need to go beyond these industry-accepted practices:

PUE: power usage effectiveness—total facility energy/IT equipment energy

DCIE: data center infrastructure efficiency (the inverse of PUE)—IT equipment energy/total facility energy

We need to be able to understand, analyze and calculate:

GPK: gigabytes per kilowatt (per hour)

KPT: kilowatts per terabyte (per hour)
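The four metrics above can be sketched in a few lines. PUE and DCIE follow the standard industry definitions; GPK and KPT follow the throughput-based ratios proposed here (the function names, units, and example figures are illustrative assumptions, not published formulas):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw, it_equipment_kw):
    """Data center infrastructure efficiency: the inverse of PUE."""
    return it_equipment_kw / total_facility_kw

def gpk(gigabytes_per_hour, kilowatts):
    """Gigabytes delivered per kilowatt of draw, per hour (proposed metric)."""
    return gigabytes_per_hour / kilowatts

def kpt(kilowatts, terabytes_per_hour):
    """Kilowatts consumed per terabyte delivered, per hour (proposed metric)."""
    return kilowatts / terabytes_per_hour

# Illustrative facility: 120 kW total draw, 80 kW of it for IT gear,
# moving 10,000 GB (10 TB) an hour.
print(pue(120, 80))     # 1.5
print(dcie(120, 80))    # ~0.667
print(gpk(10_000, 80))  # 125.0 GB per kW per hour
print(kpt(80, 10))      # 8.0 kW per TB per hour
```

Note that PUE and DCIE only tell you how much of the facility's power reaches the IT equipment; GPK and KPT tie the power consumed to the information actually delivered.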

'Mission critical' means no single point of failure

I highlight this as a reminder: if you are working with data centers supporting “mission critical” applications, you need to ensure everything within them is redundant, including power and network infrastructure. You cannot have a single point of failure in any component (or in any service to the facility).

If you have some comments or feedback on measuring data center performance based on the data centers you work with, please contact me and share your insights and experiences. When finished, the white paper should represent some new and sophisticated ways to measure performance and energy consumption.

Editor’s Note: Carlini will be addressing the International Drone Expo in Los Angeles, CA (in December) on Intelligent Infrastructure & Drones.