Tackling the Challenge of Industry Benchmarks

Dan Draper

For the past several years, Vertiv has been evangelizing the value of collecting and analyzing the incredible amounts of data created by data center systems to improve the performance of those individual systems and the entire data center.

We’ve even partnered with other major technology providers, including Dell, IBM, Intel and HP, on the development of an open specification to facilitate out-of-band communication across systems. That specification, Redfish, is now under the management of the independent Distributed Management Task Force, and has the potential to greatly simplify communication and data transfer across disparate systems.

But collecting and using data is only part of the solution to improving data center performance. The other part is establishing industry-wide metrics and benchmarks that allow data center operators to compare their performance to others and drive improvements.

For example, availability is relatively easy to quantify: a data center designer knows that while some organizations can get by with “four nines” of availability (about 52.6 minutes of downtime per year), others require “five nines” (about 5.26 minutes of downtime per year). PUE plays a similar role for energy efficiency. Most of us know that the average data center PUE is about 1.7, while new high-efficiency facilities are achieving PUEs below 1.2.
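As a quick sanity check on those figures, here is a minimal sketch of the underlying arithmetic: converting an availability fraction into allowable annual downtime, and computing PUE as total facility energy over IT equipment energy. The helper names and sample energy values are illustrative, not from this post.

```python
# Availability "nines" -> allowable downtime per year, and PUE.
# Function names and the example kWh figures are ours, for illustration only.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def annual_downtime_minutes(availability: float) -> float:
    """Allowable downtime per year for a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by IT energy."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

print(round(annual_downtime_minutes(0.9999), 2))   # four nines  -> 52.56
print(round(annual_downtime_minutes(0.99999), 2))  # five nines  -> 5.26
print(pue(1700.0, 1000.0))                         # hypothetical facility -> 1.7
```

Each extra nine cuts the downtime budget by a factor of ten, which is why moving from four to five nines is such a large engineering commitment.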

But how can we measure and optimize other factors that have become equally important to data center performance, such as speed-of-deployment, costs and security?

That’s the challenge Vertiv is now addressing. Working with the Ponemon Institute, we’ve created the Data Center Performance Benchmark Series.

We first partnered with Ponemon in 2010 on a study of data center downtime costs and causes. That study shed light on the business costs—direct, indirect and opportunity—of data center outages and helped data center managers make more informed infrastructure decisions. We commissioned an update of that study in 2013 that was also very well received.

Now, we have a new update of that landmark study: The 2016 Cost of Data Center Outages Report. With this new report, we are able to look at trends and document changes in both the cost and causes of data center downtime since the first report was published. We hope you continue to find this a valuable tool.

But the Cost of Data Center Outages Report is just the beginning.

In the coming months, we’ll be releasing four additional reports as part of the Data Center Performance Benchmark Series, covering the issues of security, productivity, speed-of-deployment and cost-to-support compute capacity.

We don’t expect to create a PUE-type metric for each of these challenges. But we do expect to provide industry-wide benchmarks and new insight into the challenges today’s data center managers face.

We’ve created a special section of our website to house these reports and related content. Take a moment to download and review the 2016 Cost of Data Center Outages Report, and share your perspective using #DCBenchmarks, on our Facebook page, or via LinkedIn. We’ll lead the effort to develop new benchmarks, but we need your input to ensure they are meaningful.
