
Rack temperature monitoring: The secret to comfortable data center equipment

Servers certainly have some ventilation and self-cooling capabilities, but we would hardly call them warm-blooded. Every 1 degree Fahrenheit increase in ambient temperature yields roughly a 1 degree F increase in average CPU temperature. In other words, there's a clear correlation between data center temperature and rack equipment temperature.

When, exactly, does this become a problem? It varies by equipment, but most CPUs are at risk of a meltdown if a server is allowed to operate at temperatures between 86 and 95 degrees F for more than a few minutes.

The majority of data centers aim for lower ambient temperatures, usually in compliance with ASHRAE's recommended range of 64.4 to 80.6 degrees F (where a facility should sit within that range depends on factors such as humidity and dew point). That range sits comfortably below the CPU's point of no return; however, temperature in a modern high-density facility is hardly static from rack to rack. Hot spots born of airflow deficiencies and other disruptive conditions can put isolated pieces of critical equipment at risk of overheating.

Furthermore, data center temperature isn't just about what's currently happening; it's also about what could happen. History is full of horror stories about computer room air conditioner (CRAC) failures leading to dangerous temperature spikes. And yes, running your servers at higher temperatures is more efficient; it saves money and is easier on the environment. However, operating closer to the edge means temperatures will climb to dangerous levels much faster in the event of a CRAC failure.

This isn't to discourage data center managers from running equipment warm. Rather, it's to encourage them to make sure they have the temperature visibility needed to react quickly when rack temperatures exceed safe thresholds. Uncomfortable data center equipment won't complain. It will just shut down and take your critical operations with it.

Let real-time temperature monitoring do the talking

ASHRAE recommends installing a minimum of six temperature sensors per rack: three in the front (at the top, middle and bottom) and three in the back, to monitor air intake and exhaust temperatures. High-density facilities often use more than six sensors per rack to build more precise temperature and airflow models, a practice that is highly recommended, especially for data centers operating at an ambient 80 degrees F.
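
For illustration, here is a minimal Python sketch of that six-sensor layout: one object per rack with front (intake) and rear (exhaust) readings, and the exhaust-minus-intake delta as a rough proxy for heat load. The Rack class, rack name, and readings are invented for the example, not a Vertiv API.

```python
# A minimal sketch of the six-sensor-per-rack layout described above.
# All names and reading values are illustrative assumptions.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Rack:
    name: str
    # Front (intake) sensors at top, middle, bottom, in degrees F.
    intake: dict = field(default_factory=dict)
    # Rear (exhaust) sensors at top, middle, bottom, in degrees F.
    exhaust: dict = field(default_factory=dict)

    def delta_t(self) -> float:
        """Average exhaust minus average intake: a rough proxy for heat load."""
        return mean(self.exhaust.values()) - mean(self.intake.values())

rack = Rack(
    name="A-07",
    intake={"top": 78.1, "middle": 74.9, "bottom": 72.4},
    exhaust={"top": 96.3, "middle": 92.0, "bottom": 88.7},
)
print(f"{rack.name} delta-T: {rack.delta_t():.1f} F")  # A-07 delta-T: 17.2 F
```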

Why? The simple answer is that you can't respond to a hot spot you can't see. Real-time temperature monitoring connected to your data center's network will notify designated staff via SNMP, SMS or email the second a safe temperature threshold is exceeded.
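
As a rough sketch of how that threshold alerting works, the loop below polls a sensor and emails on-call staff when a reading crosses the threshold. It assumes a hypothetical sensor that returns its reading as plain text over HTTP; a production system would poll via SNMP or the vendor's own interface, and every URL and address here is a placeholder.

```python
# A minimal alerting sketch, assuming a hypothetical sensor that exposes
# its reading over HTTP as plain text. URLs, addresses, and the mail host
# are all placeholders, not real endpoints.
import smtplib
import time
import urllib.request
from email.message import EmailMessage

SENSOR_URL = "http://10.0.0.50/temp"   # hypothetical sensor endpoint
THRESHOLD_F = 80.6                     # top of the ASHRAE recommended range

def send_alert(reading: float) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Rack temperature alert: {reading:.1f} F"
    msg["From"] = "monitor@example.com"
    msg["To"] = "oncall@example.com"
    msg.set_content(f"Sensor at {SENSOR_URL} reported {reading:.1f} F, "
                    f"above the {THRESHOLD_F} F threshold.")
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

while True:
    # Read the current temperature and alert if it breaches the threshold.
    with urllib.request.urlopen(SENSOR_URL) as resp:
        reading = float(resp.read())
    if reading > THRESHOLD_F:
        send_alert(reading)
    time.sleep(30)  # poll every 30 seconds
```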

And again, the more sensors, the merrier. It is great to know that a real-time alerting system is always on your side. It is even better to be able to consult a computer-generated model powered by many rack sensors, so you can trace a deviation back to its root.
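
To make tracing a deviation to its root concrete, here is a small sketch that scans readings from many rack sensors and flags the one furthest from its rack's average. Sensor names and values are made up for the example.

```python
# A sketch of tracing a deviation to its source: given readings from many
# rack sensors, find the single sensor furthest from its rack's average.
# Rack names, sensor IDs, and readings are invented illustrations.
from statistics import mean

readings = {
    "A-07": {"front-top": 78.1, "front-mid": 74.9, "front-bot": 72.4},
    "A-08": {"front-top": 91.5, "front-mid": 76.2, "front-bot": 73.0},
}

worst = None  # (rack, sensor, temp, deviation)
for rack, sensors in readings.items():
    avg = mean(sensors.values())
    for sensor, temp in sensors.items():
        deviation = abs(temp - avg)
        if worst is None or deviation > worst[3]:
            worst = (rack, sensor, temp, deviation)

rack, sensor, temp, deviation = worst
print(f"Largest deviation: {rack}/{sensor} at {temp} F "
      f"({deviation:.1f} F off the rack average)")
# Largest deviation: A-08/front-top at 91.5 F (11.3 F off the rack average)
```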

Don't let your servers catch a cold, either

Given the amount of heat servers tend to generate, far fewer data center managers concern themselves with cooler-than-average temperatures. Nevertheless, letting temperatures drop below 65 degrees F is risky for a different reason.

Cooler air holds less moisture. Consequently, high relative humidity in a low-temperature environment can result in condensation. And as most of us know from fourth-grade science, water and electricity don't play nice. Moisture can make quick and irreversible work of a server's CPU and motherboard.

Thus, it is important to think of data center temperature as a balancing act. Allowing temperatures to drop without considering other environmental variables, namely humidity and dew point, introduces undue risk to your equipment. Furthermore, there is rarely justification for cooling a facility below 65 degrees F. The last thing your power usage effectiveness (PUE) ratio needs is energy spent cooling your facility below recommended temperatures.
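
To see why overcooling hurts, recall that PUE is total facility power divided by IT equipment power, so every extra kilowatt of cooling pushes the ratio further from the ideal of 1.0. The kilowatt figures below are invented for illustration.

```python
# A quick illustration of how overcooling shows up in PUE: total facility
# power divided by IT equipment power. All kW figures are invented.
it_load_kw = 500.0
other_overhead_kw = 50.0  # lighting, UPS losses, etc.

scenarios = [
    (150.0, "cooling to ~75 F"),
    (250.0, "overcooling below 65 F"),
]
for cooling_kw, label in scenarios:
    total_kw = it_load_kw + cooling_kw + other_overhead_kw
    print(f"{label}: PUE = {total_kw / it_load_kw:.2f}")
# cooling to ~75 F: PUE = 1.40
# overcooling below 65 F: PUE = 1.60
```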

To avoid a situation where your servers "catch a cold," make sure you supplement your temperature monitors with a network of humidity and dew point sensors. Working in coordination with your temperature sensors, they will notify facility managers in real time should relative humidity or temperature reach a level that introduces the risk of condensation. Conversely, if humidity levels are too low, the air may become dry enough to induce electrostatic charges that can damage sensitive electronic components.
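
Here is a sketch of the condensation check such a sensor network might run, using the widely used Magnus approximation to derive dew point from temperature and relative humidity. The alert thresholds are illustrative assumptions, not Vertiv specifications.

```python
# A sketch of a condensation/ESD check using the Magnus approximation for
# dew point. Constants are textbook values; thresholds are assumptions.
import math

def dew_point_c(temp_c: float, rh_pct: float) -> float:
    """Magnus approximation: dew point from temperature and relative humidity."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_pct / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

temp_c, rh = 18.0, 80.0          # a cool, humid aisle (about 64 F)
dp = dew_point_c(temp_c, rh)
print(f"Dew point: {dp:.1f} C")  # Dew point: 14.5 C

if temp_c - dp < 5.0:
    print("Alert: within 5 C of the dew point; condensation risk")
if rh < 20.0:
    print("Alert: air dry enough to raise electrostatic discharge risk")
```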

Yes, your mission-critical data center equipment is high-maintenance. That probably won't change. But with comprehensive data center monitoring, you will know exactly what your servers need the moment they need it. 
