Cooler data centres for a warming world

by Ramanujam Komanduri, Country Manager, India, Pure Storage
Across the globe, hot days are getting hotter and more frequent. Climate change is driving more intense heat waves and pushing temperatures to record highs. March this year was India's hottest since 1901. During the summer, several large corporations' data centres were hit by the severe heat, leading to large-scale outages across countries. As the mercury continues to rise, keeping data centres cool becomes more complex, expensive and power intensive. The electricity needed to do so is already affecting other infrastructure, as seen recently in London, where new house building was held back by the high power demands of data centres. With data volumes growing, this need is only going to expand.
For those of us in the data storage and processing world, keeping cool is not a new challenge. Any data centre manager will be familiar with the need to balance efficient power consumption and consistent temperatures against the needs of the business. In fact, the International Energy Agency estimates that data centres use 1% of all global electricity, and that by 2025 they will consume one-fifth of the world's power supply. While there is plenty of high-end technology that can help with cooling components, it can be hard to implement or retrofit into existing data centres. Thankfully, there are some pragmatic, sustainable strategies to explore as part of a holistic solution.
Keeping cooler air circulating
It should go without saying that good air conditioning should be a mainstay of all data centres. For those who have the option, building data centres in cooler climates can do a lot to reduce the cooling burden. Of course, for many, this is not a practical option.
Making sure that Heating, Ventilation and Air Conditioning (HVAC) systems have a stable power supply is a basic stipulation. For business continuity and contingency planning, backup generators are a necessary precaution — for cooling technologies as well as compute and storage resources. Business continuity and disaster recovery plans should already include provisions for what to do if power (and backup power) cuts out.
If temperatures do spike, it pays to be running hardware that is more durable and reliable. Flash storage is typically far better suited to withstanding temperature rises than mechanical disk alternatives, so data stays secure and performance remains consistent even at elevated temperatures.
Power reduction suggestions
Here are three strategies that IT organisations should be considering. When combined, they can help to reduce the power and cooling requirements for data centres:
More efficient solutions – this is stating the obvious: every piece of hardware uses energy and generates heat. Organisations should look for hardware that does more for them in a smaller data centre footprint, which immediately helps to keep temperatures, and as a result cooling costs, down. Increasingly, IT organisations consider power efficiency when selecting what goes into their data centres. In the world of data storage and processing, for example, key metrics now being evaluated include capacity per watt and performance per watt. With data storage representing a significant portion of the hardware in data centres, upgrading to more efficient systems can substantially reduce the power and cooling footprint of the whole facility.
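To make those metrics concrete, here is a minimal sketch of how capacity per watt and performance per watt might be compared across two storage systems. Every capacity, performance and power figure below is invented for illustration and is not a measurement of any real product.

```python
# Hypothetical comparison of two storage systems on power-efficiency metrics.
# Every figure below is invented for illustration, not a vendor specification.

arrays = {
    "older_system": {"capacity_tb": 500,   "perf_iops": 200_000, "power_w": 4_000},
    "newer_system": {"capacity_tb": 1_000, "perf_iops": 400_000, "power_w": 2_500},
}

for name, a in arrays.items():
    capacity_per_watt = a["capacity_tb"] / a["power_w"]   # TB per watt
    perf_per_watt = a["perf_iops"] / a["power_w"]         # IOPS per watt
    print(f"{name}: {capacity_per_watt:.3f} TB/W, {perf_per_watt:.0f} IOPS/W")
```

On these assumed numbers, the newer system delivers roughly three times the capacity and performance per watt, which is exactly the kind of gap these metrics are meant to expose.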
Disaggregated architectures – now we turn to direct-attached storage and hyperconverged systems. Many vendors talk about the efficiencies of combining compute and storage in hyperconverged infrastructure (HCI). That is fair, but the efficiency is mainly about fast deployments and reducing the number of teams involved in rolling out these solutions; it does not necessarily mean energy efficiency. In fact, there is quite a bit of wasted power in direct-attached storage and hyperconverged systems.
For one thing, compute and storage needs rarely grow at the same rate. Some organisations end up over-provisioning the compute side of the equation in order to cater to their growing storage requirements. Occasionally, the same thing happens from a storage point of view, and in either scenario a lot of power is wasted. If compute and storage are separated, it is easier to reduce the total number of infrastructure components needed, and therefore to cut the power and cooling requirements too. Additionally, direct-attached storage and hyperconverged solutions tend to create silos of infrastructure. Unused capacity in one cluster is very difficult to make available to other clusters, which leads to even more over-provisioning and wasted resources.
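As a rough illustration of that over-provisioning effect, the sketch below models how many nodes, and how much power, a hypothetical hyperconverged design needs to hit a storage target compared with a disaggregated one. Every node size and wattage here is an assumption made up for the example.

```python
import math

# Back-of-the-envelope model: nodes and power needed to meet fixed compute and
# storage targets when the two scale together (hyperconverged) vs separately
# (disaggregated). All node sizes and power figures are assumptions, not specs.

storage_needed_tb = 2_000      # assumed total storage requirement
compute_needed_cores = 200     # assumed total compute requirement

# Hypothetical hyperconverged node: compute and storage come in a fixed ratio.
hci_node = {"storage_tb": 50, "cores": 32, "power_w": 800}

# Hypothetical disaggregated building blocks.
storage_node = {"storage_tb": 250, "power_w": 1_200}
compute_node = {"cores": 32, "power_w": 600}

# Hyperconverged: whichever requirement is larger dictates the node count.
hci_nodes = max(math.ceil(storage_needed_tb / hci_node["storage_tb"]),
                math.ceil(compute_needed_cores / hci_node["cores"]))
hci_power_w = hci_nodes * hci_node["power_w"]

# Disaggregated: size storage and compute independently.
disagg_power_w = (math.ceil(storage_needed_tb / storage_node["storage_tb"]) * storage_node["power_w"]
                  + math.ceil(compute_needed_cores / compute_node["cores"]) * compute_node["power_w"])

print(f"Hyperconverged: {hci_nodes} nodes, ~{hci_power_w:,} W")
print(f"Disaggregated:  ~{disagg_power_w:,} W")
```

With these assumed figures, the coupled design ends up powering far more compute than the workload needs, simply to reach the required storage capacity.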
Just-in-time provisioning – the legacy approach of provisioning for the requirements of the next three to five years is no longer fit for purpose. It means organisations end up running far more infrastructure than they immediately need. Instead, modern on-demand consumption models and automated deployment tools let companies scale the infrastructure in their data centres easily over time. Infrastructure is provisioned just-in-time instead of just-in-case, avoiding the need to power and cool components that won't be needed for months or even years.
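A toy calculation along these lines, using an assumed demand projection and per-unit power draw, shows why just-in-time provisioning can matter for energy as well as cost:

```python
# Toy comparison: energy consumed by "just-in-case" provisioning (all capacity
# powered from day one) vs "just-in-time" (capacity added yearly as demand grows).
# The growth projection and power-per-unit figure are assumptions for illustration.

hours_per_year = 8_760
power_per_unit_kw = 0.5                        # assumed draw per infrastructure unit
demand_units_by_year = [10, 14, 20, 28, 40]    # assumed five-year demand projection

# Just-in-case: provision for the projected peak up front and run it all, every year.
jic_kwh = max(demand_units_by_year) * power_per_unit_kw * hours_per_year * len(demand_units_by_year)

# Just-in-time: only run what each year actually needs.
jit_kwh = sum(units * power_per_unit_kw * hours_per_year for units in demand_units_by_year)

print(f"Just-in-case: {jic_kwh:,.0f} kWh over 5 years")
print(f"Just-in-time: {jit_kwh:,.0f} kWh over 5 years")
print(f"Saving: {100 * (1 - jit_kwh / jic_kwh):.0f}%")
```

On these made-up numbers, powering everything from day one consumes roughly 80% more energy than scaling the infrastructure as demand actually arrives.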
As India progresses towards being a truly digital economy, investments from both local and international data centre operators are expected to reach $4.6 billion per annum by 2025, according to a recent report by Nasscom. The massive explosion in data creation and consumption is fuelling this growth, which makes enabling a sustainable digital future with energy-efficient infrastructure paramount.
So with more effective solutions available out there, why wouldn’t we take steps to reduce equipment volumes and heat generation in the first place? If we can cut running costs, simplify and cool our data centres and reduce our energy consumption — all at the same time — then I’m not sure that’s even a question to ask.
