Data center energy efficiency is a complex issue. There are many factors that can affect it, including the physical location of the data center, its design and construction, the hardware and software used by the servers, and even the type of power being used.
Are you thinking about ways to make your data center infrastructure more efficient, or stuck on where to begin? We’d like to share 8 techniques to make your data center more energy efficient. These techniques are easy to apply, simple to use, and light on the budget.
How to Improve Data Center Energy Efficiency
Data center buildings use more than 100 times as much electricity as an equivalent-sized commercial office space because of their specific function—housing energy-intensive IT equipment and operating 24/7/365.
Because data centers consume so much electricity, applying effective energy conservation techniques can significantly lower both consumption and the associated utility costs.
1. Optimize supply air temperatures
Because data centers are “Always On,” electricity consumption accounts for about 25% of a company’s information technology (IT) costs.
Data centers are often designed to handle peak demand but cannot foresee when peak demand will occur. Roughly 70% of the maximum electrical load is still drawn during off-peak times, when the servers’ full capacity is not required.
To create an environment for IT equipment consistent with the higher end of the acceptable temperature ranges given in the ASHRAE Thermal Guidelines for Data Processing Environments, adjust (raise) HVAC supply air temperatures.
Higher supply air temperatures will increase compressor efficiency if a DX-type unit is used for cooling. If water-cooled air handling units are used for cooling, higher supply air temperatures will increase chiller efficiency.
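As a rough illustration of the headroom involved, a check like the following compares a supply-air setpoint against the ASHRAE class A1 recommended envelope (18–27 °C); the constant and function name are a sketch, not part of any vendor tooling:

```python
# Sketch: how much a supply-air setpoint can still be raised before it
# leaves the ASHRAE class A1 *recommended* envelope of 18-27 C.
# The constant and function name are illustrative, not from any tool.

ASHRAE_RECOMMENDED_C = (18.0, 27.0)

def setpoint_headroom_c(supply_air_c: float) -> float:
    """Degrees C of headroom left below the top of the envelope."""
    low, high = ASHRAE_RECOMMENDED_C
    if supply_air_c < low:
        raise ValueError(f"setpoint {supply_air_c} C is below the envelope")
    return high - supply_air_c

print(setpoint_headroom_c(20.0))  # a 20 C setpoint has 7 C of headroom
```

Raising the setpoint toward the top of that envelope is what lets the DX compressors or chillers run more efficiently.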
2. Cooling optimization
Cooling optimization grows directly out of the push for greater energy efficiency: making the appropriate adjustments yields real energy savings, and improved data center energy efficiency brings several connected advantages. One of the first steps in enhancing the cooling plant is recalibrating system controls.
Cooling optimization is iterative. A data center operates continually, so every day presents a unique set of circumstances, and the primary goal of cooling optimization is adjusting to these varying scenarios.
Adjusting the controls is the first step in the optimization process; once properly aligned, they can settle into equilibrium. Data center managers can then handle new situations by making further adjustments as they arise.
3. Virtualize and consolidate IT systems
Servers and storage are reasonable targets for power savings because the EPA estimates that they use 50% of the power in data centers. Server virtualization, an efficient tactic that results in space, power, and cooling savings, is currently a popular trend.
You require a storage infrastructure that offers pooled networked storage if you want to take full advantage of server virtualization. Storage virtualization yields the same cost reductions as server virtualization: fewer, larger storage systems offer more capacity and higher utilization, requiring less room, power, and cooling.
By integrating server and storage virtualization, we have switched to a more energy-efficient storage approach: we replaced 50 aging storage systems with 10 of the most recent models.
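As a back-of-the-envelope check on what a 50-to-10 consolidation like this can save, a sketch along these lines works; the per-system wattages are illustrative assumptions, not measured values:

```python
# Back-of-the-envelope energy comparison for consolidating 50 storage
# systems down to 10. Wattages are illustrative assumptions.

HOURS_PER_YEAR = 8760

def annual_kwh(n_systems: int, watts_each: float) -> float:
    """Annual energy for n always-on systems, in kWh."""
    return n_systems * watts_each * HOURS_PER_YEAR / 1000.0

old_fleet = annual_kwh(50, 1200)  # 50 aging arrays, assumed 1200 W each
new_fleet = annual_kwh(10, 1500)  # 10 denser arrays, assumed 1500 W each
print(f"estimated savings: {old_fleet - new_fleet:,.0f} kWh/year")
```

Even with the newer arrays drawing more per unit, the fleet-level reduction dominates, before counting the cooling load that each avoided watt no longer generates.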
4. Improve CRAC Unit Efficiency
Under-floor air delivery systems have a few particular issues. The under-floor plenum frequently doubles as a wiring chase as well as a duct. Because conduit, electrical trays, and data trays can obstruct airflow routes, coordination is required during design and then maintained through construction and operation for the center’s entire lifespan.
Supply tile locations need to be carefully planned to prevent short-circuiting of supply air, and monitored regularly if users are likely to reconfigure them. Adding or removing tiles to address local hot spots can hurt airflow across the entire system.
The under-floor plenum’s high air velocity is another significant issue to be mindful of: it can produce localized negative pressure that draws room air back into the plenum. As a result, equipment closer to downflow CRAC units or Computer Room Air Handlers (CRAHs) may receive insufficient cooling air.
For a more consistent under-floor air static pressure, CRAC/CRAH systems with deeper plenums and careful planning are required. Refer to the “Air Handler” part of the “Cooling Systems” section below for a more detailed explanation of CRAH units and how they relate to data center energy efficiency.
5. Improve Transformer Efficiency
Today’s servers, even the newest and most efficient, can only scale their electrical draw between 60% and 100% of maximum, depending on demand, while the actual compute load can vary between 5% and 75%.
Most data centers also use fixed, single-speed computer room air conditioning units, which further contributes to excessive energy usage: from a facility perspective, the data center is likely being overcooled.
“You can’t control what you can’t measure” is an old maxim of operational efficiency. We found that efforts to reduce energy waste must start with basic measurements: if you don’t know where your power is going, you can’t know where to focus your attention. To help measure our energy use, we break it down into each of these categories:
- IT Systems
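When energy use is metered by category this way, Power Usage Effectiveness (PUE), total facility energy divided by IT equipment energy, is the standard summary metric. The category breakdown and sample kWh figures below are illustrative assumptions:

```python
# Power Usage Effectiveness: total facility energy / IT equipment
# energy. An ideal facility approaches 1.0. Sample kWh figures are
# illustrative assumptions.

def pue(it_kwh: float, cooling_kwh: float,
        power_dist_kwh: float, lighting_kwh: float) -> float:
    total = it_kwh + cooling_kwh + power_dist_kwh + lighting_kwh
    return total / it_kwh

print(round(pue(it_kwh=1000, cooling_kwh=500,
                power_dist_kwh=100, lighting_kwh=20), 2))  # 1.62
```

Tracking this number over time shows whether efficiency efforts are actually moving overhead (cooling, distribution, lighting) closer to zero relative to the IT load.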
Additional airflow measures are required to stop hot exhaust from entering the cold aisle wherever there are high-density racks in a hot aisle/cold aisle configuration. Here, a little low-tech ingenuity can be useful.
A low-cost, quite effective, and crucial method is to vent the heated air away as it rises, at the ends of the hot aisles and around the cooling outtake above the racks. Vinyl curtains like those seen in a meat locker can also be employed to block the heat, and vinyl strips create a physical barrier around ducts and equipment to contain the air in the hot aisles.
6. Use heat that would be wasted
High summer temperatures are accompanied by increased demand for power and rising energy costs. Our natural gas-powered cogeneration system comes online to efficiently power our one-megawatt data center during these times of peak temperature and electricity use. This strategy helps us in two separate ways.
First, we cut power costs and the quantity of electricity wasted in transmission by producing electricity near the point of use (a practice known as distributed generation).
The second benefit is a direct result of cogeneration. Cogeneration is a fuel-efficient thermodynamic application. It makes use of the significant amounts of heat that are lost during the production of electricity and increases data center energy efficiency.
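The arithmetic behind that claim can be sketched simply: a simple-cycle generator discards its exhaust heat, while cogeneration recovers part of it as useful output. All the efficiency figures below are illustrative assumptions:

```python
# Cogeneration (CHP) efficiency: a simple-cycle generator discards its
# exhaust heat, while CHP recovers part of it as useful heating.
# All figures are illustrative assumptions.

def overall_efficiency(fuel_kwh: float, electric_kwh: float,
                       recovered_heat_kwh: float = 0.0) -> float:
    """Useful output (electricity + recovered heat) per unit of fuel."""
    return (electric_kwh + recovered_heat_kwh) / fuel_kwh

simple_cycle = overall_efficiency(100, 35)   # exhaust heat thrown away
with_chp = overall_efficiency(100, 35, 40)   # 40 kWh of heat reused
print(simple_cycle, with_chp)  # 0.35 0.75
```

Recovering even part of the exhaust heat roughly doubles the useful output per unit of fuel in this example, which is why cogeneration pairs well with an always-on load like a data center.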
7. Switch to SSDs
Where possible, organizations should consider switching from hard disks to SSDs. SSDs use far less power and offer more IOPS than hard disks.
For instance, Samsung’s enterprise SSDs use just 1.25 W of power while active and 0.3 W when idle. A 15,000 rpm SAS hard disk drive uses about 6 W per drive, so the SSD draws roughly one-fifth as much. Additionally, SSDs generate far less heat than hard drives, since they have no moving parts.
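At fleet scale those per-drive figures compound. A sketch, assuming a hypothetical 1,000-drive fleet and a 50% active duty cycle (both illustrative):

```python
# Fleet-level view of the per-drive figures above: 1.25 W active /
# 0.3 W idle for the SSD versus about 6 W for a 15k rpm SAS drive.
# Fleet size and duty cycle are illustrative assumptions.

def fleet_watts(n_drives: int, active_w: float, idle_w: float,
                duty_cycle: float) -> float:
    """Average draw for a fleet spending `duty_cycle` of its time active."""
    return n_drives * (duty_cycle * active_w + (1 - duty_cycle) * idle_w)

ssd = fleet_watts(1000, active_w=1.25, idle_w=0.3, duty_cycle=0.5)
hdd = fleet_watts(1000, active_w=6.0, idle_w=6.0, duty_cycle=0.5)  # platters always spin
print(f"SSD fleet: {ssd:.0f} W, HDD fleet: {hdd:.0f} W")
```

Because spinning platters draw nearly full power even when idle, the SSD advantage widens further at low utilization, and every avoided watt is also a watt of heat the CRAC units never have to remove.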
Even though transferring IT workloads to a cloud or colocation provider externalizes the power consumption to the host site, many firms agree that large vendors are adept at getting the most out of each kilowatt. Hosted service providers frequently concentrate on giving their clients the best power value at the lowest cost, thus increasing data center energy efficiency.
8. Direct Liquid Cooling
The term “direct liquid cooling” refers to a variety of cooling techniques that share a common feature: waste heat is transferred to a fluid at or very close to the location where it is generated, rather than being transferred to room air that must then be conditioned.
To capture and remove waste heat, one current method of implementing liquid cooling uses cooling coils mounted directly onto the rack.
The coolant lines that connect to the rack coil through flexible hoses are frequently run under the floor. Other strategies, among many, include water cooling of component heat sinks and submerging components in a dielectric fluid chilled by a heat exchanger.
Since water movement is a far more effective way of moving heat than airflow, liquid cooling can handle larger heat densities and is significantly more effective than standard air cooling.
Energy savings will be attained by using technologies that permit chilled water at medium temperatures (55° to 60°F as opposed to 44°F) and by minimizing the size and power requirements of fans that serve the data center.
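The claim that water moves heat far more effectively than air is easy to quantify from standard material properties (volumetric heat capacity, the product of density and specific heat):

```python
# Heat carried per cubic metre of coolant per kelvin of temperature
# rise, from standard properties: water rho ~1000 kg/m3, cp ~4186
# J/(kg.K); air rho ~1.2 kg/m3, cp ~1005 J/(kg.K).

def volumetric_heat_capacity(rho_kg_m3: float, cp_j_kg_k: float) -> float:
    """J absorbed per m3 of fluid per K of temperature rise."""
    return rho_kg_m3 * cp_j_kg_k

water = volumetric_heat_capacity(1000.0, 4186.0)
air = volumetric_heat_capacity(1.2, 1005.0)
print(f"water carries roughly {water / air:,.0f}x more heat per m3 than air")
```

A ratio in the thousands is why liquid loops can serve rack densities that air distribution simply cannot reach.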
Data centers are some of the most energy-intensive buildings in the world. According to studies by IDC and McKinsey & Company, they account for roughly 1% of global electricity consumption, on the order of 200 billion kWh per year.
This makes them a major source of greenhouse gas emissions, with some estimates putting CO2 output from data centers at 1% of all greenhouse gases created by humans.
To reduce this environmental impact, we must look at ways to improve data center energy efficiency at every level: from power generation to cooling and lighting systems.
Warmer chilled water supply temperatures also make it easier to combine liquid cooling with a water-side economizer, increasing the potential for further data center energy efficiency.
A complete and in-depth investigation of all energy-saving opportunities should be conducted to guarantee that the entire cost of operating the data center is kept to a minimum.
These techniques can enhance your Data Center energy efficiency and improve your business.