Data Center Manager: Six ways to make the data center more effective

Until recently, many organizations had not given their data center infrastructure serious thought in more than a decade. As long as there was floor space for the next server rack, the existing cooling and power capacity could absorb the added demand. As the appetite for computing power keeps growing, however, that approach will not hold much longer: in the near future the power simply will not be there.

According to a survey by market research firm IDC (a sister company of CIO.com's publisher), the support infrastructure needed to house and operate servers now ranks second among data center managers' concerns, behind only price. Steve Conway, vice president of IDC's high-performance computing research group, says, "Three or four years ago these issues ranked twelfth, which means that at the time they were barely taken seriously at all."

This shift reflects both changes in technology and a sharp increase in demand for processing power. Virtualization and multi-core processors pack far more computing power into a much smaller footprint, and as businesses of every kind rely more heavily on computing for core operations, they keep squeezing additional racks into existing data centers. Gartner, meanwhile, has predicted that by the end of 2008 half of the world's data centers will be unable to meet the power and cooling requirements of recent high-density equipment.

These changes confront mainstream data center managers with the problems that those of us who run high-end supercomputing centers will be grappling with over the next decade: how to choose the right supporting infrastructure, how to optimize cooling for high-density server racks, how to balance data center efficiency against business needs, and how to keep track of every detail that can make or break the effort.

The data center I work in, the Department of Defense supercomputing center at the US Army Engineer Research and Development Center (ERDC), is two years into a thorough overhaul of its infrastructure. Designing a new data center or retrofitting an old one is a complicated undertaking, but the following six ideas can point you in the right direction at the outset. They are drawn from our experience over the past decade and have been field-tested during ERDC's ongoing infrastructure modernization.

1 Decide if you really need your own data center

Building out computing infrastructure is a challenging and expensive proposition. Before you commit to your next upgrade, be sure to ask yourself, "Do I really need my own data center?"

A minimal infrastructure includes power switching gear and generators, but almost no data center stops there. Add fault tolerance: an uninterruptible power supply (UPS) based on batteries or flywheels, a backup water supply (in case the municipal supply is interrupted), redundant components, and perhaps even multiple independent utility feeds. Then you have to protect against fire and natural disasters. And once the data center is built, you need to hire people to monitor and maintain it.

As Amazon chief technology officer Werner Vogels put it at the recent Next Generation Data Center Conference, unless yours is a business that runs data centers with exceptional efficiency, it may be better to run your applications in someone else's data center.

That option isn't right for everyone, but with utility costs rising and infrastructure demands continuing to grow, it is at least worth considering.
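To make the question concrete, it can help to sketch the numbers. The figures below are purely illustrative assumptions (build-out cost, staffing, power rates, and colocation pricing all vary widely); the structure of the comparison is the point, not the result.

```python
# Back-of-the-envelope comparison: running your own data center vs. hosting.
# All figures are illustrative assumptions, not benchmarks.

it_load_kw = 200                 # average IT load to support (assumed)
build_cost = 3_000_000           # facility build-out, amortized below (assumed)
amortization_years = 15
pue = 1.8                        # total facility power / IT power (assumed)
power_rate = 0.10                # $ per kWh (assumed)
staff_cost_per_year = 250_000    # operations and maintenance staff (assumed)

hosted_rate_per_kw_month = 250   # all-in hosting price per kW of IT load (assumed)

hours_per_year = 8760
own_energy_cost = it_load_kw * pue * hours_per_year * power_rate
own_total_per_year = own_energy_cost + staff_cost_per_year + build_cost / amortization_years

hosted_total_per_year = it_load_kw * hosted_rate_per_kw_month * 12

print(f"In-house: ${own_total_per_year:,.0f}/year")
print(f"Hosted:   ${hosted_total_per_year:,.0f}/year")
```

Under these particular assumptions hosting comes out ahead; with different utilization, power rates, or security requirements the answer flips, which is exactly why the exercise is worth doing before committing to a build.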

2 Weigh the costs and benefits of green design

Rising costs and consumption have pushed electricity to the front of data center planning. Transformers, wiring, cooling systems, and UPS equipment all impose large, fixed power losses, siphoning off a share of the incoming power before it ever reaches the first server.

The Green Grid, an association of information technology companies focused on improving data center energy efficiency, recommends rightsizing the infrastructure: removing redundant components and installing only the equipment the data center needs to operate today. According to the organization's "Energy Saving Data Center Guide," rightsizing the infrastructure can cut electricity costs by as much as 50%.
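One way to see where a figure like that could come from is to compare total facility power with the power that actually reaches the IT equipment (the ratio The Green Grid calls PUE). The numbers below are illustrative assumptions, not measurements.

```python
# Illustrative effect of rightsizing on distribution and cooling overhead.
# PUE = total facility power / IT equipment power (The Green Grid's metric).

def facility_power(it_load_kw: float, pue: float) -> float:
    """Total power drawn from the utility for a given IT load."""
    return it_load_kw * pue

it_load_kw = 500
oversized_pue = 2.4    # lightly loaded UPS/transformers, excess CRAC units (assumed)
rightsized_pue = 1.6   # infrastructure matched to the actual load (assumed)

before = facility_power(it_load_kw, oversized_pue)   # 1200 kW from the utility
after = facility_power(it_load_kw, rightsized_pue)   #  800 kW from the utility
overhead_before = before - it_load_kw                 #  700 kW of losses and cooling
overhead_after = after - it_load_kw                   #  300 kW

print(f"Overhead before: {overhead_before:.0f} kW, after: {overhead_after:.0f} kW")
print(f"Overhead reduction: {1 - overhead_after / overhead_before:.0%}")
```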

There is also the aging power grid to consider. Just as data center upgrade plans are taking shape, the utility infrastructure in the United States is showing its age, and the electricity supply itself is looking less dependable.

The bridge collapse in Minneapolis and the massive blackouts earlier this decade are signs of how quickly the country's critical infrastructure is deteriorating. On August 14, 2003, a blackout left roughly 50 million people around the Great Lakes without power. Events like these are expected to become more common in the coming years unless major steps are taken to curb demand and improve the reliability of the aging grid.

According to a recent long-term reliability assessment from the North American Electric Reliability Council, electricity demand is expected to grow 19% over the next 10 years while generating capacity grows only 6%. That means the supply cushion is shrinking, and an annual spike in demand or a regional weather event is more likely than ever to trigger outages across the country.

With utility outages likely to become more frequent in the near term, data center managers should design their own infrastructure for power reliability, including redundant power distribution and on-site generation, so that a commercial power failure does not take the system down.

Obviously, you want to design your infrastructure to be as efficient as possible (you can even treat efficiency as an explicit design requirement). How much energy you can save in the power distribution infrastructure, however, depends on how your organization values continuous availability and added capacity. At ERDC, for example, our supercomputing mission demands very high availability. Our power distribution infrastructure includes redundant switchgear, batteries, and generators, which let us perform routine maintenance without interrupting operations and keep running for extended periods if a component fails. These redundant devices add to our fixed power losses, but they reflect the requirement that our operations cannot be interrupted.
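A rough way to see what that redundancy buys, and what it costs, is to compare the availability of a single power path with that of redundant paths. The component availability below is an assumption for illustration, and the model assumes independent failures.

```python
# Availability of redundant power paths, assuming independent failures.
# The per-path availability figure is an illustrative assumption.

def parallel_availability(single: float, n: int) -> float:
    """Availability of n independent, redundant paths (any one can carry the load)."""
    return 1.0 - (1.0 - single) ** n

ups_path = 0.995   # assumed availability of one UPS + switchgear path
print(f"Single path: {parallel_availability(ups_path, 1):.5f}")   # 0.99500
print(f"Dual path:   {parallel_availability(ups_path, 2):.5f}")   # 0.99998

# Roughly 44 hours of expected downtime per year drops to about 13 minutes,
# at the price of a second path's capital cost and fixed electrical losses.
```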

3 Design for "closely coupled cooling" and greater flexibility

Computers are very good at two things: crunching numbers and turning electricity into heat. Roughly 30% of the electricity entering the data center ends up as heat inside the servers.

The traditional approach uses large chillers outside the facility to cool water, which is then piped to computer room air conditioning (CRAC) units on the machine room floor. This essentially floods the whole room with cold air, but it offers very little flexibility for dealing with specific hot spots.

The concept of "closely coupled cooling" has been used in supercomputing centers for years, and we have found it both efficient and effective. The idea is to place the cooling as close to the heat source as possible so the heat is removed where it is generated. This targets hot spots directly and shortens the air path, which takes far less fan power than pushing cold air around an entire room. Closely coupled cooling can support rack densities up to four times the norm, and as customers push for higher densities, all of the major server vendors now offer configurations designed for it.
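To see why moving room-scale volumes of air is so costly, consider how much airflow a single high-density rack needs. The sensible-heat relation for air gives the required flow for a given heat load and temperature rise; the rack load and temperature rises below are assumed values for illustration.

```python
# Airflow required to remove a rack's heat load with air cooling.
# Q = m_dot * cp * dT  ->  volume flow = Q / (rho * cp * dT)

RHO_AIR = 1.2      # kg/m^3, approximate density at room conditions
CP_AIR = 1.005     # kJ/(kg*K), specific heat of air

def required_airflow_m3s(heat_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to absorb heat_kw with a delta_t_k temperature rise."""
    return heat_kw / (RHO_AIR * CP_AIR * delta_t_k)

# A 24 kW high-density rack (assumed load):
for dt in (8, 12, 16):   # temperature rise across the equipment, in kelvin
    flow = required_airflow_m3s(24, dt)
    print(f"dT = {dt:>2} K -> {flow:.2f} m^3/s (~{flow * 2119:.0f} CFM)")
```

Because closely coupled designs shorten the air path and avoid mixing cold supply air with room air, each cubic meter of air does more useful work, and the fans move less of it for the same heat load.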

There are many rack-based and chip-based closely coupled cooling solutions. Some designs place the cooling unit in its own cabinet alongside the server rack; others mount a "top-down" cooler above each rack. Still others deliver chilled water directly to the rack's rear door, or interleave cooling drawers with the compute drawers inside the rack.

Chip-based cooling comes in two basic forms. The simplest delivers chilled water to one or more cold plates sitting above the server's heat sources. More complex systems apply an inert liquid directly to the server chips in a closed loop. Although commodity servers have only recently adopted this technology, the supercomputing industry has used it for decades; in 2006, ERDC's supercomputing center deployed chip-level phase-change heat exchange cooling on some of its Cray supercomputers.

All of these approaches require running chilled-water piping right up to the computer racks, and you need to account for that when designing your data center's plumbing. If the idea of bringing water into the heart of the data center makes your heart stop, rest assured that there is plenty of engineering practice for minimizing the risk: keep the pipes as low as possible under the raised floor, install leak detectors, keep electrical runs isolated from the water lines, and provide leak control measures such as gravity drains and drip pans.

4 Don't overlook the floor tiles

If you don't plan, or can't plan, for closely coupled cooling, there are still steps you can take to improve cooling efficiency.

Try to reduce the number of cables and pipes under the raised floor. That underfloor space is what the CRAC units use to push cold air to your computers, and the less that airflow is interrupted by cables and pipes along the way, the more efficiently your cooling energy is used. Minimizing obstructions under the floor also helps eliminate hot spots in the data center.

Another option is to commission a computational fluid dynamics study of your data center, or buy the software and run the study yourself. The approach uses a computer model to simulate airflow throughout the data center, which can help you find the causes of, and fixes for, cooling problems, including the optimal placement of perforated floor tiles.

A few years ago the ERDC supercomputing center used this approach to make sure we were getting the most out of our cooling system. In data centers, perforated floor tiles are typically placed in the cold aisles directly in front of the server racks. "Surprisingly, the most effective placement of perforated tiles is not always right in front of the machines," says Paula Lindsey, the data center's integration lead. The fluid dynamics study also showed that we needed tiles with larger perforations in a few key locations to let more cold air through.
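The reason tile open area matters so much follows from the orifice-flow relationship: delivered airflow scales with the tile's open area and the square root of the underfloor (plenum) pressure. The numbers below are assumptions for illustration, not measurements from our study.

```python
# Approximate airflow through a perforated floor tile, treated as an orifice:
#   Q = Cd * A_open * sqrt(2 * dP / rho)
from math import sqrt

RHO_AIR = 1.2   # kg/m^3
CD = 0.65       # discharge coefficient, a typical assumption for perforated plates

def tile_flow_m3s(open_fraction: float, plenum_pa: float, tile_area_m2: float = 0.36) -> float:
    """Airflow through one 600 mm x 600 mm tile at a given underfloor static pressure."""
    a_open = open_fraction * tile_area_m2
    return CD * a_open * sqrt(2 * plenum_pa / RHO_AIR)

for open_frac in (0.25, 0.56):                    # standard vs. high-open-area grate tiles
    q = tile_flow_m3s(open_frac, plenum_pa=12)    # 12 Pa plenum pressure (assumed)
    print(f"{open_frac:.0%} open tile: {q:.3f} m^3/s (~{q * 2119:.0f} CFM)")
```

In practice the plenum pressure is not uniform either, which is exactly what the CFD model captures, but the scaling explains why swapping in higher-open-area tiles at a few key spots can deliver much more cold air where it is needed.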

5 Move the support equipment outside

Choosing the right location for your infrastructure support systems improves the data center's energy efficiency and makes future expansion easier. One of the most important steps you can take is to move power and cooling equipment out of the data center itself; in fact, if you have the space, it is a good idea to move most of it outside the building altogether.

Here is an example. For a new supercomputer installation at ERDC, we needed to bring in 2 megawatts of additional power on a short schedule. We found that the additional UPS and generator equipment would not fit in the building that houses the rest of our power infrastructure. Our data center was sited, a decade ago, between a steep hillside and a road, so the solution (placing the equipment outdoors on a pad cut into the hillside) was expensive, and with the schedule already tight it added further delay.

Our new long-term design places most of these components in a modular, newly planned utility yard outside the building. "Moving the equipment out removes the constraints the building's walls put on us when we need to grow," says Greg Rottman, the engineer leading the upgrade. "It should give us the flexibility to meet our expansion and upgrade needs for at least another 10 years."

Moving power distribution and other support equipment outside is also good for the environment. In a report published earlier this year, The Green Grid found that as much as 25% of the electricity entering a data center is turned into heat by power distribution units, UPS gear, and switching equipment. Moving those devices out of the data center, and if possible out of the building, lowers your overall energy consumption because you no longer spend energy removing the heat they generate.
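A rough estimate of what that relocation saves: any heat released inside the conditioned space has to be removed by the cooling plant, which itself consumes power. The facility size, cooling efficiency, and power rate below are illustrative assumptions.

```python
# Cooling energy avoided by moving power distribution losses out of the conditioned space.
# Figures are illustrative assumptions only.

facility_input_kw = 2000            # total power entering the facility (assumed)
distribution_loss_fraction = 0.25   # share turned to heat in PDUs, UPS, switchgear (per the report)
cooling_cop = 3.5                   # kW of heat removed per kW of cooling power (assumed)
power_rate = 0.10                   # $ per kWh (assumed)

loss_heat_kw = facility_input_kw * distribution_loss_fraction   # 500 kW of heat
cooling_power_kw = loss_heat_kw / cooling_cop                    # ~143 kW just to remove it
annual_cost = cooling_power_kw * 8760 * power_rate

print(f"Heat from distribution gear: {loss_heat_kw:.0f} kW")
print(f"Cooling power if housed indoors: {cooling_power_kw:.0f} kW "
      f"(~${annual_cost:,.0f}/year at the assumed rate)")
```

The distribution losses themselves don't disappear when the gear moves outdoors, but the energy spent removing that heat from the data center does.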

6 Monitor and manage power use

Do you know how much power your data center actually uses? Do your servers draw more electricity than the vendor claims, or less? Will next year's equipment upgrade push consumption close to your facility's electrical capacity?

Monitoring of the power and cooling infrastructure must be part of any data center upgrade plan. Actively managing and monitoring energy use helps you plan for the future and lets you evaluate whether the steps you take to improve energy efficiency are actually working.
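What the monitoring produces can start out quite simple: periodic readings per circuit or PDU, rolled up into total draw and headroom against the facility's rated capacity. The sketch below assumes the readings have already been collected into a list of (name, kW) pairs; the names, capacity, and threshold are hypothetical placeholders.

```python
# Minimal sketch of a power-headroom report from periodic branch-circuit readings.
# The readings, capacity, and threshold below are hypothetical placeholders.

facility_capacity_kw = 1200.0
alert_threshold = 0.85          # flag when draw exceeds 85% of capacity

# In practice these would come from metered PDUs or branch-circuit monitors.
readings_kw = [
    ("row-A-pdu-1", 182.4),
    ("row-A-pdu-2", 176.9),
    ("row-B-pdu-1", 240.3),
    ("chiller-plant", 310.0),
]

total_kw = sum(kw for _, kw in readings_kw)
utilization = total_kw / facility_capacity_kw

print(f"Total draw: {total_kw:.1f} kW of {facility_capacity_kw:.0f} kW "
      f"({utilization:.0%} of capacity)")
if utilization > alert_threshold:
    print("WARNING: approaching facility capacity -- review upgrade plans")
```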

Convincing senior managers who are not directly responsible for data center operations to invest in upgrades can be a challenge. You can build out the data center's power monitoring system step by step, as the budget allows, and watch whether your energy-saving and efficiency measures actually pay off. Doing so gives you a long-term baseline for measuring power improvements and lets you plan the future more effectively.

About the author: John E. West is a senior researcher in the US Department of Defense's High-Performance Computing Modernization Program and executive director of the program's supercomputing center at the US Army Engineer Research and Development Center in Vicksburg.
