作者：叶明哲 Author: Eric Ye
翻译：苏旭江，费晓霞 Translated by: James Soh, Fiona Fei
English editing by: James Soh
Abstract: The construction and investment cost and the secure operation of a data center are directly related to the choice and scale of its cooling system. This article discusses the scale and redundancy of chilled-water cooling systems and suggests that large data centers could use district cooling, i.e. separate district cooling plants, which reduces total investment cost and enhances the resiliency of the entire cooling system.
数据中心空调水系统规模 Scale of the Cooling Water System for Large Data Center Projects
Large data center campuses can consist of many data center buildings. Instead of designing and housing the cooling infrastructure on a per-building basis, cost-saving considerations have opened the possibility of per-campus district cooling. The Inner Mongolia Cloud Computing Campus built by China Telecom consists of 42 buildings, each covering an area of 18,000 sq.m, and requires multiple cooling centers.
独立供冷（单幢机楼供冷） Dedicated cooling system (individual building cooling)
In this model, one mechanical plant room is dedicated to each data center building, and its cooling capacity serves only that building's IT load. Dedicated building cooling is easier to maintain and repair: when a failure occurs in the chilled-water system, it affects only that particular building and no others. Because it is easy to expand and limits the impact of failures to a single building, single-building cooling is the most widely used model in the data center industry. Figure 1 below shows an overview of two separate dedicated cooling systems for two data center buildings.
Figure 1 (dedicated cooling plant per each data center building):
But for a large data center campus with many buildings, each needing its own cooling system, such as the Inner Mongolia Cloud Computing Campus, this means 42 separate dedicated cooling systems, which occupy a large area and carry a correspondingly large infrastructure investment. With so many cooling systems, the operation and maintenance burden is inevitably high. And since redundancy and availability must be provided on a per-building basis, the investment cost of dedicated cooling infrastructure for a campus environment is also high.
Full cooling system redundancy (single-building cooling)
For a Class A computer room (with reference to the China GB 50174 standard, roughly equivalent to Tier IV), there must be two separate chilled-water systems and pipe networks, each carrying sufficient chilled water to handle the heat load of the whole building. Should one of them fail, the remaining system can cool the entire building; this is called system redundancy. If each system can bear 100% of the heat load, this is N+N system redundancy, as shown in Figure 2. It is, however, considerably more expensive.
Figure 2 (N+N cooling systems):
Another form of redundancy is N+1 redundancy at the component level. It is based on the assumption that a component failure will not cause a system failure. A Class B computer room (China GB 50174, roughly equivalent to Tier III) also requires two separate chilled-water pipe systems, but the chillers and cooling towers form a common pool connected to both pipe systems, with components running in active-standby mode so that planned system maintenance is supported. In other words, it is concurrently maintainable.
Comparison of investment between system redundancy and component redundancy
System redundancy (i.e. a dual system, or 2N) costs more than component redundancy: two systems require twice the investment of a single system. With reasonable design selection, however, the initial investment can still be managed while meeting the requirements of a 2N design or Class A (per the GB 50174 standard).
For a Class C computer room (as defined in GB 50174), component redundancy, for example N+X redundancy (X = 1 to N), is the requirement. In practice, when N < 4, it is unlikely that two units will fail at the same time, so X = 1 suffices for most situations. For critical computer rooms that are not designed to Class A requirements but need higher reliability or scalability, the room can initially be equipped with N+1 redundancy and later increased to N+X (X > 1) at the fully fitted-out stage.
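The intuition that one standby unit suffices for small N can be made concrete with a simple independent-failure model. This is only an illustrative sketch; the per-unit failure probability used below is an assumption for demonstration, not a figure from this article.

```python
from math import comb

def prob_at_least_k_failures(n_units: int, p_fail: float, k: int) -> float:
    """Probability that at least k of n_units independent units are down
    at the same time, each with per-unit failure probability p_fail."""
    return sum(
        comb(n_units, i) * p_fail**i * (1 - p_fail)**(n_units - i)
        for i in range(k, n_units + 1)
    )

# Assume (hypothetically) a 2% chance that any given chiller is down.
p = 0.02
for n in (2, 3, 4):
    # With N duty units + 1 standby, cooling is lost only if two or
    # more units are failed simultaneously.
    print(f"N={n}: P(loss of capacity) = {prob_at_least_k_failures(n + 1, p, 2):.6f}")
```

Under this toy model the chance of losing capacity with N+1 stays well below 1% for N < 4, which is consistent with the article's suggestion that X = 1 handles most situations at small N.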
区域集中制冷 District Cooling
Single-building cooling has a cost disadvantage at campus scale because each building has its own cooling plant room, especially when 2N cooling system redundancy is chosen. For example, the Inner Mongolia Cloud Computing Campus, planned for 42 data center buildings, would need 42 duty cooling systems plus 42 standby cooling systems, each standby system comprising its own chillers, pumps, cooling towers and piping. However, if we treat two data center buildings as a single entity during the design phase, so that building A's cooling system is building B's backup and vice versa, the number of cooling systems drops from 84 to 42.
District cooling refers to the centralized production and distribution of chilled water, allowing a number of buildings to rely on two or more large dedicated cooling centers, as shown in Figure 3. Chilled water is delivered via underground insulated pipelines to cool the indoor air of the buildings within a particular district. For data centers, district cooling has added benefits over the residential case. First, because of a data center's high heat load and the compact arrangement of buildings into clusters, district cooling saves space by combining all the cooling systems into one or a few dedicated centralized cooling plants, and the aggregated heat load allows chilled water to be produced by higher-capacity chillers, which in turn improves the energy efficiency ratio. In addition, reducing the floor area occupied by cooling systems and using fewer, larger chillers reduces the initial investment in cooling equipment.
District cooling also reduces the number of cooling systems required in reserve, which further lowers the investment cost.
For example, if 4 data center buildings each require a dedicated cooling system in a 3+1 configuration, that amounts to 16 cooling systems (16 of everything: chillers, cooling towers, pumps and so on). Combining them into two centralized cooling plants of 6+1 each reduces the number of chillers needed for redundancy, saving 2 complete cooling systems in total. And if the cooling capacity required in each building is only slightly more than 2 chillers but must be rounded up to 3 to meet the need (N), combining the systems gives rise to even greater reductions in the total number of cooling systems required.
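The standby-sharing arithmetic above can be sketched in a few lines (the function names are ours, introduced only for illustration):

```python
def dedicated_units(buildings: int, duty: int, standby: int = 1) -> int:
    """Chiller count when every building has its own duty + standby plant."""
    return buildings * (duty + standby)

def district_units(plants: int, total_duty: int, standby_per_plant: int = 1) -> int:
    """Chiller count when the duty load is pooled across centralized plants."""
    return total_duty + plants * standby_per_plant

# The article's example: 4 buildings, each with 3 duty + 1 standby chillers.
print(dedicated_units(4, 3))   # 16 complete cooling systems
# Two centralized 6+1 plants cover the same 12 duty chillers:
print(district_units(2, 12))   # 14 -- two systems saved
```

The saving grows with the number of buildings, because each centralized plant needs only one standby unit regardless of how many buildings it serves.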
Centralized operation also simplifies control and maintenance management, so fewer staff are needed.
If there is a significant difference in energy cost between daytime and nighttime electricity from the power grid, a district cooling system makes it practical to produce ice storage with special chillers during the cheaper night hours and release that stored cooling during the more expensive daytime, achieving further cost savings. A caveat is that such load shifting consumes more energy overall, because extra energy is expended to chill water into ice or cold storage, and ice-making chillers are not very efficient.
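The break-even condition for such load shifting can be illustrated with a back-of-envelope calculation. All numbers below are illustrative assumptions, not figures from this article:

```python
# Hypothetical tariffs and loads, chosen only to illustrate the trade-off.
day_rate, night_rate = 1.0, 0.4   # relative electricity price, day vs night
cooling_energy_kwh = 10_000       # compressor energy to meet the daytime load
ice_penalty = 1.15                # assume ice-making uses ~15% more energy

direct_cost = cooling_energy_kwh * day_rate
ice_storage_cost = cooling_energy_kwh * ice_penalty * night_rate

print(round(direct_cost), round(ice_storage_cost))   # 10000 4600
# Shifting pays off only while night_rate * ice_penalty < day_rate,
# even though total energy consumed is higher.
```

In other words, the caveat in the text is a cost-versus-energy trade: the shifted scheme uses more kilowatt-hours but can still cost much less when the day/night tariff spread is wide enough.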
Japan was one of the early adopters of district cooling, and its technologies and experience are quite mature. District cooling and heating are used at Nihon Shinjuku sinto and Harumi Triton Square in Japan; these systems achieve a COP of 1.19, ranking among the best district cooling systems in Japan.
In Singapore, district cooling is used in the Marina Square project, Changi Business Park and the Biopolis district. District cooling was also considered for the Singapore Data Centre Park, launched in 2013, which will eventually accommodate 7 data center buildings.
In China, one of the largest district cooling systems in the world cools the entire Guangzhou University Town. As shown in Figure 4, four chiller plants (in blue) in the district cooling and distribution system are responsible for cooling 10 universities, multiple student hostels and two central business districts. The whole system consists of three subsystems: the chiller plants, the air-conditioning chilled-water pipe network and the terminal cooling systems. The installed gross capacity of the district cooling system is 106,000 RT, its ice storage capacity is 260,000 RT, and the total building area covered is 3.5 million square meters.
数据中心的区域供冷设计 District cooling design for Data Centers
As mentioned earlier, district cooling centralizes the cooling systems of the data center buildings and achieves higher efficiency thanks to the aggregated, and therefore higher, heat load. As shown in Figure 5, if on-site CCHP (combined cooling, heating and power, also known as tri-generation) is chosen, lithium bromide absorption cooling can ease the power draw from the grid. Where free cooling resources such as cold lake water are available, they can also be used for district cooling. The entire length of the water piping should not exceed 4 km, or the pump capacity will need to be increased, and the water flow rate should not be too high. It is best to use dual pump sets (chilled-water pumps and condenser-water pumps) to control the chilled-water and condenser-water flows and optimize pump power consumption.
When system redundancy is considered, a district-cooled data center will need two cooling systems to meet the 2N standard, so that each chilled-water plant can supply 100% of the cooling load, as shown in Figure 6.
There will eventually be 42 data center buildings in the Inner Mongolia Cloud Computing Campus when fully completed by China Telecom. If each building has a cooling plant room with four 1,200 RT chiller units (3 duty, 1 standby), the campus will need 42 cooling plant rooms and 168 chiller units with their associated pumps and cooling towers, of which 42 chiller units and cooling towers plus associated pumps are all on standby. Furthermore, because each building may have a different heat load, these plants will deliver different cooling capacities, i.e. different cooling efficiencies per building. A 1,200 RT centrifugal chiller rated at 0.6 kW/RT at full load drops to about 0.8 kW/RT when run at partial load.
If we use district cooling and set up two centralized cooling plants as indicated in Figure 7, the most stringent resiliency level can still be met. In this case we can choose centrifugal chillers of 2,000 RT capacity, with a full-load efficiency of up to 0.52 kW/RT, which saves a great deal of energy. The plant design can incorporate secondary condenser pumps to reduce the energy consumption of the water distribution system, and the piping can be fitted with electronically controlled bypass valves and controls so that, during cooler seasons, only the cooling towers run without the chillers, significantly increasing energy efficiency.
The total number of 2,000 RT chillers required in a full 2N design is 76 x 2 = 152 units. This saves investment and operating cost, with 16 fewer chillers, cooling towers and associated pumps, while raising the overall cooling system resiliency from N+1 to 2N.
Figure 7: data center with two central cooling plants
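Using the figures quoted above, the arithmetic behind the two-plant comparison can be checked with a short sketch; the only assumption we add is that the 151,200 RT total duty load is rounded up to whole 2,000 RT units:

```python
import math

# Per-building baseline from the article: 42 buildings, each with
# 3 duty + 1 standby 1,200 RT chillers.
buildings, duty_per_bldg, standby_per_bldg = 42, 3, 1
per_building_total = buildings * (duty_per_bldg + standby_per_bldg)   # 168 units
total_duty_rt = buildings * duty_per_bldg * 1200                      # 151,200 RT

# District cooling: pool the duty load into 2,000 RT chillers, 2N overall.
duty_2000 = math.ceil(total_duty_rt / 2000)   # 76 duty units
district_total = duty_2000 * 2                # 152 units across two plants

print(per_building_total - district_total)    # 16 fewer chillers

# Efficiencies quoted in the article (compressor power per RT of cooling):
partial_load_1200_kw_rt = 0.8   # 1,200 RT unit run at partial load
full_load_2000_kw_rt = 0.52     # 2,000 RT unit at full load
saving = (partial_load_1200_kw_rt - full_load_2000_kw_rt) / partial_load_1200_kw_rt
print(f"{saving:.0%} less compressor power per RT")   # 35%
```

So the district design not only removes 16 machines but also lets the remaining chillers run near their full-load sweet spot, compounding the energy saving.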
The cooling capacity required for the cloud computing campus is huge, which allows us to further refine the design with 4 centralized cooling plants, each taking 33% of the cooling load, to reduce the initial capital cost; the chillers and cooling towers can then be installed in stages to smooth out the investment instead of making one lumpy outlay.
For example, with 4 centralized cooling plants, the 76 x 2,000 RT duty chillers divided by 3 gives roughly 26 chillers per plant, for a total of 104 chillers plus associated cooling towers and pumps, a further saving over the two-plant design, while each data center building still sees a 2N cooling supply.
Figure 8: data center with four centralized cooling plants.
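The four-plant sizing works because each plant carries a third of the load, so losing any one plant still leaves 100% capacity, an N+1 arrangement among plants. A quick check of the article's numbers:

```python
import math

duty_chillers = 76   # total 2,000 RT duty units from the two-plant design
plants = 4

# Size each plant for one third of the campus load, so the loss of any
# one plant still leaves full capacity (N+1 among plants).
per_plant = math.ceil(duty_chillers / 3)   # 26 chillers per plant
total = plants * per_plant                 # 104 chillers in all
print(per_plant, total)                    # 26 104
```

Compared with the 152 chillers of the strict 2N two-plant design, the four-plant N+1 layout needs only 104, trading a small reduction in redundancy margin for a large reduction in capital cost.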
Compared with single-building cooling, district cooling offers many benefits for a campus-scale data center park. First, it saves land and space, construction cost, labour cost (fewer operations staff are required) and energy, through higher-efficiency design and equipment. Moreover, high-voltage chillers can be chosen to reduce power distribution losses, and chilled-water storage tanks can be centralized to enhance the resiliency of the entire cooling system.
However, district cooling also has disadvantages, such as higher initial investment in campus-wide piping, valves and controls. When this large, complex cooling system fails, it has a major impact on the chilled-water supply to the entire data center park. Scheduling and coordinating maintenance is also more complex given the scale and the potential impact should anything go wrong. There are still only a few cases of district cooling being used for data centers in China, and data center and M&E designers, contractors, suppliers and operators remain cautious about the approach. But with more large-scale data center campuses under consideration in the western and northwestern regions of China, and the various stakeholders gaining exposure and experience, it will not be long before district cooling sees greater use in large data center projects.
1. Shi Zhaoyu (石兆玉), Tsinghua University: Design of Distributed Variable-Frequency Circulating Pumps for Heating Systems (《供热系统分布式变频循环水泵的设计》)
2. China National Standard GB 50174-2008
标签：供冷规模 独立供冷 区域供冷
Tags: cooling scale, dedicated cooling, district cooling
Original Chinese Article available on: Modern Data Center Network 现代数据中心网
About the Author and Translators:
The main author, Eric Ye (叶明哲), has worked in telecommunications rooms and data centers for over 24 years. Eric has published 18 papers on data center technology and applications, mostly on mechanical cooling plant and technology, and has spoken at data center forums.
The English translation was done by Ms Fiona Fei (DCD China) and James Soh (Newwit Consulting). Editing of the English content and context was by James Soh.