Centralized District Cooling Plants for Large and Hyper-Scale Data Centers 大型和超大型数据中心空调水系统供冷规模设计

作者:叶明哲   Author: Eric Ye

翻译:苏旭江,费晓霞   Translated by: James Soh, Fiona Fei

English editing by: James Soh

摘要:数据中心水冷系统采用何种形式和规模建设,直接关系到数据中心建设投资的成本和运行的安全;本文主要对水系统供冷的规模和冗余情况进行阐述和探讨,并提出在大型数据中心基地可以采用区域供冷方式,设立独立的区域供冷中心,从而降低数据中心空调系统总投资和提升数据中心空调系统可用性。

Abstract: The construction and investment cost and the operational security of a data center are directly related to the form and scale of its chilled water cooling system. This article discusses the scale and redundancy of the cooling water system and suggests that large data center campuses can adopt district cooling, that is, independent district cooling centers, which reduces the total investment in the cooling system and improves its availability.

 

  1. 数据中心空调水系统规模 Scale of the Data Center Cooling Water System for Large Data Center Projects

在大型数据中心,多幢数据机楼组成庞大的数据中心群机楼,选择制冷中心的数量和制冷规模是必须要考虑的一个问题,这直接关系到数据中心的建设成本和空调系统可用性。制冷规模可以采用单幢数据机楼供冷或区域供冷。如中国电信在建的云计算内蒙古园区,就由42幢楼组成,每幢楼约18,000M2,需要多个供冷中心。

A large data center campus can consist of many data center buildings. The number and size of the cooling plants must be considered carefully, because they directly affect both the construction cost and the availability of the cooling system. Cooling can be provided either per building or as district cooling for the whole campus. The Inner Mongolia Cloud Computing Campus being built by China Telecom, for example, consists of 42 buildings, each of about 18,000 sq. m, and requires multiple cooling centers.

 

  2. 独立供冷(单幢机楼供冷) Dedicated Cooling System (Individual Building Cooling)

就是每一幢机楼设置一个单独的制冷机房,该制冷机房只对自己这幢楼进行供冷。单幢机楼供冷系统比较简单,这有利于系统的维护和检修,当水系统发生故障时,只对该楼设备造成影响,不会影响到别的机楼,故影响面较小,是目前数据中心普遍采用的方式,下图1是独立供冷示意图:

In this arrangement each data center building has its own mechanical plant room, and the cooling capacity of that plant room serves only that building's IT load. A dedicated building cooling system is relatively simple, which makes maintenance and repair easier. When a failure occurs in the chilled water system, it affects only that building and not the others, so the impact is limited. For these reasons, per-building cooling is currently the most widely used model in the data center industry. Figure 1 below shows two separate dedicated cooling systems for two data center buildings.

Figure 1 (dedicated cooling plant for each data center building):

Picture1-district-cooling-dedicated.png

但对于多幢机楼组成的数据中心,需要每个机楼均搞一个制冷机房,如云计算内蒙园区,按这种方式需要建42个独立的制冷中心。这种方式导致制冷机房较多,相对占地面积较大,由于制冷机组多,操作维护工作量较大;而且各个供冷中心内部,为了安全,也需要考虑冗余和备份,导致投资过大。

For a campus with many buildings, however, each building needs its own cooling plant. In the case of the Inner Mongolia Cloud Computing Campus this would mean 42 separate dedicated cooling plants, which occupy a large area and carry a correspondingly large infrastructure investment. With so many chiller units, the operation and maintenance workload is inevitably high. Moreover, because redundancy and backup must be provided within each building's plant, the total investment for a campus built this way becomes very large.

 

2.1.  独立供冷的系统冗余

System Redundancy (Single Building Cooling)

如果是A级机房(T4),水管管路必须是两个独立的系统,每个系统可以独立承担单幢楼数据中心所有的热负荷,运行时两个系统必须同时在线运行,单个系统故障不会对数据中心产生任何影响,这就是系统冗余。每个系统都独立承担100%的热负荷,这就是N+N系统冗余,如图2,但是这样投资很大。

For a Class A computer room (per China's GB 50174 standard, roughly equivalent to Tier IV), there must be two independent chilled water systems and pipe networks, each able to carry the full heat load of the building on its own. Both systems run online at the same time, so the failure of one system has no effect on the data center; this is what we call system redundancy. When each system can carry 100% of the heat load, the arrangement is N+N (2N) system redundancy, as shown in Figure 2, but the investment is considerably higher.

 

Figure 2 (N+N cooling systems):

picture3-district-cooling-2N.png

2.2.  组件冗余

Component Redundancy

如果不满足系统冗余,仅仅是部分组件故障有冗余,就叫组件冗余。B级机房(T3),水系统管路也需要设计为两个系统,但是主机和末端可以公用,运行可以采用主备用方式进行,支持有计划的系统检修;组件冗余就是系统中常用的组件考虑冗余,如水泵采用N+1方式,冷机采用N+1方式,冷却塔采用N+1方式,机房空调采用N+X方式,这些就是组件冗余。

If full system redundancy is not provided and only individual components are backed up, the arrangement is called component redundancy. For a Class B computer room (GB 50174, roughly equivalent to Tier III), the chilled water piping is still designed as two systems, but the chillers and terminal units can be shared between them and operated in an active-standby mode, which supports planned system maintenance; in other words, the plant is concurrently maintainable. Component redundancy means providing redundancy for the commonly used components of the system, for example N+1 pumps, N+1 chillers, N+1 cooling towers and N+X computer room air conditioning units.

2.3.  系统冗余和机组冗余投资比较

Investment Comparison Between System Redundancy and Component Redundancy

采用高标准,势必会带来投资的增大。采用系统冗余的投资很大,从纯正的字面理解,双系统可能是单系统200%的投资,但如果合理设计系统冗余,达到A级标准(T4)的同时,也是可以大幅降低初期的投资费用。

Designing to a higher standard inevitably increases the investment. System redundancy (a dual system, or 2N) costs much more than component redundancy; taken literally, a dual system can require 200% of the investment of a single system. With a well-considered design, however, the initial investment can still be reduced substantially while meeting the Class A (GB 50174) or Tier IV requirement.
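To make the "roughly 200%" comparison concrete, here is a minimal sketch (Python; the 3,600 RT building load and 1,200 RT chiller size are illustrative assumptions, not figures from this article) comparing installed capacity under 2N system redundancy and N+1 component redundancy.

```python
import math

def installed_capacity_2n(design_load_rt: float) -> float:
    """2N: two independent systems, each sized for 100% of the load."""
    return 2 * design_load_rt

def installed_capacity_n_plus_1(design_load_rt: float, chiller_rt: float) -> float:
    """N+1: N duty chillers sized to meet the load, plus one standby unit."""
    n_duty = math.ceil(design_load_rt / chiller_rt)
    return (n_duty + 1) * chiller_rt

# Illustrative only: a 3,600 RT building served by 1,200 RT chillers.
load_rt = 3600
print(installed_capacity_2n(load_rt))              # 7200 RT installed (2N)
print(installed_capacity_n_plus_1(load_rt, 1200))  # 4800 RT installed (N+1)
```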

对于B、C级机房,机组不需要系统冗余,只需要考虑机组的冗余,一般采用的N+X 冗余,X=1~N,从实际运行来看,当N值较少时(N<4),2台机组同时出现故障的几率非常低,x取1基本已经可以应对突发故障情况。对于部分重要机房,不严格按照A级机房设计的,而又需要提高可靠性或者负载扩容的,可以先按照N+1配置,但预留扩容一台机组的位置。

For Class B and Class C computer rooms (GB 50174 definitions), system redundancy is not required; only component redundancy, typically N+X with X = 1 to N, needs to be considered. Operating experience shows that when N is small (N < 4), the probability of two units failing at the same time is very low, so X = 1 is usually enough to cover a sudden failure. For important computer rooms that are not designed strictly to Class A but need higher reliability or future load growth, the plant can be configured as N+1 initially, with space reserved for one additional unit.
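To see why X = 1 usually suffices when N is small, the following sketch (Python) uses a simple binomial model with an assumed, purely illustrative per-chiller unavailability of 2% to estimate how often two or more units of an N+1 plant would be down at once.

```python
from math import comb

def prob_two_or_more_down(units: int, p_down: float) -> float:
    """Probability that two or more of `units` independent chillers are
    unavailable at the same time (simple binomial model)."""
    p_none = (1 - p_down) ** units
    p_one = comb(units, 1) * p_down * (1 - p_down) ** (units - 1)
    return 1 - p_none - p_one

# Assumed, illustrative per-chiller unavailability of 2%.
for n in (2, 3, 4):
    print(f"N={n}, N+1={n + 1} units:",
          round(prob_two_or_more_down(n + 1, 0.02), 5))
```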

 

  3. 区域集中制冷 District Cooling

单幢机楼供冷有一个缺点,就是1幢楼有一个制冷中心,如果数据中心够大,那建设的供冷中心就会足够多,如云计算内蒙云园区,按照单幢楼供冷的特点,需要42个供冷中心,而且各个数据中心内部需要冷机、水泵、冷塔、管路的冗余和备份,这些备份和冗余在各个数据中心之间无法实现共享,导致设备投资的大量浪费。以T4标准的数据中心举例,每幢楼建2个独立的水系统,2幢楼就需要4个独立系统;如果能够以两幢数据中心为单元进行数据中心建设, A楼的水系统作为B楼的备份,B楼的水系统作为A楼的备份,这样系统就会简单的多,每个楼只需要建一个水系统就可以了。

 

Per-building cooling has a cost disadvantage at campus scale because every building needs its own cooling plant room, and the redundancy and backup provided inside each building (chillers, pumps, cooling towers and piping) cannot be shared across buildings, which wastes a great deal of equipment investment. At the Inner Mongolia Cloud Computing Campus, with 42 planned data center buildings, this approach means 42 separate cooling plants. Taking a Tier IV level design as an example, each building would need two independent water systems, so two buildings would need four. If, instead, two buildings are treated as a single unit at the design stage, with building A's water system acting as building B's backup and vice versa, the arrangement becomes much simpler: each building only needs to build one water system, and the number of systems for the campus is cut from 84 to 42.

区域供冷是指若干数据机楼统一由两个或几个专门的大型制冷中心进行供冷,通过管道把制取的冷冻水送到每一幢数据中心机楼,如图3,如果数据中心区域供冷,相比民用区域供冷,优势更为明显:数据中心发热量大,建筑集中,该种方式统一设置冷冻站,减少机组占地面积,集中供冷有利于提高机组的负荷率,冷源效果高,可获得更大的能效比;其次数据中心集中冷源站占地少,降低了冷源设备的初投资;另外数据中心区域供冷减少了机组备用数量,相对减少机组的投资;集中操作和运维,易于优化控制和维护管理,可以减少运维人员。

District cooling refers to the centralized production and distribution of chilled water: a number of buildings are served by two or more large dedicated cooling centers, and the chilled water is delivered through insulated pipelines to each data center building, as shown in Figure 3. Compared with district cooling for residential or commercial districts, the advantages for data centers are even clearer. Data centers have a high heat load and the buildings are closely grouped, so consolidating the chiller plants reduces the floor area they occupy, raises the load factor of the chillers and therefore improves the overall energy efficiency ratio. The smaller footprint and the larger, centralized chillers also reduce the initial investment in cooling equipment.

In addition, district cooling reduces the number of standby units required, which further lowers the investment.

For example, if four data center buildings each need a dedicated 3+1 cooling plant, that is 16 complete sets of equipment (chillers, cooling towers, pumps and so on). Combining them into two centralized plants of 6+1 each reduces the count to 14 sets, saving two complete sets that existed only for redundancy. If the load of each building is slightly more than two chillers' worth, so that a dedicated plant must be rounded up to three duty chillers, then aggregation offers even greater opportunities to reduce the total number of chillers required.
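The counting argument above can be written out as a small sketch (Python; the per-building loads and the 1,200 RT chiller size are assumed for illustration and are not from the article).

```python
import math

def chillers_dedicated(buildings: int, load_per_building_rt: float,
                       chiller_rt: float, standby: int = 1) -> int:
    """Each building gets its own N+standby plant."""
    per_building = math.ceil(load_per_building_rt / chiller_rt) + standby
    return buildings * per_building

def chillers_centralized(buildings: int, load_per_building_rt: float,
                         chiller_rt: float, plants: int = 2,
                         standby_per_plant: int = 1) -> int:
    """The same total load served by a few centralized plants."""
    load_per_plant = buildings * load_per_building_rt / plants
    return plants * (math.ceil(load_per_plant / chiller_rt) + standby_per_plant)

# Case matching the 3+1 example: assume 3,600 RT per building, 1,200 RT chillers.
print(chillers_dedicated(4, 3600, 1200))    # 4 x (3+1) = 16 chillers
print(chillers_centralized(4, 3600, 1200))  # 2 x (6+1) = 14 chillers

# If each building needs "slightly more than 2 chillers" (assume 2,500 RT),
# aggregation also removes the per-building rounding waste.
print(chillers_dedicated(4, 2500, 1200))    # 4 x (3+1) = 16 chillers
print(chillers_centralized(4, 2500, 1200))  # 2 x (5+1) = 12 chillers
```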

Finally, centralized operation makes it easier to optimize control and maintenance management, and fewer operations staff are needed.

If there is a significant difference between daytime and nighttime electricity tariffs, district cooling also opens up the option of ice (or chilled water) storage: special chillers produce and store ice during the cheaper night hours, and the stored cooling is released during the more expensive daytime hours. A caveat is that this load shifting consumes more energy overall, because extra energy is expended to freeze the water and ice-making chillers are less efficient.

Figure 3:

Picture4-district-cooling-piece
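The day-versus-night tariff trade-off described above can be sketched with a few lines of arithmetic (Python; the load, operating hours, tariffs and the 25% storage/efficiency penalty are illustrative assumptions, not values from the article).

```python
def daily_cooling_cost(load_kw: float, hours: float,
                       kw_elec_per_kw_cooling: float, tariff: float) -> float:
    """Electricity cost of meeting a constant cooling load for `hours`."""
    return load_kw * kw_elec_per_kw_cooling * hours * tariff

# Assumptions (illustrative only): 10 MW of daytime cooling for 12 hours,
# day tariff 1.0 per kWh vs night tariff 0.4 per kWh, and a 25% energy
# penalty for ice making and storage losses.
direct  = daily_cooling_cost(10_000, 12, 0.20, 1.0)
shifted = daily_cooling_cost(10_000, 12, 0.20 * 1.25, 0.4)
print(direct, shifted)  # 24000.0 vs 12000.0: cheaper despite using more energy
```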

区域供冷系统实践的技术已经非常成熟。日本是对区域供冷系统实践较早的国家,技术、经验相当成熟,采用区域供冷后,日本新宿新都和日本东京晴海Triton广场就采用区域供冷供热系统,该些区域供冷供热系统的年均一次能源能耗COP达到1.19,高居日本全国区域供冷系统第1、2位。

在新加坡,区域供冷系统案例包括购物商城加商用楼的滨海购物中心,樟宜商业区,纬壹城等项目。在建的新加坡西部的数据中心区也包括区域供冷系统,前期建设还没有实施这个区域供冷系统。

Japan was an early adopter of district cooling, and its technology and operating experience are mature. District cooling and heating systems are used in the Shinjuku Shintoshin district and at Harumi Triton Square in Tokyo; these systems achieve an annual average primary energy COP of about 1.19, ranking first and second among district cooling systems in Japan.

In Singapore, district cooling is used at the Marina Square retail and commercial development, Changi Business Park and the one-north (Biopolis) district. District cooling was also considered for the Singapore Data Centre Park in the west of the island, launched in 2013 and planned to eventually accommodate 7 data center buildings, although it was not implemented in the initial phase of construction.

表 Table 1:

table1-district-cooling-piece.png

在中国,广州大学城区域供冷是亚洲规模最大、世界第二大区域供冷系统,图4,整体整个系统建有4个冷站,空调负荷主要是10所高校的教学区和生活区大楼以及两个中心商业区供冷。整个系统由冷站、空调冷冻水管网及末端供冷系统三个子系统组成。该区域供冷总装机容量10.6万冷吨,蓄冰规模26万冷吨,供冷总建筑面积达350万m2。

In China, the district cooling system serving Guangzhou University Town is the largest in Asia and the second largest in the world. As shown in Figure 4, the system has four chiller plants (shown in blue), which cool the teaching and residential buildings of 10 universities as well as two central commercial districts. The whole system consists of three subsystems: the chiller plants, the chilled water distribution network and the terminal cooling systems in the buildings. The installed capacity of the district cooling system is 106,000 RT, its ice storage capacity is 260,000 RT, and the total building area served is 3.5 million square meters.

Figure 4:

Picture4-district-cooling-GZ.png

  4. 数据中心的区域供冷设计 District Cooling Design for Data Centers

对于数据中心机楼群,之前已经提过因为数据机楼集中,而且发热量大,采用区域供冷更具有优势,图5,在建设有热电冷三联供区域的地方,也可以采用溴化锂制冷,降低电网压力;在能够利用自然冷源的场合,如利用低温湖水供冷的,也可以考虑采用区域供冷方案。从区域供冷的实际使用情况来看,水管的距离不宜过长,否则会导致水泵耗能增加,最好控制在4公里以内,流量不宜太大,最好采用二级泵设计,这样可以降低水泵的消耗,这些条件均适合数据中心。

As discussed above, district cooling suits data center campuses because the buildings are concentrated and the heat load is high; see Figure 5. Where on-site CCHP (combined cooling, heating and power, also known as tri-generation) is available, lithium bromide absorption chillers can be used to ease the draw on the power grid. Where a free cooling source such as cold lake water is available, it can also be incorporated into a district cooling scheme. Operating experience with district cooling shows that the chilled water piping should not be too long, preferably within 4 km, otherwise pump energy consumption rises; the flow rate should not be too high, and a two-stage (primary-secondary) pumping design is preferred to reduce pump energy. All of these conditions suit data centers well.

 

 

Figure 5:

Picture5-district-cooling-piece
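The guidance above that piping runs should stay within about 4 km can be related to pump energy through the standard hydraulic power relation P = ρ g Q H / η, where the head H grows with the friction loss along the route. The sketch below (Python) uses illustrative values; the flow rate, friction gradient, static head and pump efficiency are assumptions, not figures from the article.

```python
RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def pump_power_kw(flow_m3_s: float, route_length_m: float,
                  friction_m_per_100m: float = 3.0,
                  static_head_m: float = 10.0,
                  pump_efficiency: float = 0.75) -> float:
    """Hydraulic pump power P = rho * g * Q * H / eta, where the head H is a
    fixed static component plus a friction loss that grows with pipe length."""
    head_m = static_head_m + friction_m_per_100m * route_length_m / 100.0
    return RHO * G * flow_m3_s * head_m / pump_efficiency / 1000.0

# Illustrative: 1 m^3/s of chilled water (roughly 20 MW of cooling at a
# 5 degC delta-T) pumped over 2 km versus 6 km of supply piping.
print(round(pump_power_kw(1.0, 2000)))  # ~916 kW
print(round(pump_power_kw(1.0, 6000)))  # ~2485 kW
```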

如果考虑系统冗余,数据中心区域供冷就需要两个制冷中心,组成2N系统,每个冷冻站制冷能力可承担100%负荷,如图6:

When system redundancy is required, a district cooled data center campus needs two cooling centers arranged as a 2N system, with each chilled water plant able to carry 100% of the cooling load, as shown in Figure 6.

Figure 6 (2N):

Picture2-district-cooling-centralized

中国电信云计算内蒙园区最终将有42幢机房楼,如果每幢机楼单独配一个冷冻站,每个冷冻站安装4台1200冷冻水机组(三主一备),相当于建设42个冷冻站,一百六十八台冷冻机组,由于每个机楼负荷不同,这些冷冻站输出也不相同,导致冷机效率也不仅相同,如1200冷吨离心机组满载效率为0.6KW/冷吨;当部分负荷时,效率下降为0.8KW/冷吨,如果采用区域供冷,设立二个冷冻站,如图7,就可以达到T4标准,而且采用2000冷吨以上的离心机组,冷机满载效率上升到0.52KW/冷吨,故可以节约大量的能耗,另外两级泵设计,可以降低输配系统的能耗,加上冬季采用冷却塔供冷技术,节能的效果会更加明显;另外可以节约多达42台备用冷机的投资,同样的相应的备用冷却塔、备用水泵的投资也可以降下来。

When fully built, the China Telecom Inner Mongolia Cloud Computing Campus will have 42 data center buildings. If each building has its own cooling plant with four 1,200 RT chillers (three duty plus one standby), the campus will need 42 plant rooms and 168 chillers with their associated pumps and cooling towers, of which 42 chillers (plus the matching towers and pumps) exist purely as standby. Moreover, because each building carries a different heat load, the plants run at different outputs and therefore at different efficiencies: a 1,200 RT centrifugal chiller that achieves about 0.6 kW/RT at full load may drop to about 0.8 kW/RT at part load.

If district cooling is adopted instead, with two centralized cooling plants as shown in Figure 7, the campus can meet the most stringent resiliency level. Centrifugal chillers of 2,000 RT or more can then be used, with full load efficiency improving to about 0.52 kW/RT, which saves a great deal of energy. A two-stage (primary-secondary) pumping design further reduces the energy consumed by the distribution system, and using the cooling towers alone for free cooling in winter, via bypass valves and the associated controls, makes the energy savings even more significant.

In a full 2N design the campus needs about 76 duty chillers of 2,000 RT, or 76 x 2 = 152 units in total. That is still 16 fewer chillers, cooling towers and associated pumps than the 168 required by per-building plants, so both the capital and the operating cost fall while the resiliency of the cooling system rises from N+1 to 2N.

Figure 7: data center with two centralized cooling plants

Picture7-district-cooling-InnerMongolia1.png
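Using the chiller counts and the 0.8 kW/RT versus 0.52 kW/RT efficiencies quoted above, plus an assumed figure for equivalent full-load hours (not from the article, and taking the dedicated plants at their part-load efficiency), the scale of the difference can be sketched as follows.

```python
import math

# Figures quoted in the article
BUILDINGS = 42
DUTY_PER_BUILDING = 3                     # 3 duty + 1 standby per building
SMALL_RT, SMALL_KW_PER_RT = 1200, 0.8     # dedicated plants, taken at part load
LARGE_RT, LARGE_KW_PER_RT = 2000, 0.52    # large chillers at full load

# Assumption (not from the article): equivalent full-load hours per year
HOURS = 6000

total_load_rt = BUILDINGS * DUTY_PER_BUILDING * SMALL_RT     # 151,200 RT
dedicated_chillers = BUILDINGS * (DUTY_PER_BUILDING + 1)     # 168 units
district_duty = math.ceil(total_load_rt / LARGE_RT)          # 76 units
district_chillers_2n = district_duty * 2                     # 152 units

energy_dedicated_gwh = total_load_rt * SMALL_KW_PER_RT * HOURS / 1e6
energy_district_gwh = total_load_rt * LARGE_KW_PER_RT * HOURS / 1e6
print(dedicated_chillers, district_chillers_2n)                 # 168 vs 152
print(round(energy_dedicated_gwh), round(energy_district_gwh))  # ~726 vs ~472 GWh
```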

考虑到现在云计算园区的制冷需求非常大,也可以设计成多个冷冻站,每个冷冻站承担33%的园区负荷,这样可以进一步降低投资成本,如图8。

Because the cooling demand of the cloud computing campus is very large, the design can be refined further into four centralized cooling plants, each sized for about 33% of the campus load, as shown in Figure 8. This lowers the initial capital cost further, and the chillers and cooling towers can be installed in stages so that the investment is spread out rather than made in one lump.

For example, dividing the 76 x 2,000 RT duty chillers by three gives roughly 26 chillers per plant, so four plants hold about 104 chillers plus their cooling towers and pumps, a further saving compared with the two-plant design, while the loss of any one plant still leaves enough capacity to carry the full campus load and each building continues to be fed from more than one plant.

图8:数据中心设置4个冷冻站示意图

Figure 8: data center with four centralized cooling plants.

Picture8-district-cooling-InnerMongolia2.png
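The same arithmetic generalizes to any number of plants sized so that the loss of one plant can be tolerated; a brief sketch (Python, using the 76 duty chillers derived above) reproduces the two-plant and four-plant counts.

```python
import math

def chillers_for_m_plants(duty_chillers: int, plants: int) -> int:
    """Size each of `plants` plants so that the remaining plants can carry the
    full load if any single plant is lost (each holds duty / (plants - 1))."""
    per_plant = math.ceil(duty_chillers / (plants - 1))
    return plants * per_plant

# 76 duty chillers of 2,000 RT, as derived above for the whole campus.
for m in (2, 3, 4):
    print(f"{m} plants: {chillers_for_m_plants(76, m)} chillers in total")
# 2 plants: 152, 3 plants: 114, 4 plants: 104
```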

  5. 结束语 Conclusion

相比单幢楼供冷,采用区域供冷的好处很多,比如节省用地,以及减少运行管理人员数量从而节省高额人工费,同时还采用了多种先进的节能措施来减小空调系统的能耗,比如冷机可以采用高压冷机降低配电能耗,也可以集中设置蓄冷罐,提升水系统可靠性等等。数据中心采用区域供冷不利的是初期的管路投资会增大,水系统较为庞大和复杂,一旦水系统故障,影响和波及面大,检修也不易,目前国内数据中心采用区域供冷案例基本没有,设计、建设和运维人员对区域供冷还比较谨慎,但随着设计技术的进步,建设水平和运维水平的提升,相信不久区域供冷会在大型数据中心推广使用。

 

Compared with per-building cooling, district cooling offers a campus-scale data center park many benefits: it saves land and floor space, reduces construction cost, cuts labour cost because fewer operations staff are needed, and lowers energy consumption through more efficient equipment and design. High-voltage chillers can be chosen to reduce power distribution losses, and centralized chilled water storage tanks can be added to improve the resiliency of the whole cooling system.

There are also disadvantages. The initial investment in campus-wide piping, valves and controls is higher; the water system is large and complex, so a failure can affect the chilled water supply to the whole park, and maintenance scheduling and coordination are harder given the scale and the potential impact if something goes wrong. There are currently almost no cases of district cooling being used for data centers in China, and designers, contractors, suppliers and operators remain cautious about the approach. As design techniques, construction quality and operational experience improve, and with more large data center campuses being planned in China's western and northwestern regions, district cooling should before long see wider use in large data center projects.

 

参考资料:

References:

  1. 清华大学:石兆玉 《供热系统分布式变频循环水泵的设计》

Tsinghua University, Shi Zhaoyu: Design of Distributed Variable-Frequency Circulating Pumps for Heating Systems

  2. China national standard GB 50174-2008

 

标签:供冷规模  独立供冷  区域供冷

Tags: cooling scale, dedicated cooling, district cooling

The original Chinese article is available on Modern Data Center Network 现代数据中心网.

About the Author and Translators:

The main author, Eric Ye (叶明哲), has worked with telecommunications facilities and data centers for over 24 years. He has published 18 papers on data center technology and applications, mostly on mechanical cooling plant and technology, and has spoken at data center forums.

The English translation was done by Ms Fiona Fei (DCD China) and James Soh (Newwit Consulting). The English content and context were edited by James Soh.