A view on the South East Asia data center market – part 1

  1. Overview of the South East Asia Data Center Market

The South East Asia data center market can be viewed as a single regional market, yet each country in the Association of South East Asian Nations ("ASEAN"; Papua New Guinea and East Timor are seeking to join) is a sovereign state that manages, licenses, and promotes the data center market within its own national boundaries.

The submarine cable map (see www.submarinecablemap.com) shows that Singapore has the most fibre connections in this region. Thailand and Malaysia also have a good number of submarine fibre connections, but because of their larger land mass and population, the bandwidth and speed with which users (enterprise, home, or mobile) can reach applications and content hosted outside their countries in the US and Europe are seriously bottlenecked. Singapore therefore serves as a good network hub for application and content (including CDN) access for its neighbouring countries.

In 2015 and so far this year, there has been a fair bit of news coverage of the Singapore data center market. There is much less English-language coverage of the rest of the South East Asia market, which surprises me, since Singapore, the network hub, has the smallest population and end-user base. I hasten to add that the growth of the Singapore data center market is fed by the tremendous growth of mobile data and IT usage in the populous nations of this region, especially Indonesia, Vietnam, Thailand, and the Philippines.

Growth is still healthy for Singapore, the network hub of South East Asia. Singapore is well positioned on several major factors: good telecommunications infrastructure, its role as a financial hub, a good pool of IT and telecommunications talent, political stability, and its status as the regional headquarters location for MNCs, with the exception of a few companies that have chosen Kuala Lumpur or Manila (e.g. Emerson, Microsoft).

Various research reports over the past three years put the Singapore data center market at around USD 1B in 2016, while the entire ASEAN data center market is estimated at between USD 2B and USD 3.7B. Across the various reports there is a wide margin of error, so a very rough mid point of the estimates puts the Singapore data center market at between 35% and 60% of the ASEAN data center market.

While Singapore is definitely a key hub for a set of core data centers, the growth of the Singapore data center market is for the most part due to growth in data processing and storage across the entire region. The ASEAN countries with huge, young, mobile-first workforces consume 3G/4G content and use social media apps to stay in contact and to enjoy entertainment, be it video or games; this will drive demand for data storage and processing within those countries themselves, meaning a hub-and-spoke strategy should be considered. Companies that serve these populous markets, for example streaming media players or game service providers, need a blend of hub + spoke + edge, with CDNs at the edge data centers keeping network speed near tip-top, so that users stick with their content or game rather than switch to another provider.

The major cloud service providers, AWS, Microsoft Azure, Google, and Aliyun, are already in Singapore and elsewhere in Asia Pacific, including China, Hong Kong, Japan, India, and Australia. It is foreseeable that they will build or collocate their IT and data storage equipment in the large developing markets of Indonesia, the Philippines, Thailand, Vietnam, and Malaysia to serve those countries' growing cloud demand.

It is also foreseeable that CDN players addressing video and game content will have a key role to play on the demand side of the colocation data center market.

  2. Background

Various research reports have stated that around 80% of the data stored today was generated in the last two years. This is even more pronounced in Asia, where Internet adoption and the recent rollout of 3G and 4G are growing at a fast pace.

The population size and the number of mobile subscribers (a good indicator of the mobile Internet user base) of the South East Asia countries are as follows (from Wikipedia or telecom reports):

Country | Population | 3G/4G subscribers | Remarks
Indonesia | 249 million (2013) | 106 million | Most populous country in South East Asia; 4G is starting to roll out.
Philippines | 101 million (2014) | 51 million | 4G take-up is slow.
Myanmar | ~53 million | 13 million | No 4G plan yet.
Vietnam | 89 million (2013) | 52 million | 4G licenses to be issued by end of 2016.
Thailand | 67 million (2013) | 58 million | 4G just launched in 2015.
West Malaysia | 22.5 million (2010) | 16 million (east and west Malaysia) | Most users on 3G.
Cambodia | ~15 million | 8 million | No 4G plan yet.
Laos | ~6.8 million | 3 million | No 4G plan yet.
Singapore | 5.4 million (2013) | 5 million | Most subscribers on 4G; 2G to be decommissioned in 2016.
Brunei* | 0.42 million (2013) | 43% of population | 4G take-up is slow.

* Full name: Brunei Darussalam.
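As a quick illustration of the table above, mobile penetration can be computed directly from the population and subscriber figures (a rough sketch; the census and subscriber years differ per country, so these are indicative ratios only):

```python
# Mobile penetration computed from the population and subscriber figures in
# the table above (figures in millions; census years differ per country, so
# these are rough, indicative ratios only).
data = {
    "Indonesia":   (249, 106),
    "Philippines": (101, 51),
    "Vietnam":     (89, 52),
    "Thailand":    (67, 58),
    "Singapore":   (5.4, 5.0),
}

for country, (population, subscribers) in data.items():
    print(f"{country}: {subscribers / population:.0%} 3G/4G penetration")
```

Thailand and Singapore are already near saturation, while Indonesia's sub-50% penetration hints at the mobile data growth still to come.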

  3. Singapore

The data center sector has enjoyed media attention, and the government launched the Singapore Data Center Park back in 2009. Singapore has the most data center square footage in this region despite being the second smallest country in ASEAN (excluding Papua New Guinea, which is applying for membership).

It has the largest number of international fibre connections, linking it through the South China Sea to Australia, Hong Kong, Japan, and the US west coast, and to the Middle East and Europe through the Malacca Strait. See the submarine cable map link for more information on the connectivity. All the other ASEAN countries route through Singapore even for their trans-Pacific submarine fibre links, with the exception of a couple of fibre links between Indonesia, East Malaysia, and Brunei.

Singapore is a mature data center market, with an estimated 3 million square feet of data center space and a decent occupancy rate. There is a good mix of foreign data center players (DRT, Equinix, Telstra-Pacnet, Global Switch, Telin) and local players (Singtel, Keppel, STT, 1-Net, Kingsland), plus many others. AWS, Google, and Microsoft have built their own standalone data center facilities in Singapore as well. Racks in Singapore are typically of medium power capacity, averaging between 6 and 8 kW per rack.

As in other developed countries, the government has played an active role, as evidenced by the creation of the Singapore Data Center Park project and by supportive policies attracting foreign data center operators to build their facilities here.

Much has been written about the Singapore data center market in news and research reports in 2015 and 2016. Please see the reference section for some of the links.

I have a few points to make here with regards to Singapore vis-a-vis other countries in South East Asia.

Firstly, the data center market in South East Asia is not a zero-sum game: the national markets are interlinked and mostly grow together. Take Singapore and Indonesia, for example: a lot of business is done between the two countries, and each has to generate, process, and store data, and thus each needs data centers.

Secondly, Singapore will not be able to accommodate all of the region's need for data center space and power. Singapore has limited land area, and despite the fairly short distances to its neighbouring countries, application, video, and the bulk of data processing and storage are best done at source or destination, i.e. closer to where the data is generated or consumed. Traffic generated or consumed by Indonesian businesses and people is best served locally.

Last but not least, it is cheaper to house the bulk of the data and the IT processing and storage equipment closer to the source/destination. Singapore will remain important as the region's financial hub and as the regional network hub. The data center market in Singapore is likely to enjoy healthy growth in its own right and in support of the faster regional data growth.

  4. Indonesia

Stable economic growth under the previous two governments and the focus on infrastructure development by the current administration under Jokowi are encouraging. One example is the USD 5.5B medium-speed rail project awarded to China (and funded by China) in October 2015, although the project is still moving along slowly.

Even more encouraging, Indonesia has opened dozens of business sectors to full or partial foreign ownership, making it even easier than Malaysia to incorporate a company without needing a local citizen as a board member, unlike some other South East Asian countries.

It is nevertheless prudent to exercise caution, as the Indonesian government may be slow in implementing these relaxed rules.

There is a lack of good dedicated data center facilities meeting Rated/Tier 3 and above, given that Rated 3 (TIA-942 nowadays describes data center tiers as Rated levels) requires a standalone, dedicated facility. Still, some data center players are established in Indonesia, mainly in and around Jakarta, though some, like IDC Indonesia and Moratelindo, operate in other cities as well:

  • TelkomSigma (parent of Telin)
  • NTT NexCenter, in a mixed-use building, plus the newly acquired NexCenter 2 (formerly CyberCSF)
  • Moratelindo, which has built six Nusantara Data Centers (NDC) in Medan, Batam, Palembang, Jakarta, Surabaya, and Bali, and states on its website that its Jakarta data center is TIA-942 compliant
  • Equinix, which partnered with the new local data center company PCI in 2013 to operate a local data center called JK1 in Jakarta
  • Nex DataCenter (not to be confused with NTT NexCenter)
  • IDC Indonesia (has six facilities; sold part of its shares in the Cyber facility to NTT)


Since 2015, there has been a flurry of announcements of data center acquisitions and joint ventures for new data centers:

  • NTT acquired the CyberCSF data center facility (renamed NexCenter 2).
  • In May 2015, the Indonesian conglomerate Lippo Group formed a joint venture with Japan's Mitsui, Graha Teknologi Nusantara (GTN), to build a data center 30 km east of Jakarta.
  • Equinix announced in February 2016 that it is doubling its Jakarta JK1 data center capacity (by 400 racks).

One issue is that land has to be owned by local companies, and I am not aware whether the recent relaxation of foreign ownership rules changes this. Nevertheless, there are plenty of large conglomerates that have both the land and the internal demand for good data center space.

Indonesia has in the past backtracked on its pro-investor stance, as with the 2007 ruling by the Business Competition Supervisory Commission (KPPU) against Singapore's Temasek Holdings (an investment arm of the Singapore Ministry of Finance), which, through SingTel and STT, held minority shares in two separate telecommunications carriers (see reference).

Initial take-up of data center space may be dominated by online and mobile game companies, online content companies, and the banking and financial sector, which is still trying to meet the Indonesian central bank directive issued in 2012 requiring compliance by October 2017. The central bank may well grant an extension, but banks are looking for good data center space, given that building new facilities will not be fast enough to meet the regulation. We understand that foreign banks in Indonesia are expanding their data center capacity and some have moved into new data center facilities.

Things are changing very fast, given that the market is only starting to take off in a big way in Indonesia.

What is holding back the data center market in Indonesia may not be the big items like land or money, but rather the scarcity of data center design specialists and of trained, knowledgeable data center infrastructure operations people. Understanding the telecommunications facilities available at a chosen site or building will also be very important, given that Indonesia's telecommunications sector is not fully liberalized.

The expected rapid growth in demand for data processing and storage from Indonesia's huge population and small businesses warrants a closer look by data center operators.

A geographically dispersed country with multiple large cities, like Indonesia, will likely see data center players that focus on a particular city or adopt a multi-city approach. Edge data center deployments make sense in countries with many smaller townships of under one million people.

However, Indonesia does suffer from poor coordination of efforts to promote and support foreign investment and data center operators setting up facilities in the country. There needs to be a central agency or authority with the mandate and power to manage the allocation of land and power, and to "nudge" the telecommunications carriers into providing the necessary fibre connectivity. Nevertheless, all signs so far show that the data center market in Indonesia is set to grow quickly in the coming few years.

End of Part 1

The rest of the series will cover the rest of the South East Asia data center market.

References:

  1. https://www.ericsson.com/res/docs/2015/mobility-report/emr-nov-2015-regional-report-south-east-asia-and-oceania.pdf
  2. https://www.bicsi.org/uploadedFiles/BICSI_Website/Global_Community/Presentations_and_Photos/Southeast_Asia/2012_SEA/2.1%20ASEAN%20Data%20Centre%20Market.pdf
  3. http://www.datacenterdynamics.com/design-build/how-will-south-east-asias-data-centers-look-in-2020/94687.fullarticle
  4. http://www.broad-group.com/reports/dc-seasia (5th Edition, Jan 2016)
  5. Why Now is the time to localize your mobile game for South East Asia. http://www.andovar.com/mobile-games-southeast-asia-localization/

Singapore

  1. http://www.dealstreetasia.com/stories/data-centres-a-growth-market-in-singapore-sea-ida-7244/
  2. http://www.businesstimes.com.sg/real-estate/singapore-a-small-landscape-with-a-large-vision-for-data-centres
  3. http://www.datacenterdynamics.com/design-build/how-will-south-east-asias-data-centers-look-in-2020/94687.fullarticle
  4. https://datacenternews.asia/story/booming-sea-data-center-market-undergoing-major-shift/
  5. https://www.dbs.com.sg/treasures/aics/pdfController.page?pdfpath=/content/article/pdf/AIO/AIO_2015/Sector-Report-009-TELCO-LOW-RES.pdf

Indonesia

  1. http://www.prnewswire.com/news-releases/frost–sullivan-enterprise-services-market-in-indonesia-is-expected-to-reach-us386-billion-by-2019-300042689.html
  2. http://articles.economictimes.indiatimes.com/2011-02-11/news/28540032_1_temasek-holdings-investment-firm-largest-sovereign-wealth-funds
  3. http://www.wsj.com/articles/indonesia-opens-more-big-businesses-to-foreign-investment-1455185389
  4. http://asia.nikkei.com/Politics-Economy/Economy/Widodo-shifts-up-a-gear-to-pursue-FDI?page=2
  5. http://www.wsj.com/articles/indonesia-ministry-cites-high-speed-railway-shortcomings-1454507248
  6. https://www.pwc.com/id/en/publications/assets/banking-survey-2015.pdf
  7. http://en.finance.sia-partners.com/apac-onshoring-case-indonesia
  8. http://www.datacenterdynamics.com/content-tracks/design-build/equinix-partners-with-indonesian-firm-for-jakarta-data-center/70057.fullarticle
  9. https://datacenternews.asia/story/equinix-doubles-jakarta-data-center-capacity-adds-partners-iix-connection/

A Clearer Cloud in China?

Posted on 21st May 2016.

Background:

I have been writing mostly about the data center market in China. In 2015, I was in charge of the data center side of the Cloud and Data Center business unit of a Chinese private enterprise (let's just call this company CPE) that has minority shares held by the local district government investment bureau, and I was general manager of a cloud joint venture between the company and a large state-owned enterprise in another province.

CPE applied for and received the IDC license through a fairly difficult route. Because one minority shareholder (<2%) had switched from Chinese to foreign citizenship, CPE could not apply for an IDC license itself. Its shareholders, minus that particular shareholder, had to invest in another ISP company ("RH") that had been operating for a few years, and RH applied for and received the IDC license. There are currently estimated to be more than 600 Chinese companies holding the IDC license.

On 25th December 2015, the Ministry of Industry and Information Technology ("MIIT") of China announced the Classification Catalogue of Telecommunications Services ("电信业务分类目录", downloadable from the URL in reference 1), which stated that beginning 1st March 2016, cloud service providers come under the IDC license scheme.

B11  互联网数据中心业务

互联网数据中心(IDC)业务是指利用相应的机房设施,以外包出租的方式为用户的服务器等互联网或其他网络相关设备提供放置、代理维护、系统配置及管理服务,以及提供数据库系统或服务器等设备的出租及其存储空间的出租、通信线路和出口带宽的代理租用和其他应用服务。

互联网数据中心业务经营者应提供机房和相应的配套设施,并提供安全保障措施。

互联网数据中心业务也包括互联网资源协作服务业务。互联网资源协作服务业务是指利用架设在数据中心之上的设备和资源,通过互联网或其他网络以随时获取、按需使用、随时扩展、协作共享等方式,为用户提供的数据存储、互联网应用开发环境、互联网应用部署和运行管理等服务。

The words in bold, translated: "IDC services also include Internet resource collaboration services, i.e. services that use equipment and resources built on top of data centers to provide users, via the Internet or other networks, with data storage, Internet application development environments, and Internet application deployment and operations management, on an on-demand, elastically scalable, collaboratively shared basis." This brings Infrastructure-as-a-Service, the most basic offering of all cloud service providers, under the licensing scheme.

This clearly states that cloud service providers are now required to hold an IDC license, and it sent Chinese cloud service providers scrambling to apply. Among those that applied for and received the IDC license are Aliyun, Huawei, UCloud, and Inspur. In fact, Aliyun (the cloud service arm of Alibaba) only received its license in March 2016.

Foreign cloud service providers do not have a direct route to operate in China

There is no avenue for a foreign cloud service provider to apply for the IDC license. The only foreign companies allowed are those registered in Hong Kong and Macau, and even these can hold only up to 50% of the company that owns the IDC license.

Wait, aren't a number of foreign cloud service providers already operating in China, including the market leaders (as of 2015) AWS and Microsoft Azure? They operate by having local partners such as Sinnet or 21Vianet buy their cloud technology and offer the cloud services locally. Some operate more overtly, while others act in the capacity of a cloud infrastructure or technology provider, which sits in something of a grey area because they "partner", as in the case of Microsoft with its Azure cloud.

The IDC license has been reopened for applications since 2012, but only to wholly (100%) local Chinese companies. Bringing local Chinese cloud service providers under this class of license while keeping foreign cloud service providers from directly offering cloud services is not a good sign of opening up.

In the January 2015 guideline by the Chinese State Council on promoting cloud computing and growing the information technology industry ("国务院关于促进云计算创新发展培育信息产业新业态的意见", see reference), main point number 7 states, and I quote:

“(七)积极开展国际合作。
  支持云计算企业通过海外并购、联合经营、在境外部署云计算数据中心和设立研发机构等方式,积极开拓国际市场,促进基于云计算的服务贸易发展。加强国内外企业的研发合作,引导外商按有关规定投资我国云计算相关产业。鼓励国内企业和行业组织参与制定云计算国际标准。”

Roughly translated:

(7) Actively pursue international cooperation.

Support cloud computing enterprises in actively opening up international markets through overseas acquisitions, joint ventures, and the deployment of cloud computing data centers and R&D institutions overseas, and promote the development of cloud-based service trade. Strengthen R&D cooperation between domestic and foreign enterprises, and guide foreign investment into our country's cloud computing industry in accordance with the relevant regulations. Encourage domestic enterprises and industry organizations to participate in setting international cloud computing standards.

I interpret the paragraph above as being about getting Chinese cloud computing enterprises to go out and conquer the world, while foreign cloud computing players must abide by existing rules and regulations, which forbid direct foreign participation in the market. Since 2015, Aliyun has opened data centers in the US (two of them), Singapore, Australia, and India.

At the moment, it seems the Chinese government tacitly allows AWS and Microsoft Azure, among others, to provide their cloud technology or services indirectly. The inclusion of cloud service providers under the IDC license scheme does warrant active communication and exchange with the regulators.

Reference:

  1. http://www.miit.gov.cn/n1146285/n1146352/n3054355/n3057709/n3057714/c4564270/content.html
  2. http://www.gov.cn/zhengce/content/2015-01/30/content_9440.htm
  3. http://www.idcquan.com/Special/idcpolicy/
  4. http://www.raincent.com/content-10-457-1.html


CENTRALIZED DISTRICT COOLING PLANT FOR LARGE SCALE DATA CENTER 大型和超大型数据中心空调水系统供冷规模设计

作者:叶明哲   Author: Eric Ye

翻译:苏旭江,费晓霞   Translated by: James Soh, Fiona Fei

English editing by: James Soh

摘要:数据中心水冷系统采用何种形式和规模建设,直接关系到数据中心建设投资的成本和运行的安全;本文主要对水系统供冷的规模和冗余情况进行阐述和探讨,并提出在大型数据中心基地可以采用区域供冷方式,设立独立的区域供冷中心,从而降低数据中心空调系统总投资和提升数据中心空调系统可用性。

Abstract: The construction and investment cost and the operational security of a data center are directly related to the choice and scale of its cooling system. This article discusses the scale and redundancy of chilled water cooling systems, and suggests that large data center campuses could use district cooling, i.e. a separate district cooling center, which reduces the total investment cost and enhances the resiliency of the overall cooling system.

 

  1. 数据中心空调水系统规模 Scale of the Data Center Cooling System for Large Data Center Projects

在大型数据中心,多幢数据机楼组成庞大的数据中心群机楼,选择制冷中心的数量和制冷规模是必须要考虑的一个问题,这直接关系到数据中心的建设成本和空调系统可用性。制冷规模可以采用单幢数据机楼供冷或区域供冷。如中国电信在建的云计算内蒙古园区,就由42幢楼组成,每幢楼约18,000M2,需要多个供冷中心。

A huge data center campus can consist of many data center buildings, and choosing the number and scale of its cooling plants is a question that directly affects construction cost and cooling system availability. Cooling can be provided per building or as district cooling per cluster. For example, the Inner Mongolia Cloud Computing Campus being built by China Telecom consists of 42 buildings, each of about 18,000 m², and requires multiple cooling centers.
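To put the scale of that campus in perspective, simple arithmetic on the figures quoted above gives the total floor area:

```python
# Scale of the Inner Mongolia Cloud Computing Campus, using the figures
# quoted above: 42 buildings of roughly 18,000 m2 each.
buildings = 42
area_per_building_m2 = 18_000

total_m2 = buildings * area_per_building_m2
print(f"Total campus floor area: {total_m2:,} m2 ({total_m2 / 1e6:.2f} million m2)")
```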

 

  2. 独立供冷(单幢机楼供冷) Dedicated cooling system (individual building cooling system)

就是每一幢机楼设置一个单独的制冷机房,该制冷机房只对自己这幢楼进行供冷。单幢机楼供冷系统比较简单,这有利于系统的维护和检修,当水系统发生故障时,只对该楼设备造成影响,不会影响到别的机楼,故影响面较小,是目前数据中心普遍采用的方式,下图1是独立供冷示意图:

In this model, each data center building has its own dedicated mechanical plant room whose cooling capacity serves only that building's IT load. A dedicated per-building cooling system is relatively simple, which makes maintenance and repair easier: when a failure occurs in the chilled water system, it affects only that building and not the others. Because of this limited impact, dedicated per-building cooling is the most widely used model in the data center industry. Figure 1 below shows two separate dedicated cooling systems for two data center buildings.

Figure 1 (dedicated cooling plant per each data center building):

Picture1-district-cooling-dedicated.png

但对于多幢机楼组成的数据中心,需要每个机楼均搞一个制冷机房,如云计算内蒙园区,按这种方式需要建42个独立的制冷中心。这种方式导致制冷机房较多,相对占地面积较大,由于制冷机组多,操作维护工作量较大;而且各个供冷中心内部,为了安全,也需要考虑冗余和备份,导致投资过大。

But for a large campus with many buildings, each needing its own cooling plant, such as the Inner Mongolia Cloud Computing Campus, this model means 42 separate dedicated cooling systems, which occupy a large area and carry a large infrastructure investment cost. With so many cooling units, the operation and maintenance burden is inevitably high. And since redundancy and availability must be considered for each building separately, the investment cost of per-building dedicated cooling in a campus environment is high.

 

2.1.  独立供冷的系统冗余

System redundancy (dedicated building cooling)

如果是A级机房(T4),水管管路必须是两个独立的系统,每个系统可以独立承担单幢楼数据中心所有的热负荷,运行时两个系统必须同时在线运行,单个系统故障不会对数据中心产生任何影响,这就是系统冗余。每个系统都独立承担100%的热负荷,这就是N+N系统冗余,如图2,但是这样投资很大。

For a Class A computer room (per the China GB 50174 standard, roughly equivalent to T4), there must be two independent chilled water systems and pipe networks, each able to carry the full heat load of the building on its own, and both must run online simultaneously. Should one system fail, the other can still cool the entire building; this is what we call system redundancy. When each system independently carries 100% of the heat load, this is N+N system redundancy, as shown in Figure 2. But it costs a lot more.

 

Figure 2 (N+N cooling systems):

picture3-district-cooling-2N.png

2.2.  组件冗余

Component redundancy

如果不满足系统冗余,仅仅是部分组件故障有冗余,就叫组件冗余。B级机房(T3),水系统管路也需要设计为两个系统,但是主机和末端可以公用,运行可以采用主备用方式进行,支持有计划的系统检修;组件冗余就是系统中常用的组件考虑冗余,如水泵采用N+1方式,冷机采用N+1方式,冷却塔采用N+1方式,机房空调采用N+X方式,这些就是组件冗余。

If full system redundancy is not met and only certain components are redundant, this is called component redundancy. A Class B computer room (China GB 50174, roughly equivalent to T3) also requires two separate chilled water pipe systems, but the chillers and terminal units can be shared between them, running in active-standby mode with support for planned system maintenance. Component redundancy means the commonly used components are made redundant: pumps in N+1, chillers in N+1, cooling towers in N+1, and computer room air conditioners in N+X. In other words, the system is concurrently maintainable.

2.3.  系统冗余和机组冗余投资比较

Investment comparison between system redundancy and component redundancy

采用高标准,势必会带来投资的增大。采用系统冗余的投资很大,从纯正的字面理解,双系统可能是单系统200%的投资,但如果合理设计系统冗余,达到A级标准(T4)的同时,也是可以大幅降低初期的投资费用。

Adopting a higher standard inevitably increases investment. System redundancy (i.e. a dual system, or 2N) costs more than component redundancy; taken literally, a dual system could mean 200% of the investment of a single system. With a reasonable redundancy design, however, the initial investment can still be substantially reduced while meeting a 2N, Class A (GB 50174) standard.

对于B、C级机房,机组不需要系统冗余,只需要考虑机组的冗余,一般采用的N+X 冗余,X=1~N,从实际运行来看,当N值较少时(N<4),2台机组同时出现故障的几率非常低,x取1基本已经可以应对突发故障情况。对于部分重要机房,不严格按照A级机房设计的,而又需要提高可靠性或者负载扩容的,可以先按照N+1配置,但预留扩容一台机组的位置。

For Class B and C computer rooms (GB 50174 definitions), system redundancy is not required; only component redundancy, typically N+X (X = 1 to N), is needed. In practice, when N is small (N < 4), the probability of two units failing at the same time is very low, so X = 1 is generally enough to handle sudden failures. For some important computer rooms that are not strictly designed to Class A requirements but need higher reliability or load expansion, an N+1 configuration can be installed first, with space reserved for one additional unit.
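The rule of thumb that X = 1 suffices for small N can be checked with a simple binomial model. This is an illustrative sketch assuming independent failures; the 1% per-unit failure probability is an assumption, not a figure from the article:

```python
from math import comb

# Probability that at least two of N independent chillers are down at the
# same time -- the scenario that N+1 redundancy cannot cover. The per-unit
# failure probability p is an illustrative assumption.
def prob_at_least_two_failed(n: int, p: float) -> float:
    # Complement of "zero or one unit failed" under a binomial model.
    p_none = (1 - p) ** n
    p_one = comb(n, 1) * p * (1 - p) ** (n - 1)
    return 1 - p_none - p_one

p = 0.01  # assumed: 1% chance a given chiller is down at any moment
for n in (2, 3, 4):
    print(f"N={n}: P(2 or more down) = {prob_at_least_two_failed(n, p):.6f}")
```

Even at N = 4 the probability stays below 0.1% under these assumptions, which supports the text's observation that X = 1 is enough when N < 4.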

 

  3. 区域集中制冷 District Cooling

单幢机楼供冷有一个缺点,就是1幢楼有一个制冷中心,如果数据中心够大,那建设的供冷中心就会足够多,如云计算内蒙云园区,按照单幢楼供冷的特点,需要42个供冷中心,而且各个数据中心内部需要冷机、水泵、冷塔、管路的冗余和备份,这些备份和冗余在各个数据中心之间无法实现共享,导致设备投资的大量浪费。以T4标准的数据中心举例,每幢楼建2个独立的水系统,2幢楼就需要4个独立系统;如果能够以两幢数据中心为单元进行数据中心建设, A楼的水系统作为B楼的备份,B楼的水系统作为A楼的备份,这样系统就会简单的多,每个楼只需要建一个水系统就可以了。

 

The dedicated per-building cooling model has a cost disadvantage at campus scale, because each building has its own cooling plant room, especially when 2N system redundancy is chosen. The Inner Mongolia Cloud Computing Campus, planned for 42 data center buildings, would need 42 active cooling systems plus 42 standby systems, each with its own chillers, pumps, cooling towers, and piping. However, if we treat two data center buildings as a single entity at the design phase, so that building A's cooling system is building B's backup and vice versa, the number of cooling systems is cut from 84 down to 42.

区域供冷是指若干数据机楼统一由两个或几个专门的大型制冷中心进行供冷,通过管道把制取的冷冻水送到每一幢数据中心机楼,如图3,如果数据中心区域供冷,相比民用区域供冷,优势更为明显:数据中心发热量大,建筑集中,该种方式统一设置冷冻站,减少机组占地面积,集中供冷有利于提高机组的负荷率,冷源效果高,可获得更大的能效比;其次数据中心集中冷源站占地少,降低了冷源设备的初投资;另外数据中心区域供冷减少了机组备用数量,相对减少机组的投资;集中操作和运维,易于优化控制和维护管理,可以减少运维人员。

District cooling refers to the centralized production and distribution of chilled water: a number of buildings rely on two or more large dedicated cooling centers, as shown in Figure 3, with chilled water delivered to each building via insulated underground pipelines. Compared with district cooling for residential districts, the advantages for data centers are even clearer. First, because of a data center's high heat load and the compact clustering of its buildings, district cooling saves space by combining all the cooling systems into one or a few dedicated centralized plants; the aggregated, higher heat load allows chilled water to be produced by higher-capacity chiller plants running at better load factors, which improves the energy efficiency ratio. Second, the smaller footprint and the larger, centralized chillers reduce the initial investment in cooling equipment.

In addition, district cooling reduces the number of cooling systems required in reserve, which further lowers the investment cost.

For example, if four data center buildings each require a dedicated cooling system in a 3+1 configuration, that makes 16 cooling systems in total (16 of everything: chillers, cooling towers, pumps, and so on). Combining and aggregating them into two centralized cooling plants, each in a 6+1 configuration, reduces the number of redundant units and saves 2 cooling systems overall. And if the cooling capacity required in each building is only slightly more than 2 chillers' worth but has to be rounded up to 3 chillers to meet the need (N), then combining the systems offers even more opportunity to reduce the total number required.
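The chiller-count arithmetic in the example above can be written out explicitly (a sketch of the 4-building, 3+1 versus two-plant, 6+1 comparison described in the text):

```python
# Chiller-plant count comparison from the example above: four buildings with
# dedicated 3+1 plants versus two centralized 6+1 plants serving the campus.
buildings = 4
duty_per_building, spare_per_building = 3, 1

dedicated_total = buildings * (duty_per_building + spare_per_building)

# Centralized: the same 12 duty chillers (4 buildings x 3) are pooled into
# two plants, each carrying a single spare (6+1).
plants = 2
centralized_total = buildings * duty_per_building + plants * 1

print(f"Dedicated: {dedicated_total} units, centralized: {centralized_total} units, "
      f"saved: {dedicated_total - centralized_total}")
```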

Centralized operation is good for optimal control and maintenance management. Fewer staff are needed.

If there is a significant difference between daytime and nighttime electricity tariffs, a district cooling system also opens up the option of ice storage: special chillers make ice during the cheaper night hours, and the stored ice is melted to provide cooling during the expensive daytime hours, achieving further cost savings. A caveat is that this load shifting consumes more energy overall, since extra energy is expended to freeze the water and ice-making chillers are not very efficient.
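To make the trade-off concrete, here is a small sketch with assumed numbers (the tariffs and the 20% ice-making penalty are illustrative assumptions, not figures from the article):

```python
# Day/night tariff arbitrage with ice storage. Tariffs and the ice-making
# energy penalty are illustrative assumptions.
day_tariff = 1.00    # assumed relative daytime energy price per kWh
night_tariff = 0.30  # assumed relative nighttime energy price per kWh
ice_penalty = 1.20   # assumed: making and melting ice uses ~20% more energy

chiller_energy_kwh = 1000.0  # daytime cooling energy shifted to night

cost_daytime_chilling = chiller_energy_kwh * day_tariff
cost_ice_storage = chiller_energy_kwh * ice_penalty * night_tariff

print(f"Daytime chilling cost: {cost_daytime_chilling:.0f}")
print(f"Ice storage cost:      {cost_ice_storage:.0f}")
```

Under these assumed numbers the energy bill drops sharply even though total energy consumption rises by 20%, which is exactly the caveat described above.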

Figure 3:

Picture4-district-cooling-piece

区域供冷系统实践的技术已经非常成熟。日本是对区域供冷系统实践较早的国家,技术、经验相当成熟,采用区域供冷后,日本新宿新都和日本东京晴海Triton广场就采用区域供冷供热系统,该些区域供冷供热系统的年均一次能源能耗COP达到1.19,高居日本全国区域供冷系统第1、2位。

在新加坡,区域供冷系统案例包括购物商城加商用楼的滨海购物中心,樟宜商业区,纬壹城等项目。在建的新加坡西部的数据中心区也包括区域供冷系统,前期建设还没有实施这个区域供冷系统。

The technology behind district cooling is very mature. Japan was an early adopter of district cooling, and its technologies and experience are well established. District cooling and heating are used at Shinjuku Shintoshin and at Harumi Triton Square in Tokyo; these systems achieve an annual average primary-energy COP of 1.19, ranking first and second among district cooling systems in Japan.

In Singapore, district cooling is used in projects such as Marina Square (shopping mall plus commercial buildings), Changi Business Park and one-north (including Biopolis). District cooling was also considered for the Singapore Data Centre Park launched in 2013 in western Singapore, which will eventually accommodate 7 data center buildings, although it was not implemented in the initial phase of construction.

Table 1:

table1-district-cooling-piece.png

在中国,广州大学城区域供冷是亚洲规模最大、世界第二大区域供冷系统,图4,整体整个系统建有4个冷站,空调负荷主要是10所高校的教学区和生活区大楼以及两个中心商业区供冷。整个系统由冷站、空调冷冻水管网及末端供冷系统三个子系统组成。该区域供冷总装机容量10.6万冷吨,蓄冰规模26万冷吨,供冷总建筑面积达350万m2。

In China, Asia's largest and the world's second largest district cooling system cools the entire Guangzhou University Town. As shown in Figure 4, there are four chiller plants (in blue) in the district cooling and distribution system, responsible for cooling the teaching and residential buildings of 10 universities and two central commercial districts. The whole system consists of three subsystems: the chiller plants, the air-conditioning chilled-water pipe network and the terminal cooling systems. The installed gross capacity of the district cooling system is 106,000 RT, its ice storage capacity is 260,000 RT, and the total building area it serves is 3.5 million square meters.

Figure 4:

Picture4-district-cooling-GZ.png

 

  1. 数据中心的区域供冷设计 District cooling design for Data Centers

对于数据中心机楼群,之前已经提过因为数据机楼集中,而且发热量大,采用区域供冷更具有优势,图5,在建设有热电冷三联供区域的地方,也可以采用溴化锂制冷,降低电网压力;在能够利用自然冷源的场合,如利用低温湖水供冷的,也可以考虑采用区域供冷方案。从区域供冷的实际使用情况来看,水管的距离不宜过长,否则会导致水泵耗能增加,最好控制在4公里以内,流量不宜太大,最好采用二级泵设计,这样可以降低水泵的消耗,这些条件均适合数据中心。

As mentioned earlier in this article, district cooling centralizes the cooling systems of the data center buildings and achieves higher efficiency thanks to the aggregated, higher heat load. As shown in Figure 5, if on-site CCHP (combined cooling, heating and power, also known as tri-generation) is chosen, lithium bromide absorption chillers can be used to ease the draw on the power grid. Where a free cooling resource such as cold lake water is available, it can also be incorporated into the district cooling scheme. Practical experience with district cooling shows that the chilled-water pipe run should not be too long, ideally within 4 km, or pump energy consumption will rise, and the flow rate should not be too high. A two-stage (primary/secondary) pump design is preferred to reduce pump energy consumption. These conditions all suit data centers.
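The 4 km guideline can be illustrated with basic pump hydraulics: hydraulic power is proportional to flow times head, and friction head grows with pipe length. All figures below are assumptions for illustration, not design values:

```python
RHO, G = 1000.0, 9.81   # water density (kg/m3), gravitational acceleration (m/s2)
FLOW = 0.5              # chilled-water flow in m3/s (assumed)
HEAD_PER_KM = 8.0       # friction head loss in metres per km of pipe (assumed)
PUMP_EFF = 0.75         # combined pump and motor efficiency (assumed)

def pump_kw(length_km: float) -> float:
    """Pumping power (kW) needed to overcome friction over a loop of given length."""
    head_m = HEAD_PER_KM * length_km
    return RHO * G * FLOW * head_m / PUMP_EFF / 1000.0

print(pump_kw(4.0))  # at the suggested 4 km limit
print(pump_kw(8.0))  # doubling the loop length doubles the pump power
```

Because pump power scales linearly with head (and head with length), every extra kilometre of pipe is a permanent operating cost, which is why the pipe run is kept short and the flow rate moderate.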

Figure 5:

Picture5-district-cooling-piece

如果考虑系统冗余,数据中心区域供冷就需要两个制冷中心,组成2N系统,每个冷冻站制冷能力可承担100%负荷,如图6:

When system redundancy is considered, a district-cooled data center will need two cooling plants to form a 2N system, with each chilled-water plant able to carry 100% of the cooling load, as shown in Figure 6.

Figure6 (2N):

Picture2-district-cooling-centralized

中国电信云计算内蒙园区最终将有42幢机房楼,如果每幢机楼单独配一个冷冻站,每个冷冻站安装4台1200冷冻水机组(三主一备),相当于建设42个冷冻站,一百六十八台冷冻机组,由于每个机楼负荷不同,这些冷冻站输出也不相同,导致冷机效率也不仅相同,如1200冷吨离心机组满载效率为0.6KW/冷吨;当部分负荷时,效率下降为0.8KW/冷吨,如果采用区域供冷,设立二个冷冻站,如图7,就可以达到T4标准,而且采用2000冷吨以上的离心机组,冷机满载效率上升到0.52KW/冷吨,故可以节约大量的能耗,另外两级泵设计,可以降低输配系统的能耗,加上冬季采用冷却塔供冷技术,节能的效果会更加明显;另外可以节约多达42台备用冷机的投资,同样的相应的备用冷却塔、备用水泵的投资也可以降下来。

There will eventually be 42 data center buildings in China Telecom's Inner Mongolia Cloud Computing Campus when fully completed. If each building has its own plant room with four 1,200 RT chiller units (3 duty plus 1 standby), the campus will need 42 plant rooms and 168 chiller units with their associated pumps and cooling towers, of which 42 chillers (plus matching cooling towers and pumps) sit on standby. Furthermore, because each building carries a different heat load, the plants deliver different outputs and therefore run at different efficiencies: a 1,200 RT centrifugal chiller rated at 0.6 kW/RT at full load drops to 0.8 kW/RT at partial load.

If instead we use district cooling and set up two centralized cooling plants as shown in Figure 7, the campus can meet the most stringent resiliency level. We can then choose centrifugal chillers of 2,000 RT or more, whose full-load efficiency improves to 0.52 kW/RT, saving a great deal of energy. A two-stage pump design further reduces the energy consumption of the water distribution system, and the piping can be fitted with electronically controlled bypass valves so that in the cooler seasons the cooling towers alone provide cooling without running the chillers, improving energy efficiency significantly.

The total number of 2,000 RT chillers required in a full 2N design is 76 x 2 = 152 units. Even so, there are savings in both investment and operating cost: 16 fewer chillers, cooling towers and associated pumps than the per-building design, while the overall cooling system resiliency improves from N+1 to 2N.
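The chiller counts quoted in the last two paragraphs can be re-derived as follows (a sketch of the arithmetic in the source, not new data):

```python
import math

buildings = 42
duty_per_bldg, standby_per_bldg = 3, 1
small_rt, large_rt = 1200, 2000   # per-building vs centralized chiller sizes (RT)

# Per-building design: 42 x (3+1) chillers
per_building_total = buildings * (duty_per_bldg + standby_per_bldg)   # 168

# Centralized 2N design: the same duty capacity from 2,000 RT units, duplicated
duty_capacity_rt = buildings * duty_per_bldg * small_rt               # 151,200 RT
duty_large = math.ceil(duty_capacity_rt / large_rt)                   # 76
two_n_total = duty_large * 2                                          # 152

print(per_building_total, two_n_total, per_building_total - two_n_total)  # 168 152 16
```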

Figure 7: data center with two central cooling plants

Picture7-district-cooling-InnerMongolia1.png

考虑到现在云计算园区的制冷需求非常大,也可以设计成多个冷冻站,每个冷冻站承担33%的园区负荷,这样可以进一步降低投资成本,如图8。

The cooling capacity required for the cloud computing campus is huge, which allows us to refine the design further into four centralized cooling plants, each carrying 33% of the campus cooling load, as shown in Figure 8. This reduces the initial capital cost, and the chiller units and cooling towers can be installed in stages to smooth out the investment rather than committing it in one lump.

For example, the 76 x 2,000 RT duty chillers divided by 3 come to roughly 26 chillers per centralized plant, or 104 chiller units in total (plus associated cooling towers and pumps) across the four plants, a further saving over the two-plant design, while each data center building still effectively sees a 2N cooling supply.
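The four-plant variant works out the same way (again just re-deriving the figures above):

```python
import math

duty_chillers = 76       # 2,000 RT duty chillers needed for the full campus
plants = 4               # each plant sized to carry ~33% of the load
per_plant = math.ceil(duty_chillers / 3)   # any 3 of the 4 plants must cover 100%

total = plants * per_plant
print(per_plant, total)  # 26 104
```

Going from two plants at 100% each (152 chillers) to four plants at 33% each (104 chillers) trades some plant-level redundancy margin for a substantially lower equipment count.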

图8:数据中心设置4个冷冻站示意图

Figure 8: data center with four centralized cooling plants.

Picture8-district-cooling-InnerMongolia2.png

 

  1. 结束语 Conclusion

相比单幢楼供冷,采用区域供冷的好处很多,比如节省用地,以及减少运行管理人员数量从而节省高额人工费,同时还采用了多种先进的节能措施来减小空调系统的能耗,比如冷机可以采用高压冷机降低配电能耗,也可以集中设置蓄冷罐,提升水系统可靠性等等。数据中心采用区域供冷不利的是初期的管路投资会增大,水系统较为庞大和复杂,一旦水系统故障,影响和波及面大,检修也不易,目前国内数据中心采用区域供冷案例基本没有,设计、建设和运维人员对区域供冷还比较谨慎,但随着设计技术的进步,建设水平和运维水平的提升,相信不久区域供冷会在大型数据中心推广使用。

Compared with the dedicated per-building cooling model, district cooling offers many benefits for a campus-scale data center park. Firstly, it saves land and space, construction cost, labour cost (fewer operations staff are required) and energy consumption, through more efficient designs and equipment. Moreover, high-voltage chillers can be chosen to reduce power distribution losses, and chilled-water storage tanks can be centralized to enhance the resiliency of the entire cooling system.

However, district cooling also has disadvantages, such as a higher initial investment in the campus-wide water pipes, valves and controls. Should the large, complex cooling system fail, it would have a major impact on the chilled-water supply to the entire data center park. Scheduling and coordination of maintenance are also more complex, given the scale and the potential impact should anything go wrong. There are as yet almost no cases of district cooling in data centers in China, and data center and M&E designers, contractors, suppliers and operators remain cautious about the approach. But with more large-scale data center campuses under consideration in the western and north-western regions of China, and the various stakeholders gaining more exposure and experience, it will not be long before district cooling sees greater use in large data center projects.

 

参考资料:

References:

  1. 清华大学:石兆玉 《供热系统分布式变频循环水泵的设计》

Shi Zhaoyu (Tsinghua University): Design of Distributed Variable-Frequency Circulating Water Pumps for Heating Systems

2. China National GB 50174-2008 Standard

 

标签:供冷规模  独立供冷  区域供冷

Tags: cooling scale, dedicated cooling, district cooling

Original Chinese Article available on: Modern Data Center Network 现代数据中心网

About the Author and Translators:

The main author, Eric Ye (叶明哲), has worked with telecommunications rooms and data centers for over 24 years. Eric has published 18 papers on data center technology and applications, mostly on mechanical cooling plant and technology, and has spoken at data center forums.

The English translation was done by Ms Fiona Fei (DCD China) and James Soh (Newwit Consulting). Editing of the English content and context is by James Soh.

CENTRALIZED DISTRICT COOLING PLANT FOR LARGE SCALE DATA CENTER 大型和超大型数据中心空调水系统供冷规模设计

China Data Centers: Some Kind of Special

Published 11 May 2016

This is the third in a series of posts by me and a colleague on the China data center industry. You can find the earlier posts in this series on my LinkedIn under my posts or via https://newwitblog.wordpress.com/

There are few articles that write about China data center facilities that are energy efficient or significant in some way. We hear of the latest, largest and greenest in Europe or the US, but seldom about those in Asia Pacific. As part of our series on China data centers, we have put together a piece on some special China data centers.

Many such posts claim the latest, the largest thus far, or the most energy efficient, but the criteria are seldom clear. Here is our attempt to shortlist and highlight some facilities that we think are significant.

First off, the focus is China Data Centers, so the facility has to be located within China mainland.

Secondly, it should be of sizable scale: above 2,000 racks or 10,000 sq meters. However, size alone does not qualify a facility if energy efficiency was not a major consideration for the owner when planning, designing or implementing it.

Most importantly, by a special kind of data center we mean one that has at least one of the following attributes:

  • Internal energy efficiency measures beyond hot/cold aisle placement of racks and hot/cold aisle containment, e.g. free air cooling
  • Taking advantage of renewable energy
  • Any design or build strategy that makes it more efficient
  • A new business model (it has to be significant, and best if it is unique to China, like the way Alibaba changed how we shop)

Amongst the data center projects highlighted here, a couple of common themes emerge:

  • Energy efficiency translates into direct energy cost savings through renewable power sources in certain parts of China, where major energy projects are directed by the central or provincial governments. These are evident in the north-west and western regions of China.
  • Energy efficiency measures are taken seriously by the big users (Baidu, Alibaba, Tencent) rather than by third-party data center service providers.

On the second point, the author has seen data center lease contracts whereby the design and implementation are dictated by the big user and carried out by the third-party data center service provider (e.g. Watone for Alibaba’s QianDaoHu data center), in new builds as well as retrofits. This is a significant trend towards cost savings, i.e. squeezing more of the power budget into IT compute/storage/network.

Let us begin looking at these special data centers, which are in fact mega data centers in most cases.

1. Inner Mongolia Hohhot Cloud Data Center Zone (内蒙古云计算基地)

inner-mongolia-cloud-data-center-plan

City/Location:

Hohhot, Inner Mongolia

District:

Helingeer (He Lin Ge Er) Economic Zone: Shengle Park (盛乐园区) and Hongsheng Development Zone (鸿盛开发区; the latter has no English name that the author can find. The Hongsheng High-Tech Industrial Development Zone, 鸿盛高新技术产业开发区, is closer to the airport, about 2.9 km away)

Size, location, owner:

Three government-owned carriers (China Telecom, China Unicom, and China Mobile) took the lead in 2012 to build their cloud data center parks in Hohhot city. While the cool climate allows free air cooling, sandstorms pose a challenge. A co-generation plant is a feature of this zone project. To help ensure speedy delivery of servers, Inspur, the state-owned server manufacturer, has built a server manufacturing plant in the zone. CBC, a privately funded capital investment firm, will also build a cloud data center park within the zone. The investment announced to date is about 70 billion RMB.

Background:

This is a central government sponsored project: the central government is pushing the three government-owned carriers and the government research institutions that use cloud/data centers to make Hohhot one of their main data center hubs in China.

The central government is pushing more government entities to support the Inner Mongolia Cloud Data Center Zone, seemingly in support of the economy and stability of Hohhot and Inner Mongolia itself. Take-up by Baidu/Alibaba/Tencent is not significant in this zone, although we heard each of them has taken a suite or two. The Inner Mongolian government is said to have taken up 50% of the initial phase of the data center park built by China Telecom.

Having the three dominant telco carriers set up their network and data centers in this cloud data center zone ensures that network connectivity to the main hubs, in this case mainly Beijing, should not be an issue.

What’s so special, i.e. what are the energy efficiency measures?

Low electricity utility cost. There are large-scale solar power plants built by both government and private entities. Electricity costs less than half of Beijing's utility charges, while the fibre optic cables laid by the three telco carriers ensure network bandwidth and speed.

Free air cooling, both direct and indirect are used in the data center facilities by the data center facility owners.

The only issue is that take-up by enterprises and the BAT (Baidu, Alibaba, and Tencent) isn’t that “hot”. China Telecom’s Cloud Data Center Park in the Hohhot Cloud Data Center Zone is planned as a 42-building campus. Phase 1 comprises 4 data center buildings, and although the park has been operational since 2013, there is no immediate plan to bring up subsequent phases.

The Central Scientific Research Institute has created a Cloud Research Project which will not only have a cloud data center in Hohhot but also run research projects, such as how cloud can enable or grow the farming industry.

2. AliCloud QianDaoHu (Qiandao Lake) Data Center

134619660_14421271821921n

ali-qiandaohu-solar-pannels

City/Location:

ChunAn county, QianDaoHu town, close to HangZhou, Zhejiang, China (青溪新城位于淳安县城千岛湖镇东面)

District:

QianDao Lake district, 千岛湖风景区

Size, location, owner:

30,000 sqm building floor area in an 11-storey building (quite rare for a data center), meant to house 50,000 servers.

This AliCloud data center is built and operated by a third party data center builder and facility operator called Watone Cloud Data (http://www.farvin.com.cn/) for Alibaba.

Background:

Alibaba's headquarters is in HangZhou, ZheJiang province, and it has multiple data centers in and around HangZhou and elsewhere in China. This facility uses several energy efficiency measures in its design and, it is hoped, its operation, targeting a PUE below 1.3, which was previously thought impossible at HangZhou's latitude.

What’s so special, i.e. what are the energy efficiency measures?

QianDaoHu was chosen for its cool lake water: at a depth of about 60 meters, the water temperature stays around 13-17 degrees Celsius all year round. The lake water is used for indirect cooling through heat exchangers, and the heat-absorbed water is returned to the lake via a specially dug 2.5 km channel with greenery along its banks, a good environmental consideration.

Another measure is 3,000 sqm of roof space for solar panels, yielding on average 300 kW of power. Because solar panels generate DC, this suits Alibaba well: its AliRack takes 240 VDC input, which is more efficient because it eliminates the DC-to-AC conversion. While this is insufficient for the whole facility, it contributes cost savings on the energy front.

Alibaba’s Advanced Data Center Module (ADCM) technology is used, with DC power input to the server racks. Hot-aisle containment is used, and the hot-aisle air is scavenged to warm the office area during the cold season. The racks are mainly AliRack, designed to house 30% more compute than traditional 19-inch racks. AliRack is one of the outcomes of the China Open Data Center Alliance, the new name for Project Scorpio, which is similar in nature to the Open Compute Project.

By feeding one server input with DC power and leaving the other input directly on utility power, energy loss is kept minimal compared to a traditional dual-UPS arrangement.
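As a rough sketch of why the DC feed loses less energy, compare the efficiency of a double-conversion UPS path with a single rectification stage. The percentages below are assumptions for illustration, not measured figures:

```python
# Traditional double-conversion UPS: AC -> DC (rectifier) -> AC (inverter)
rectifier_eff = 0.95   # assumed
inverter_eff = 0.95    # assumed
ups_path = rectifier_eff * inverter_eff   # losses compound across both stages

# DC feed to the rack: a single AC -> DC rectification stage
dc_path = 0.96         # assumed

print(ups_path, dc_path)
```

Even with generous assumed stage efficiencies, the double conversion compounds its losses, which is the gap the DC-input rack design removes.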

http://www.businesswire.com/news/home/20150908005493/en/AliCloud-Launches-Energy-Efficient-Qiandao-Lake-Data-Center

http://baidu.ku6.com/watch/08914215392542114792.html?page=videoMultiNeed

  3. Tencent Tianjin New District Data Center 腾讯天津数据中心

 tencent-tianjin-night-26154431599

Description of the site:

Tencent, the owner of QQ and Wechat, has a Tianjin New District data center comprising two data center buildings and two office buildings.

City/Location:

Tianjin City. Tianjin is a city directly under the Central Government.

District:

Tianjin Binhai New District 天津滨海新区

Size, location, owner:

A data center compound owned by Tencent in the Binhai New District economic zone. The compound has two data center buildings (Phase 1 in building #2 and Phase 2 in building #3) and two office buildings, with 93,000 sqm of building floor area. It is said to be able to house 200,000 servers.

Background:

This site is one of Tencent's main processing hubs (the other is in ShenZhen) for QQ, Wechat, QQ/Wechat games, video and financial transaction processing, especially for the northern half of China.

What’s so special, i.e. what are the energy efficiency measures?

Free air cooling. The first phase used direct free air cooling; however, given the smog plaguing Tianjin, Tencent switched to indirect free air cooling for Phase 2. Phase 1 uses more traditional cooling through a raised floor, while Phase 2 consists entirely of Tencent's data center modules with DC power input, giving higher power efficiency: the input power goes through a single AC-DC conversion rather than Phase 1's traditional UPS chain (AC-DC -> DC-AC).

The annual (i.e. 365 days of running) PUE is already 1.3 for both buildings, and Phase 2's PUE is expected to drop further as its capacity fills up (Phase 2 is less than two years old, while Phase 1 was built in 2010). This is impressive when compared with many claimed PUE figures.
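For readers unfamiliar with the metric: PUE is total facility energy divided by IT equipment energy, ideally measured over a full year. A minimal sketch with assumed annual figures:

```python
it_energy_kwh = 10_000_000        # annual IT equipment energy (assumed)
facility_energy_kwh = 13_000_000  # annual total facility energy (assumed)

# PUE = total facility energy / IT energy; 1.0 would mean zero overhead
pue = facility_energy_kwh / it_energy_kwh
print(pue)  # 1.3 -- every 1 kWh of IT load costs 0.3 kWh of overhead
```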

http://www.csdn.net/article/2015-01-20/2823638

http://ndc.cnw.com.cn/ndc-newdatacenter/htm2014/20140921_313138.shtml

 

4. Tencent Shanghai QingPu data center

tencent-qingpu-1450330095107.png

20151224003731_42362.jpg

Description of the site:

Tencent’s QingPu data center complex was built by China Telecom’s Shanghai Telecom subsidiary. It is not clear whether China Telecom also operates the facility.

City/Location:

Shanghai

District:

QingPu district, Shanghai

Size, location, owner:

A complex of four data center buildings plus one office building housing the command center and IT support services. Each data center building has two storeys, with each floor occupying about 6,000 sq meters. Each building houses 900 racks, for a total of 3,600 racks across the complex, arranged in Tencent's micro-modules (微模块).

Background:

Beyond its north and south processing hubs (Tianjin and Shenzhen), Tencent has been planning and executing large data centers in ShanWei (汕尾), ChongQing (重庆), and Shanghai (上海). The QingPu data center is a sizable footprint in Tencent's capacity expansion plan. Unlike its previous projects, which it either leased or self-built, Tencent has adopted a collaborative development approach for QingPu with Shanghai Telecom (a subsidiary of China Telecom).

What’s so special, i.e. what are the energy efficiency measures?

The micro-module approach has the racks cooled by in-row cooling in a cold-aisle containment configuration. The IT racks are powered by one utility AC feed and one DC UPS feed, which keeps electrical energy loss minimal.

On the roof of each data center building is a 3,000 sqm space for solar panels that will generate about 300 kW of electricity to supplement the utility supply from the 35 kV substation.

Furthermore, a 6MVA tri-generation plant is being constructed that will assist in providing lower cost electricity and chilled water during the day.

Tencent's target PUE for this site is 1.4 (it has not yet operated for the full year required for an annual PUE measurement), which is quite a challenging target but should be achievable given the various energy efficiency features.

http://news.idcquan.com/news/81946.shtml

http://www.testlab.com.cn/Index/article/id/1105.html

 

5. Ningxia ZhongWei City

zhongwei-windfarm-20131226013311513

zhongwei-amazon-U12587P308T37D52558F956DT20150916165256

Description of the site:

ZhongWei is blessed with plenty of sunshine throughout the year, and five solar power plants (at least one of them privately built and run) supply power to the grid, in addition to some wind power plants. The ZhongWei and Beijing city governments have promised direct fibre connectivity between the two cities for data center projects that locate in ZhongWei.

City/Location:

ZhongWei City, NingXia Hui Autonomous Region (same status as a province)

District:

ZhongWei City

Size, location, owner:

Three different sizable projects are taking place in ZhongWei city.

The earliest data center project, announced in 2014, is a set of three Amazon Web Services data centers (the picture with people in the foreground), located in a triangle several kilometres apart across three sites within ZhongWei city. They are expected to become operational by 3Q 2016.

Another project is a China Mobile data center complex that was just announced in early 2016.

Last but not least is a data center complex invested in by Cybernaut; news media have mentioned that QiHu360 and Alibaba will lease the Cybernaut facilities. Cybernaut is said to manage government capital.

Background:

For power and connectivity, the Beijing and ZhongWei city governments are supporting the Western Cloud Base concept 西部云基地.

The new projects announced by Cybernaut and China Mobile seem to indicate that ZhongWei has attracted sizable data center investments.

What’s so special, i.e. what are the energy efficiency measures?

None of the data centers is fully operational yet; the AWS data centers are the first to be constructed and are said to reach 19,200 racks when fully occupied. Nevertheless, the data centers in ZhongWei will enjoy cheaper power from renewable sources (solar and wind) and, outside summer, free air cooling opportunities thanks to the cooler climate.

http://www.10086.cn/aboutus/news/pannounce/nx/201603/t20160330_60977.htm

http://tech.xinmin.cn/2015/09/12/28567371.html

 

6. ZhangBei, Hebei Province

zhangbei-s_25474e44d7604951a782f4454cde021d

Description of the site:

Compared to ZhongWei, the town of ZhangBei is fairly new to attracting data center projects. It got noticed because Alibaba announced it will build sizable data center facilities here.

City/Location:

North of ZhangJiaKou City, Hebei Province

District:

ZhangBei county, north of ZhangJiaKou City

Size, location, owner

Alibaba is building at least two data center buildings in ZhangBei county. The construction and operation of the facilities is by Shanghai Athub, a state-owned enterprise that specializes in building and operating data centers.

Background:

Compared to ZhongWei, which announced the AWS project in 2014 yet will only see it operational by 3Q 2016, Alibaba's speed of construction and fit-out is fast: it announced in mid-2015 that it would locate a new data center in ZhangBei, and the latest news is that the first data center will be operational by the end of 2Q 2016.

What’s so special, i.e. what are the energy efficiency measures?

zhangbei-ali-s_5068578e83d143ae96cd6d20c917b287

The data center building is said to be covered with solar panels wherever feasible. It will draw power from a utility grid with some of the lowest prices in the country, thanks to ZhangBei's many wind and solar power plants.

We are fairly sure that the Alibaba data center here will feature AliRack, with one utility AC supply and one DC UPS supply to the IT racks, in a micro-module cold-aisle containment arrangement.

Furthermore, indirect free air cooling is likely to be used as well to cool the IT racks.

http://tech.xinmin.cn/2015/10/18/28769500.html

http://www.zhangbei.cn/xinwen/zhangbei/xinwen_19059.html

 

7. GuiZhou Big Data Hub Plan

0361b25ae6faa081e3433db269dee2a1

Description of the site:

GuiYang City, Guizhou Province

City/Location:

GuiYang City, GuiZhou Province

District:

Gui’an, GuiYang City

Size, location, owner

The three China carriers and some big data players have announced plans to build big data processing centers (which sit on top of data centers) in GuiZhou.

Background:

GuiZhou province has been designated by the China central government as a big data hub for the western provinces, with central government support in pushing state-owned companies to place one of their big data processing centers there. Incidentally, GuiYang city in GuiZhou enjoys a pretty good climate, with moderate temperatures from the mid-10s to the low 20s Celsius for most of the year, giving new data centers the ability to use free air cooling techniques.

While GuiZhou has an abundance of power generation capacity, the government still allows and supports on-site co-generation, which is a first. Another first is GuiZhou's online power trading market, which allows heavy power users to purchase power at lower cost and pushes generation companies to be more efficient and cost competitive.

 

07.jpg

One of the signature projects is the Foxconn tunnel data center, shown in the two pictures in this section.

What’s so special, i.e. what are the energy efficiency measures?

As mentioned above, GuiYang's moderate climate (mid-10s to low 20s Celsius for most of the year) gives new data centers the ability to use free air cooling techniques.

Furthermore, the air quality is good enough that direct free air cooling can be used, which is more efficient and less complex, leading to cost savings in implementing free air cooling.

The Foxconn data center uses containerized data center modules, with fans drawing outside air in at one end of the tunnel and extracting exhaust air from the other end.

http://www.ce.cn/cysc/tech/gd2012/201605/11/t20160511_11453129.shtml

http://www.datacenterdynamics.com/news/chinas-new-big-data-hub/85687.fullarticle

http://www.datacenterdynamics.com/design-build/foxconn-wants-to-build-your-data-centers/93813.fullarticle

 

Worth Mentioning

Wanda ChengDu ShuangLiu Data Center

This is significant more because a property company sees big data as a way to transform its physical retail business into online business, rather than a property firm simply entering the data center colocation business. Wanda is the largest property company in China, owning hundreds of Wanda malls and Wanda housing projects throughout the country. Wanda's latest concept is to marry physical shopping with online shopping and to use big data analysis to create more value for its retail business, both through Wanda itself and with partners. The Wanda ChengDu ShuangLiu Data Center will house Wanda's own data center and also offer colocation service (about 1,500 racks in total) to its partners, which include the tens of thousands of shops within its malls as well as online retailers. It is a gutsy project for a property company to take on. The project was completed at the end of 2015 and has become operational. http://news.winshang.com/news-534109.html

 

What’s in the series on the China Data Center Market

The China Data Center market series, not necessarily in the order shown, will cover the following topics:

  1. A view on the China Data Center Market – Part 1 of 2
  2. A view on the China Data Center Market – Part 2 of 2
  3. A look at China Data Centers: Some Kind of Special
  4. Technical Advancement in China Data Center Market
  5. China Data Center Market – Foreign Data Center Players
  6. China Data Center Market – China Data Center players
  7. Beijing Data Center market
  8. Shanghai Data Center market
  9. GuangZhou and ShenZhen Data Center market
  10. ChongQing+ChengDu Data Center market
  11. and other China cities that are considered promising growth markets

 

Reference (some of these links are in Chinese):

  1. http://www.datacenterdynamics.com/colo-cloud-/inner-mongolia-an-emerging-region-for-cloud/66676.fullarticle
  2. http://www.wokeji.com/wlw/zxzz/201511/t20151127_1965780.shtml
  3. http://www.chinadcc.org/
  4. http://www.idcun.com/
  5. http://www.businesswire.com/news/home/20150908005493/en/AliCloud-Launches-Energy-Efficient-Qiandao-Lake-Data-Center

 
