The problem with the (use of) PUE in the Data Center industry


In a previous post I covered the reporting and use of PUE, including the terms iPUE, PUE3, dPUE and so on (https://www.linkedin.com/pulse/data-center-resource-efficiency-pue-isoiec-30134-james-soh-%E8%8B%8F%E6%97%AD%E6%B1%9F).

Much like the way Tier / Facility Class / Rated levels are cited loosely in the industry, often without making clear which standard a facility was designed to, or whether it has actually been certified, this fuzziness helps neither potential clients nor the industry as a whole. To clarify, I take no issue with a data center stating that its facility is designed in accordance with a particular standard, since any serious potential client should, and will, carry out a detailed review and audit of the facility before committing to a colocation deal.

The issue I want to highlight in this post is the use of designed PUE (dPUE) in place of PUE, whether for marketing or even for setting policy. dPUE is itself an estimate (see the example case in ISO/IEC 30134-2) and is inherently imprecise. The actual PUE3 and the dPUE can differ by a wide margin, because the IT load of a new data center facility will normally not ramp up to anywhere near 100% for a long time.
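To see why the gap appears, here is a minimal sketch of how interim PUE degrades at partial IT load. It is my own simplified model, not taken from ISO/IEC 30134-2, and every figure in it (design load, fixed losses, proportional losses) is an assumption for illustration only: facility overhead is split into a fixed component and a component that scales with IT load.

```python
# Minimal sketch (illustrative assumptions, not measured data): interim PUE at
# partial IT load, assuming facility overhead = fixed losses + losses that scale
# with IT load.

def interim_pue(it_load_kw: float, fixed_overhead_kw: float, proportional_factor: float) -> float:
    """PUE = total facility power / IT power."""
    overhead_kw = fixed_overhead_kw + proportional_factor * it_load_kw
    return (it_load_kw + overhead_kw) / it_load_kw

DESIGN_IT_LOAD_KW = 1000    # assumed full design IT load
FIXED_OVERHEAD_KW = 80      # assumed base losses (UPS, transformers, lighting)
PROPORTIONAL_FACTOR = 0.25  # assumed cooling/distribution loss per kW of IT load

for utilisation in (0.2, 0.5, 0.8, 1.0):
    it_kw = utilisation * DESIGN_IT_LOAD_KW
    print(f"{utilisation:>4.0%} IT load -> PUE {interim_pue(it_kw, FIXED_OVERHEAD_KW, PROPORTIONAL_FACTOR):.2f}")

# 20% IT load -> PUE 1.65, 50% -> 1.41, 80% -> 1.35, 100% -> 1.33
```

Even with a fixed loss of only 80 kW on a 1 MW design, the same facility that would report 1.33 at full load reports 1.65 at 20% utilisation, which is why a designed PUE and an early-life PUE3 can sit so far apart.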

This encourages the owner of a yet-to-be-built data center to claim a low dPUE. After all, it is an estimate; who is to say the figure of 1.13 is wrong? You want to check my calculations? Talk to my design consultants, who are the ones that worked out that number (at my insistence that they assume the best-case situation in order to arrive at a low dPUE).

The ban announced by Beijing on new data centers with a PUE of 1.5 or above really refers to designed PUE. And since it is a designed PUE, a lot of leeway goes into estimating a low figure. Who is going to shut off the power after the facility has been designed, the equipment selected and the site built, once it is operating at well below full capacity and therefore yielding a poor actual interim PUE? There are many ways to make the dPUE figure work to your advantage; see reference 1.

You may ignore ancillary power usage, assume a very low predicted mechanical load, or cite the most power-efficient chiller in the design and then choose a less efficient one when you actually purchase the equipment. Or you may base your dPUE on the PUE1 or PUE2 way of calculating, which makes the number look slightly better. It all adds up (or rather, subtracts).

[Chart: PUE at design load. Credit: CCG Facilities, http://www.ccgfacilities.com/insight/detail.aspx?ID=18]

From my experience of operating and auditing more than a dozen data centers, I have seen very crude designed PUE estimations as well as some better ones.

The thing is that the designed PUE always looks too good, and that usually stems from the following (see the sketch after this list):

  • Not including some of the data center infrastructure losses
  • Not including electrical losses in the cabling (typically around 3%)
  • Assuming installed equipment performs exactly to factory specifications, with no allowance for tolerance
  • Estimating on a PUE1 basis, i.e. with IT load taken at the UPS output, whereas PUE2 or PUE3 is the recommended basis
  • Assuming ideal environmental conditions, when conditions over 12 months in a real data center will be sub-optimal
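To make the measurement-boundary point concrete, here is a minimal sketch with assumed loss figures. PUE1 (basic) measures the IT load at the UPS output, PUE2 (intermediate) at the PDU output, and PUE3 (advanced) at the input of the IT equipment, so downstream PDU and cable losses are counted as IT load in PUE1 but as overhead in PUE3. All the kW figures below are assumptions for illustration.

```python
# Minimal sketch (assumed figures): the same facility reported at the three
# PUE measurement boundaries.

TOTAL_FACILITY_KW = 1400     # assumed total facility power
IT_AT_EQUIPMENT_KW = 1000    # assumed power reaching the IT equipment (PUE3 boundary)
CABLE_LOSS_KW = 30           # assumed cable losses between PDU and IT equipment (~3%)
PDU_LOSS_KW = 15             # assumed PDU losses

it_pue3 = IT_AT_EQUIPMENT_KW                                # measured at IT equipment input
it_pue2 = IT_AT_EQUIPMENT_KW + CABLE_LOSS_KW                # measured at PDU output
it_pue1 = IT_AT_EQUIPMENT_KW + CABLE_LOSS_KW + PDU_LOSS_KW  # measured at UPS output

for name, it_kw in (("PUE1", it_pue1), ("PUE2", it_pue2), ("PUE3", it_pue3)):
    print(f"{name}: {TOTAL_FACILITY_KW / it_kw:.3f}")

# PUE1 ≈ 1.340, PUE2 ≈ 1.359, PUE3 = 1.400: one facility, three different numbers.
```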

A friend of mine who works at a data center colocation service provider laments that their honesty earned them a lower category in a green data center award than others in the same city that claimed lower dPUE figures and received higher awards. It may not be entirely down to the lower dPUE figures, but they play a part.

Clients are not fools, and a colocation service provider that claims such a low dPUE will find it tougher to negotiate colocation service contracts: in some countries the power bill recovery is tied to the actual PUE, which clients will expect to match the claimed dPUE as the facility approaches full utilization. This will eat into the provider's profits.

Ultimately, what matters is the real PUE3, measured over a period of 365 days at the actual client IT load, and a 100% leased-out colocation data center, which amounts to full endorsement by the clients. Nothing speaks louder than the ka-ching of the cash register; no billboard outside will take money out of the wallets of potential clients. It is the combination of design, equipment selection, measurement and reporting, tight operations, continuous monitoring and enhancement, and people that makes a well-run, well-respected data center facility with a happy clientele and a growing colocation business. Playing with dPUE gets some attention, but delivering the service consistently, with clients taking up more and more of your data center space, is the real indicator of a healthy data center business.
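For completeness, here is a minimal sketch of what an annual PUE3 looks like when computed from cumulative energy over twelve months, in line with the energy-based approach of ISO/IEC 30134-2; the monthly figures are assumptions of mine, not real site data.

```python
# Minimal sketch (assumed monthly figures): annual PUE3 as the ratio of cumulative
# facility energy to cumulative IT energy measured at the IT equipment input,
# rather than a single best-case spot reading.

monthly_energy_kwh = [  # (facility_kwh, it_kwh) per month, illustrative only
    (1_050_000, 760_000), (980_000, 720_000), (1_010_000, 735_000),
    (1_060_000, 750_000), (1_120_000, 770_000), (1_180_000, 780_000),
    (1_200_000, 790_000), (1_190_000, 785_000), (1_130_000, 775_000),
    (1_070_000, 760_000), (1_020_000, 745_000), (1_040_000, 750_000),
]

total_facility_kwh = sum(facility for facility, _ in monthly_energy_kwh)
total_it_kwh = sum(it for _, it in monthly_energy_kwh)
print(f"Annual PUE3 = {total_facility_kwh / total_it_kwh:.2f}")  # ~1.43 with these figures
```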

It is my hope that awards for energy-efficient data centers will be based on actual PUE rather than designed PUE.

Reference:

  1. http://www.ccgfacilities.com/insight/detail.aspx?ID=18
  2. https://www.greenbiz.com/article/new-efficiency-standard-challenges-data-center-status-quo
  3. http://www.datacenterknowledge.com/archives/2009/07/13/pue-and-marketing-mischief/
  4. ISO/IEC 30134-2, Part 2: Power usage effectiveness (PUE) – http://www.iso.org/iso/home/store/catalogue_tc/catalogue_tc_browse.htm?commid=654019

Hyperscale, 3rd party colocation service providers and the enterprise data center


Published 22 January 2017

Last November, I attended the DataCenterDynamics Zettastructure conference in London. There were a number of workshops on the Open Compute Project (OCP), and one topic in particular stood out: how OCP will impact third-party colocation players in Europe. To me, by extension, the same issue is faced by data centers in Asia when considering OCP-type racks.

The OCP website says that "The Open Compute Project is …. More efficient, flexible and scalable". The question is: for whom? At the moment, OCP designs are meant for the hyperscale data centers, i.e. those used by Facebook, Yahoo!, Microsoft and the like.

One benefit cited by OCP vendors is the speed of implementing compute/storage capacity: the capacity arrives on site ready to plug in. There should not be any rack-on/rack-off work needed other than plugging in the power.

In the United States, Facebook, Yahoo! and Microsoft have large facilities (be they first-party or third-party custom-built sites) that are designed and built to accommodate hyperscale deployment, and these sites take OCP racks without major issue.

The thing is, most sites in the rest of the world are not planned, designed or implemented to accommodate thousands of OCP racks. In the workshop I participated in, colocation service providers were asking the OCP data center project members what the average power draw of an OCP rack is, so that their private suites or colocation halls could accommodate some limited quantity of OCP racks.

When I talked to data center engineers from the Baidu-Alibaba-Tencent trio, they said their Project Scorpio (now the Open Data Center Committee, or ODCC) racks are designed to fit into the top few data center facilities in first- and second-tier Chinese cities, on average capping per-rack power at 7 kW when going into a third-party colocation facility. This philosophy means their asset-light deployment of Scorpio racks with hot/cold aisle containment can go as planned in nearly every city where they want to deploy compute/storage capacity.
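To illustrate the sort of fit check a colocation provider would run before admitting such racks, here is a minimal sketch; the hall power budget, rack counts and design average are all assumptions of mine, with only the 7 kW per-rack cap taken from the conversation above.

```python
# Minimal sketch (assumed hall figures): how many ~7 kW ODCC/OCP racks fit into a
# colocation hall that was designed around 5-6 kW average racks.

HALL_IT_POWER_BUDGET_KW = 600  # assumed usable IT power budget for the hall
HALL_RACK_POSITIONS = 100      # assumed physical rack positions
DESIGN_RACK_KW = 6             # assumed design average per rack
ODCC_RACK_KW = 7               # per-rack cap cited above

odcc_racks = 20  # proposed number of ODCC racks in the hall (assumption)
remaining_kw = HALL_IT_POWER_BUDGET_KW - odcc_racks * ODCC_RACK_KW
remaining_positions = HALL_RACK_POSITIONS - odcc_racks
standard_racks = min(remaining_positions, int(remaining_kw // DESIGN_RACK_KW))

print(f"{odcc_racks} ODCC racks draw {odcc_racks * ODCC_RACK_KW} kW, "
      f"leaving power for {standard_racks} standard racks "
      f"in the remaining {remaining_positions} positions.")
# 20 ODCC racks draw 140 kW, leaving power for 76 standard racks in the remaining 80 positions.
```

The point is simply that a hall provisioned at 5 to 6 kW per position can host only a limited quantity of higher-density racks before the power budget, rather than the floor space, becomes the constraint.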

The other issue with OCP / ODCC racks is that they are mainly designed for hyperscale data center use, which means the largest users of IT hardware, the enterprises, are so to speak "missing out" on the benefits of quick deployment of IT capacity. Data centers in Asia, whether colocation space or enterprise data centers/computer rooms, are mostly provisioned at around 5 to 6 kW per rack (references 4, 5 and 6).

Be it Baidu-Alibaba-Tencent or Facebook, Yahoo! and Microsoft, these OCP / ODCC racks will not benefit the enterprises unless they accommodate the demands of the enterprise data center. Currently, enterprise IT does not see much benefit in OCP / ODCC, because it does not look at its compute/storage needs on the scale of the current OCP / ODCC clients. However, I believe this will change. Enterprise IT talks about software/app deployment first and compute/storage last, and this creates pressure on the data center folks to quickly get space and racks ready, and on the IT capacity folks to procure servers/storage/network to add to the current pool. Until the OCP / ODCC vendors think in terms of the way enterprise IT works, which I predict they will, the enterprise data center market will not warm up to the OCP / ODCC vendors.

This is also why I think the OCP vendors will not limit their offerings to the Internet giants. They will need to design their hardware with the enterprise market in mind, because it is much larger than the Internet giants: for example, designing their racks (which include compute/storage/network gear) in stepped loads of, say, 6, 8 and 10 kW, and thinking about how enterprise IT will consume them, i.e. on a per-rack, per-project or per-enterprise-private-cloud basis. A new OCP vendor I spoke to in London said that, given the competition and the limited customer pool of hyperscale data centers, they want to sell to the enterprises. Sooner or later, we will see some sort of OCP / ODCC racks designed for deployment by enterprises into enterprise data centers and also third-party colocation data centers.

 

Reference:

  1. http://www.opencompute.org/about/
  2. http://www.opendatacenter.cn/
  3. http://searchdatacenter.techtarget.com/feature/Hyperscale-data-center-means-different-hardware-needs-roles-for-IT
  4. http://www.datacenterdynamics.com/content-tracks/power-cooling/watts-up/94463.fullarticle
  5. http://ww2.frost.com/news/press-releases/australian-data-centre-market-offers-sizeable-growth-opportunities-says-frost-sullivan/
  6. http://asia.colt.net/services/data-centre/about-colt-data-centres/tdc2/

 

 
