Data Centers That Scale
You hear the buzz: 150 watts per square foot, 200 watts per square foot, more than 300 watts per square foot... Is it real? If so, what does it mean in terms of resources? Are data center server, application, and communication systems at risk in the event of even a single mechanical or electrical systems failure?
It is a topic data center operators cannot avoid. Servers continue getting denser, and the need to power and cool large, dense system implementations has given us interesting challenges. With good planning we can certainly overcome those challenges; however, we also need to understand the true cost of higher watts per square foot, both in real estate budget and in risk.
Let's look at a 10,000 square foot data center. The task is to understand the space requirements to build the infrastructure needed to support a 100 watt, 150 watt, and 200 watt per square foot (watt/sqft) facility. To frame the task, we will assume the 10,000 sqft space is gross, with no space lost to common areas, columns, or other obstructions. For this discussion we will also not account for the space required to support emergency power generators or cooling towers.
Cooling a High Density Data Center
Data center cooling is potentially the biggest concern of all. While we may be able to add redundancy to cooling towers, it is very difficult to add redundancy to air handling units. Physically you could add a +1 cooling unit in a data center space; however, the unit would need to immediately take over for an individual CRAC in a location-sensitive environment. Unless you can move a 20 or 30 ton CRAC unit on demand, you have exposure.
With a raised floor that exposure is reduced, as the intention is to pressurize the raised floor area with cold air that is blown up into the supply side of server equipment. Having a standby or backup CRAC unit can contribute to overall floor pressure. For plenum HVAC equipment on a VCT (solid) floor, this is much more difficult, as nearly all high density installations with plenum air handling units will have custom designs, including custom ducting connected to the units.
At more than 150 watts/sqft you will have very little time to respond once a unit has failed, as the supply sides of the affected equipment will have no directed cold air. In addition, hot air return systems may also fail, causing stagnation in hot areas that will further support hot air recirculation.
This risk can best be minimized through aggressive preventive maintenance schedules and having adequate temporary cooling units on hand in the event of a failure or emergency.
Cooling is calculated in terms of British Thermal Units (BTUs), the amount of heat that can be removed from a space with the assistance of heating, ventilating, and air conditioning (HVAC) equipment. To calculate cooling tonnage, use the following formula:
1 watt = 3.412 BTU/hr
12,000 BTU/hr = 1 ton of cooling capacity
If you have a group of high density servers, you can use the following guidelines to calculate the cooling requirement (a short script after the examples automates the conversion):
Example
1 server = 2,000 watts
40 servers = 80,000 watts
80,000 watts * 3.412 = 272,960 BTU
272,960 BTU / 12,000 = 22.74 tons cooling requirement
As another example, if you have a 100 sqft cage and have built your cage out to 175W/sqft, you would have the following cooling requirement:
100 * 175w = 17,500w
17,500w * 3.412 = 59,710 BTU
59,710 / 12,000 = ~5 tons cooling
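If you prefer to script the conversion rather than work it by hand, a minimal sketch follows (in Python, using only the constants above; the helper name cooling_tons is ours, not from any standard library):

# Watts-to-cooling-tons conversion, using the constants from the formulas above.
WATTS_TO_BTU_HR = 3.412      # BTU/hr of heat produced per watt of load
BTU_HR_PER_TON = 12_000      # BTU/hr per ton of cooling capacity

def cooling_tons(load_watts):
    """Return the cooling requirement in tons for a heat load given in watts."""
    return load_watts * WATTS_TO_BTU_HR / BTU_HR_PER_TON

print(cooling_tons(40 * 2_000))   # 40 servers at 2,000 watts each -> ~22.7 tons
print(cooling_tons(100 * 175))    # 100 sqft cage at 175 W/sqft -> ~5 tons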
Space Requirements for Mechanical Equipment
Higher density data center spaces come at a cost, in electricity and in space needed for both mechanical (HVAC) and electrical distribution. If we look at the space requirements for air handling units, using an Emerson 30-ton unit as an example, the space needed to support this unit is about 94 square feet. The unit itself is about 3 ft x 10 ft (30 sqft); adding space for access and maintenance (3 ft along the edges and 4 ft in front of the unit) brings the total to roughly 94 sqft.
So, on the mechanical side, every 30 tons of cooling needed will contribute at least 94 sqft to cooling. If you need +1 redundancy in your cooling plan, you will lose another 94 sqft for each redundant unit.
Let's put this into an example, accounting only for the space needed to support HVAC equipment (a small estimator sketch follows the numbers). We'll assume the water piping to support condenser or chilled water loops runs overhead or under a raised floor.
10,000 sqft at 200w/sqft
2,000,000 watts requiring cooling
2,000,000 * 3.412 = 6,824,000 BTU
6,824,000 / 12,000 = ~569 tons cooling
569 / 30 (using 30-ton CRAC units) = 19 units
19 * 94 (sqft/unit) = 1,786 sqft required for CRAC units
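The same arithmetic, combined with the 94 sqft-per-unit figure, can be rolled into a quick space estimator. This is a minimal sketch assuming 30-ton units and rounding the unit count up; the function name and return format are ours:

import math

WATTS_TO_BTU_HR = 3.412
BTU_HR_PER_TON = 12_000
TONS_PER_CRAC = 30       # 30-ton CRAC units, as in the example above
SQFT_PER_CRAC = 94       # unit footprint plus access and maintenance clearance

def crac_space(floor_sqft, watts_per_sqft, redundant_units=0):
    """Estimate cooling tons, CRAC count, and the floor space those units consume."""
    tons = floor_sqft * watts_per_sqft * WATTS_TO_BTU_HR / BTU_HR_PER_TON
    units = math.ceil(tons / TONS_PER_CRAC) + redundant_units
    return tons, units, units * SQFT_PER_CRAC

print(crac_space(10_000, 200))      # ~569 tons, 19 units, 1,786 sqft
print(crac_space(10_000, 200, 1))   # with a +1 redundant unit: 20 units, 1,880 sqft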
The cost in electricity is summed up below; a rough conversion to kVA follows the list:
30-ton CRAC unit w/2 compressors = 110 amps at 480V for peak use
30-ton CRAH unit w/25 HP fan motor = 23 amps at 480V
600-ton cooling tower = 50 amps at 480V
600-ton water chiller (if needed for a chilled water system) = 1,200 amps at 480V
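To get a feel for what those amp ratings mean in power terms, the rough sketch below converts them to apparent power. It assumes three-phase 480V service and ignores power factor, so treat the results as ballpark figures rather than nameplate data:

import math

def three_phase_kva(amps, volts=480):
    """Apparent power (kVA) for a three-phase load; power factor is ignored."""
    return math.sqrt(3) * volts * amps / 1_000

for name, amps in [
    ("30-ton CRAC, 2 compressors", 110),
    ("30-ton CRAH, 25 HP fan", 23),
    ("600-ton cooling tower", 50),
    ("600-ton water chiller", 1_200),
]:
    print(f"{name}: ~{three_phase_kva(amps):.0f} kVA")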
Electrical Systems and Distribution
Our data center is also going to require both primary and emergency power systems to bring us up to 200 watts/sqft. Data center power systems include the following components:
- Switchgear needed to distribute primary utility power presented by the supplying power company
- Either bus duct or "pipe and wire" distribution from switchgear to facility
- Automatic transfer switches to connect either utility power or emergency backup power to the facility
- Uninterruptible Power Supply (UPS) systems to provide temporary battery power to the facility
- Switchgear to distribute 480V to mechanical equipment and UPS
- Transformers to step down (in the USA) 480V to 208/120V
- Distribution panels to distribute 208/120V to individual user breakers
As a guide, 480V panels require 42 inches of clearance due to the high power involved, the potential for arc flash, and the need for a safe maintenance zone.
To accommodate the HVAC (CRAC or CRAH) equipment, UPSs, switchgear, transformers, and automatic transfer equipment, you can plan on the following metrics, based on CRG West experience (a short sketch after the list shows what these leave in usable space):
· 100w/sqft
- CRAH or CRAC units @ 94 sqft (10 units required) = 940 sqft
- Electrical equipment = 700 sqft
- 10,000 sqft data center M&E requirement = 1,640 sqft
· 150w/sqft
- CRAH or CRAC units @ 94 sqft (15 units required) = 1,410 sqft
- Electrical equipment = 1,000 sqft
- 10,000 sqft data center M&E requirement = 2,410 sqft
· 200w/sqft
- CRAH or CRAC units @ 94 sqft (20 units required) = 1,880 sqft
- Electrical equipment = 1,400 sqft
- 10,000 sqft data center M&E requirement = 3,280 sqft
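To see what those metrics leave behind for servers, the sketch below tabulates the M&E overhead and the remaining usable space for the 10,000 sqft example; the unit counts and electrical-space figures come straight from the list above, and the percentages are simple arithmetic:

GROSS_SQFT = 10_000
SQFT_PER_CRAC = 94

# density (W/sqft): (CRAC/CRAH units required, electrical equipment sqft)
metrics = {100: (10, 700), 150: (15, 1_000), 200: (20, 1_400)}

for density, (units, electrical_sqft) in metrics.items():
    me_sqft = units * SQFT_PER_CRAC + electrical_sqft
    usable = GROSS_SQFT - me_sqft
    print(f"{density} W/sqft: M&E {me_sqft:,} sqft, "
          f"usable {usable:,} sqft ({usable / GROSS_SQFT:.0%})")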
Another way to look at this: if you are planning to use 10,000 sqft as your total available space, you will lose an increasingly large amount of server-usable space as you increase the watts/sqft density within the space. At that point you need to determine whether the loss of usable data center space at high watts/sqft is worth the increased density.
This calculation is only for data center-facing equipment. The actual cooling towers, water chillers, and emergency power generation equipment (including diesel fuel tanks), if included in your space planning requirement, will reduce the space efficiency in any data center location to around 40%. Each component of added redundancy increases the requirement for M&E equipment, as does the density requirement of watts/sqft.
Of course you can increase efficiency through more scientific and efficient data center designs, including hot/cold row design, heat curtains, directed heat exhaust, and dropped ceilings; however, there is a point at which you hit the pure physics of how much heat can be removed, regardless of design. While there are now designs incorporating chilled water into individual racks, and other rack-based cooling and heat extraction designs, most companies cannot afford the cost of building that infrastructure into their construction.
The Risk of Failure
One BTU is the amount of energy required to raise the temperature of one pound of water by one degree Fahrenheit. Thus, if you have an area served by a 30-ton HVAC unit, able to cool and extract energy at 360,000 BTU per hour, you lose all of that cooling and heat removal capacity in the event of a unit failure.
This adds fuel to the debate on raised floor versus flat VCT floor. In a raised floor environment you pressurize the sub-floor with whatever cooling tonnage is available to the data center space. The cold air is forced under the floor and up through grills or openings in the floor, ideally into the supply side of server and data center equipment.
In the VCT environment you are ducting air through custom designs to reinforce high performance cooling. Potentially even worse are individual rack cooling units that may be providing dedicated cooling to individual racks.
In a raised floor environment of 10,000 sqft at 200 watts/sqft, as mentioned above, you would have around 20 x 30-ton cooling units available to remove heat and cool the room. This is a total of about 7.2 million BTU/hr of heat removal capacity. If you lose one unit, you lose about 5% of your total under-floor pressurization and heat removal capacity. Not good, and it may produce some warm spots in the data center, but percentage-wise it is not catastrophic.
On the other hand, if you are using CRAC/CRAH units in a VCT environment with directed airflow, the loss of a single unit comes at a much higher price: potentially 360,000 BTU/hr of heat left unremoved in a localized area. The effect is similar to running 1,055 100-watt light bulbs in a very small, localized area; the rate of heat buildup would be extreme, with little recourse for corrective action other than to immediately position temporary cooling units in the area until the primary CRAC unit is repaired and returned to service.
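The arithmetic behind that comparison can be sketched as follows, using the 20-unit, 200 watts/sqft example above; the hundred-watt-bulb equivalence is just the BTU-to-watt conversion run in reverse:

WATTS_TO_BTU_HR = 3.412
BTU_HR_PER_TON = 12_000
TONS_PER_CRAC = 30
TOTAL_UNITS = 20            # the 10,000 sqft at 200 watts/sqft example

lost_btu_hr = TONS_PER_CRAC * BTU_HR_PER_TON                  # 360,000 BTU/hr
total_btu_hr = TOTAL_UNITS * TONS_PER_CRAC * BTU_HR_PER_TON   # ~7.2 million BTU/hr
lost_watts = lost_btu_hr / WATTS_TO_BTU_HR                    # ~105,500 watts

print(f"Lost heat removal: {lost_btu_hr:,} BTU/hr "
      f"({lost_btu_hr / total_btu_hr:.0%} of total capacity)")
print(f"Equivalent load: ~{lost_watts / 100:.0f} hundred-watt bulbs")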
This also raises the design point that CRAC/CRAH units should never share a common electrical source; if one source of power is disrupted, you do not want to lose 100% of your cooling capacity. In addition, cooling systems should always be connected to emergency power, as it does very little good in a high density data center to have equipment operating without supporting cooling.
Summary
Technically it is possible to solve just about all data center design challenges, even with dense servers and other equipment continuing to push the amount of energy per piece of equipment to higher and higher levels. However, high density comes at a price: a price in how much real estate is required to support high density and redundant power systems, in high density cooling requirements, and potentially in the added cost of raised floor data center areas.
Even when you have designed a data center capable of supporting high density equipment, there is a high risk that a failure in any part of the cooling systems will result in unacceptable amounts of rapid heat buildup in localized areas, which will eventually result in catastrophic failure of computer and communications equipment.