Internet data centers are power hogs, even though we tend to think of moving data offsite to the cloud as saving energy locally: no need for local servers.
Though our online activity uses no paper, it still consumes quite a lot of energy. Data centers account for much of this energy use. These warehouse-sized buildings contain arrays or “farms” of servers, which are essentially souped-up computers that have many uses, including storing data and supporting all the activity on the internet. They are the hardware behind the proverbial “cloud.”
Like the personal computers we all use, servers require electricity to function. Since internet users can call upon them to provide information at any time, they must remain on 24/7. Furthermore, as with any form of electrical activity, the functioning of this large number of servers packed together in a small area can result in overheating, making the need for cooling an additional energy cost for data center managers.
According to data center provider vXchnge, U.S. data centers alone use over 90 billion kilowatt-hours of electricity annually—about what 34 coal-powered plants generating 500 megawatts each produce. ComputerWorld magazine reports that the energy consumption of data centers worldwide will likely account for 3.2 percent of global carbon emissions by 2025—about as much as the airline industry—and as much as 14 percent by 2040.
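To see how those two figures line up, a quick back-of-the-envelope calculation helps; the roughly 60 percent capacity factor below is an assumption made to reconcile the numbers, not a figure from the sources cited.

```python
# Back-of-the-envelope check: 34 coal plants x 500 MW vs. ~90 billion kWh per year.
# The ~60% capacity factor is an assumed value chosen to make the comparison work out;
# real plants vary.
plants = 34
capacity_mw = 500
hours_per_year = 8760
capacity_factor = 0.60  # assumption

annual_mwh = plants * capacity_mw * hours_per_year * capacity_factor
annual_kwh = annual_mwh * 1000
print(f"~{annual_kwh / 1e9:.0f} billion kWh/year")  # ~89 billion kWh, close to the 90 billion figure
```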
In light of all this, finding ways to cut energy use has become a big priority in the industry. One of the simplest strategies is to locate data centers in cool climates and use outdoor air to counter excessive heating. Alternative options include cooling inlet air by running it underground, or using a nearby water source for liquid cooling. Another issue is separating the hot air produced by servers from the colder air used to cool them—no easy task if the servers are all housed together. But there are plenty of cheap solutions. Google, for example, uses low-cost dividers from meat lockers for this purpose.
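As a rough illustration of why siting matters, the sketch below counts how many hours a year outdoor air alone would be cool enough to use directly; the inlet temperature threshold follows common ASHRAE-style allowances, and the hourly temperature series are made up for the example, not taken from the article.

```python
# Illustrative sketch: estimating how many hours per year outdoor air alone could cool
# a data hall ("free cooling" / air-side economization). In practice hourly_temps_c
# would come from local weather data.
def free_cooling_hours(hourly_temps_c, max_inlet_c=27.0):
    """Count hours where outdoor air is at or below the allowed server inlet temperature."""
    return sum(1 for t in hourly_temps_c if t <= max_inlet_c)

# Toy example: a cool climate vs. a hot one (made-up temperature series, ~8760 samples each).
cool_site = [5, 8, 12, 18, 22, 25, 26, 24, 19, 13, 8, 4] * 730
hot_site = [20, 24, 29, 33, 36, 38, 37, 34, 30, 26, 22, 20] * 730

print(free_cooling_hours(cool_site), "of 8760 hours")  # nearly all hours
print(free_cooling_hours(hot_site), "of 8760 hours")   # far fewer
```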
Another way data centers can reduce cooling costs is to design servers that can operate at high temperatures without overheating. Recent research shows that servers can operate at much higher temperatures than initially believed without compromising safety or efficiency. But not all data centers are comfortable letting their servers run hot. Other ways to make server farms more efficient include optimizing grid-to-server electrical conversions and reducing the energy required by “sleeping” servers.
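One way engineers keep score on these savings is a metric called power usage effectiveness (PUE): total facility power divided by the power that actually reaches the IT equipment. The sketch below uses assumed numbers purely to show how a warmer hall and better grid-to-server conversion shrink the overhead.

```python
# Illustrative sketch, not from the article: PUE = total facility power / IT (server) power.
# All figures below are assumptions, used only to show how letting servers run warmer
# (less chiller work) and improving power conversion move the ratio.
def pue(it_kw, cooling_kw, conversion_kw):
    return (it_kw + cooling_kw + conversion_kw) / it_kw

it_load_kw = 1000.0  # assumed server load

conventional = pue(it_load_kw, cooling_kw=450.0, conversion_kw=120.0)  # heavy chiller use
warmer_hall = pue(it_load_kw, cooling_kw=200.0, conversion_kw=100.0)   # raised setpoint, better conversion

print(f"conventional PUE ~{conventional:.2f}, warmer hall ~{warmer_hall:.2f}")
```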
The good news is the industry is making strides in the right direction. Apple, Facebook and Google all power 100 percent of their data center and other operations with renewables, albeit through the purchase of “renewable energy credits” akin to carbon offsets that air travelers can buy to keep their carbon footprints in check. Microsoft is moving toward 70 percent renewable energy by 2023, while laggard Amazon still only gets about half its data center power from renewables. And Switch, one of the largest U.S.-based data center companies, transitioned all of its facilities to run on nothing but renewables in 2016, including the nation’s largest data center in Reno, Nevada.
CONTACTS: “How to Improve Data Center Power Consumption & Energy Efficiency,” vxchnge.com/blog/power-hungry-the-growing-energy-demands-of-data-centers; “Why data centres are the new frontier in the fight against climate change,” bit.ly/data-center-emissions; “Amazon is breaking its renewable energy commitments, Greenpeace claims,” bit.ly/amazon-laggard.
EarthTalk® is produced by Roddy Scheer & Doug Moss for the 501(c)3 nonprofit EarthTalk. See more at https://emagazine.com. To donate, visit https://earthtalk.org. Send questions to: [email protected].
wzrd1 says
There are a few other strategies for managing server heat output. Using large shared DC power supplies leaves one pair of power supplies feeding multiple servers, cutting heat output because there are fewer power supplies per server (compared to two AC power supplies in each server).
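A rough sketch of that power-supply point, with assumed efficiency figures rather than measured ones:

```python
# Rough sketch of the shared-DC-supply argument: fewer, larger power supplies running
# closer to their efficient operating point waste less power as heat. Efficiency figures
# and loads below are assumptions for illustration only.
servers = 40
load_per_server_w = 300.0
it_load_w = servers * load_per_server_w

# Two lightly loaded AC supplies per server, assumed ~88% efficient at low load.
ac_input_w = it_load_w / 0.88

# One shared pair of large DC supplies for the rack, assumed ~94% efficient near full load.
dc_input_w = it_load_w / 0.94

print(f"AC per-server supplies: ~{ac_input_w - it_load_w:.0f} W lost as heat")
print(f"Shared DC supplies:     ~{dc_input_w - it_load_w:.0f} W lost as heat")
```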
Another popular method is virtualizing one’s servers, with one monster 8U unit running dozens of virtual server instances. The mythical DNC server hardware was just such an instance, and the FBI has the entire server image with databases, so as usual, Trump is full of sh – erm, hot air.
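And a similarly rough sketch of the consolidation argument, with made-up server counts and wattages:

```python
# Illustrative only: consolidating many lightly used physical servers onto one
# virtualization host. Counts and wattages are assumptions, not measurements.
physical_servers = 24
idle_heavy_draw_w = 250.0      # each mostly idle box still draws significant power
virtualization_host_w = 900.0  # one larger host running the same workloads as VMs

before_w = physical_servers * idle_heavy_draw_w
after_w = virtualization_host_w
print(f"before: {before_w:.0f} W, after: {after_w:.0f} W, saved: {before_w - after_w:.0f} W")
```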
Heat management is typically conducted per rack, usually with the cooled air entering the bottom and the hot air exhausted from the top into receiving ductwork.
Leave the monitors out of the rack, too: they chew up valuable real estate and waste energy, and a monitor won’t let you manage a virtual machine anyway.
Proper data centers will also have some large evaporative cooling towers for the HVAC used to cool those chip cookers. Trust me, in one small facility with a mere 60 servers, the room could and did hit 120°F when the HVAC failed, and that’s serious abuse for the network switches and servers alike. I’d end up replacing at least one cooling fan per month after an HVAC failure, and every week I’d have to reseat at least one RAID drive in an array! That happened often enough that I ended up cobbling together an environmental sensor out of SNMP queries for our main Catalyst 6509 switch’s backplane temperature (no budget for real equipment).
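For anyone curious what that kind of improvised monitoring can look like, here is a minimal sketch of polling a switch temperature sensor over SNMP. The host, community string, OID instance index, alert threshold, and the availability of the net-snmp snmpget tool are all assumptions; verify the exact OID and index for your own hardware.

```python
# Minimal sketch of an improvised temperature monitor: poll a switch's temperature sensor
# over SNMP and complain when it runs hot. Assumes the net-snmp "snmpget" CLI is installed.
# The OID is Cisco's ciscoEnvMonTemperatureStatusValue (CISCO-ENVMON-MIB); the ".1" instance
# index, host and community string below are placeholders.
import subprocess
import time

HOST = "192.0.2.10"                    # placeholder switch address
COMMUNITY = "public"                   # placeholder read-only community
OID = "1.3.6.1.4.1.9.9.13.1.3.1.3.1"   # temperature in degrees C (instance index assumed)
ALERT_AT_C = 45

def read_temp_c():
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, OID],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

while True:
    temp = read_temp_c()
    if temp >= ALERT_AT_C:
        print(f"WARNING: switch backplane at {temp} C")
    time.sleep(60)
```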
There are a wide variety of other thermal control techniques, enough so that it’s its own specialty.