Energy and the Information Infrastructure Part 1: Bitcoins & Behemoth Datacenters


A note about this series:

It has been said that data is the new oil. Both domains are critical to society, entailing massive infrastructures and globe-spanning businesses. Of course, one produces energy while the other consumes it. It may seem inconceivable that a single smartphone is responsible for as much electricity demand as a refrigerator, but that’s what the physics dictates. This series explores the energy realities ‘buried’ in the hardware of our age of ubiquitous computing and communications.

Information has an energy cost. Even the Pony Express had real-world energy costs: every horse along its iconic 1,900-mile network of stations consumed feed, a biofuel, equivalent to about 100 gallons of oil annually. The service set records, moving mail across that vast distance in just 10 days. That translates, in today’s terms, to a data rate of roughly 1 kilobyte per second.
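
How one might arrive at that data rate is worth a back-of-envelope reconstruction; the mochila weight, sheet count, and per-page digital size in this minimal sketch are illustrative assumptions, not figures from the historical record:

```python
# Rough reconstruction of the Pony Express "data rate" claim.
# Assumptions (illustrative): a rider's mochila carried ~20 lb of mail;
# at ~5 g per sheet that is ~1,800 pages; digitizing each handwritten
# page as a scanned image of ~0.5 MB is one plausible equivalence.

POUNDS_OF_MAIL = 20
GRAMS_PER_SHEET = 5
MB_PER_SCANNED_PAGE = 0.5
TRIP_DAYS = 10

sheets = POUNDS_OF_MAIL * 453.6 / GRAMS_PER_SHEET   # ~1,800 pages
payload_mb = sheets * MB_PER_SCANNED_PAGE           # ~900 MB
seconds = TRIP_DAYS * 24 * 3600                     # 864,000 s

print(f"~{payload_mb * 1000 / seconds:.1f} kB/s")   # ~1.0 kB/s
```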

It’s not news that electricity now enables a billion-fold higher data rate, propelling vastly greater volumes of information over hundreds of millions of miles of fiber and wireless networks. But it took Satoshi Nakamoto, bitcoin’s pseudonymous founder, to bring global attention to the scale of the ‘hidden’ energy costs of processing and moving data.

Creating bitcoins, a virtual currency, requires physical warehouses full of compute power churning through cryptographic puzzles by brute-force hashing. Those microprocessors expend roughly as much energy to create a single bitcoin as is needed to mine a single ounce of gold. (Or, in Pony Express terms, about one horse-month’s worth of energy per bitcoin.)
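
For a rough sense of the per-coin energy arithmetic, here is a minimal sketch; the network power draw and coin-issuance rate are order-of-magnitude assumptions circa 2018, not measured values:

```python
# Order-of-magnitude energy per newly minted bitcoin.
# Assumptions (illustrative): the global mining network draws ~5 GW;
# the protocol mints ~1,800 new coins per day (144 blocks x 12.5 BTC).

NETWORK_GW = 5.0
COINS_PER_DAY = 144 * 12.5                   # block-reward schedule circa 2018

kwh_per_day = NETWORK_GW * 1e6 * 24          # GW -> kW, times 24 hours
kwh_per_coin = kwh_per_day / COINS_PER_DAY

print(f"~{kwh_per_coin:,.0f} kWh per bitcoin")   # tens of megawatt-hours
```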

While estimates of bitcoin mining’s power appetite have ignited debate in the engineering community, it’s clear that the electricity collectively consumed by the world’s so-called bitcoin mines now rivals the kilowatt-hours consumed by the state of New Jersey. The biggest concentration of bitcoin mines is in Inner Mongolia, powered by a cheap coal-fired grid.

Even after creation, every bitcoin transaction is enabled and validated on a blockchain network by myriad distributed computers, all consuming energy, every time. The last time the underlying transport costs of a currency were so important was in the 12th century, when the Knights Templar hauled physical gold in carts.

But just as gold mining is today dwarfed by the scale of global mining for all other minerals, so too is bitcoin mining dwarfed by the gargantuan scale of data processed for everything else.

Every 30 seconds the global Internet transports and processes a greater quantity of data than is held by the Library of Congress. Every byte in that tsunami of data requires power. The hardware used to create, transport and store data constitutes the world’s newest — and before long, one of the biggest — energy-consuming infrastructures.

It’s difficult to visualize just how big the business of data has become. Neither analogies to libraries’ worth of data nor impressive trillion-dollar stock valuations directly illuminate the sheer scale of the underlying hardware. But at the beating heart of the World Wide Web’s virtuality, we find something familiar: huge buildings, so-called datacenters. Datacenters are, as the name says, buildings where data is stored, processed and massaged. For real estate firms that track and monetize such buildings, they’re just another form of real estate, filled with digital hardware instead of people and office furniture.

But the term “datacenter” is insufficiently evocative; it’s a word that came into common use in the 1970s with the proliferation of rooms full of computers. Massive “office centers” came to be called skyscrapers in the 1880s when technology unleashed that new class of real estate. Today’s datacenters bear as much resemblance to ‘70s computer rooms as a skyscraper does to an H&R Block office at a local strip mall. But “datacenter” is, so far, the term we’re stuck with.

The world’s biggest datacenter, located near Reno, Nevada, has twice the square footage under its roof as the world’s tallest skyscraper, the Burj Khalifa in Dubai. The latter is configured to house about 100,000 bio-processors (a.k.a. humans), each of which, not incidentally, generates about 100 watts of heat. Meanwhile, the former houses some 200,000 silicon processors, each of which consumes over 100 watts of electricity and releases it all as heat, 24x7. For a glimpse at the aggregate energy implications of datacenters, compare some numbers around these real estate plays.
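
The aggregate heat comparison is simple multiplication; a minimal sketch using the round numbers above:

```python
# Heat load: ~100,000 occupants at ~100 W each vs. ~200,000 servers at
# ~100 W each (a server's consumed watts all end up as heat).

occupants, watts_per_person = 100_000, 100
servers, watts_per_server = 200_000, 100

print(f"Burj Khalifa bio-heat:   ~{occupants * watts_per_person / 1e6:.0f} MW")
print(f"Datacenter silicon heat: ~{servers * watts_per_server / 1e6:.0f} MW, 24x7")
```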

The two classes of buildings cost roughly the same to build, but far more datacenters are being built. And a square foot of datacenter rents for five times as much as a square foot of skyscraper (per CBRE, a global real estate firm), which explains why datacenters are such an attractive focus for real estate investors. More to the subject at hand, a square foot of datacenter draws more than 100 times as much electricity as a square foot of skyscraper. Datacenter firms thus more often talk about their buildings in terms of megawatts. Still, continuing in standard real-estate square-footage terms, consider:

At the top of the respective food chains, we find that the 10 biggest datacenters in the world have more square footage collectively than do the top 10 skyscrapers. In total, there are more than 5,000 enterprise-class datacenters in the world, compared to about 1,500 enterprise-class ‘office centers’, i.e., skyscrapers taller than 40 stories. (FYI: the Burj Khalifa is 163 stories and there are fewer than 150 skyscrapers globally equal to or taller than the 102-story Empire State building.) And there are another eight million small datacenters in the world; again in real estate terms, think of those as digital corner stores and strip malls.

Illustrative of what’s coming next: earlier this year a new entrant into this market, telegraphically named EdgeCore Internet Real Estate, launched with a $2 billion war chest (anchored in Singapore sovereign-wealth money) to build out Burj Khalifa-class datacenters. One of its first million-square-foot buildings is planned for Northern Virginia, where more than a dozen similar firms have already installed the planet’s largest concentration of mega-class datacenters. That region, in the penumbra of Washington, D.C., is to datacenters what Hong Kong, home to the world’s largest concentration of enterprise-class “office centers,” is to skyscrapers.

But EdgeCore, like all datacenter real-estate firms, more typically talks in power terms; its goal is to build out about one thousand megawatts of digital load. That’s just one firm. So it should be no surprise to learn that, while there are no official reporting mechanisms for precise tracking, datacenters are variously estimated to already consume some 2 percent of the world’s electricity.

This means datacenters use more energy than global aviation. Or, in electric-car terms, that’s about 50-fold more kilowatt-hours than the world’s 3 million EVs consume. Put differently: in a future with 50 times more EVs, those cars will use as much electricity as today’s datacenters; but by then, the proliferation of datacenters will have blown well past the global EV demand for kilowatt-hours.
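
The EV half of that comparison is easy to check; a sketch, assuming roughly 22,000 TWh of annual world electricity consumption and about 3,000 kWh per EV per year (both hedged round numbers):

```python
# Datacenters at ~2% of world electricity vs. ~3 million EVs.

WORLD_TWH = 22_000          # rough annual global electricity consumption
DATACENTER_SHARE = 0.02
EVS = 3e6
KWH_PER_EV_YEAR = 3_000     # rough annual draw of one electric car

datacenter_twh = WORLD_TWH * DATACENTER_SHARE    # ~440 TWh
ev_twh = EVS * KWH_PER_EV_YEAR / 1e9             # ~9 TWh

print(f"Datacenters ~{datacenter_twh:.0f} TWh vs. EVs ~{ev_twh:.0f} TWh "
      f"(~{datacenter_twh / ev_twh:.0f}x)")
```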

Even these surprising trends understate what’s coming. The ascent of artificial intelligence (AI) promises a fundamental shift in power use because AI is deeply compute-intensive. In the ‘old’ world of mere software, CPUs often perform a task, then throttle down. But in the AI world of GPUs and entirely new classes of dedicated AI silicon, microprocessors operate more like Formula 1 cars: flat out, all the time. To gauge what that implies in power terms, look to the smartest computer on the planet today.

This past June, when the new Summit supercomputer at Oak Ridge National Laboratory clocked in at 200 petaflops, America regained the title of having the planet’s most powerful machine, besting the 93 petaflops of the former No. 1 in China. Summit is, at its core, an AI machine on steroids. Both its software and hardware pave the way to a future of distributed AI supercomputing that will proliferate inside ‘conventional’ datacenters. For the power cognoscenti: Summit’s power demand per square foot is some 20-fold greater than what conventional ‘iron’ creates in today’s datacenters.
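
That 20-fold figure can be sanity-checked with round numbers; Summit’s widely reported ~13-megawatt draw, a footprint of roughly two tennis courts, and a conventional datacenter density of ~100 watts per square foot are all hedged approximations here:

```python
# Summit's power density vs. a conventional datacenter's.

SUMMIT_MW = 13                 # widely reported draw (assumption)
SUMMIT_SQFT = 5_600            # ~ two tennis courts (assumption)
CONVENTIONAL_W_PER_SQFT = 100  # typical datacenter 'iron' (assumption)

summit_density = SUMMIT_MW * 1e6 / SUMMIT_SQFT   # ~2,300 W/sq ft
print(f"Summit: ~{summit_density:,.0f} W/sq ft, "
      f"~{summit_density / CONVENTIONAL_W_PER_SQFT:.0f}x conventional")
```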

And that’s not the whole story. None of the above accounting includes the energy used in the wired and wireless networks to move data to and from the datacenters, or for other related hardware, or in manufacturing all the equipment used throughout the infrastructure. We explore those issues later in this series. For now, suffice to observe that, despite their scale, datacenters are in fact the iconic “tip of the iceberg” and account for only about one-fifth to one-third of the total electricity consumption of everything associated with information tech.

Apropos the introduction to this series: one can arithmetically translate all global digital energy use into per-person, per-smartphone terms. With 3.5 billion smartphones responsible for about 60 percent of all Internet traffic, the math puts each smartphone’s average annual electricity use at about the same level as a high-efficiency household refrigerator’s.
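
A sketch of that translation, combining the article’s own figures (datacenters at 2 percent of world electricity and one-fifth to one-third of total digital energy; smartphones at 60 percent of traffic) with a hedged ~22,000 TWh world electricity total:

```python
# Per-smartphone share of global 'digital' electricity use.

WORLD_TWH = 22_000
DATACENTER_SHARE = 0.02            # datacenters: ~2% of world electricity
DATACENTER_FRACTION_OF_ICT = 0.25  # midpoint of the 1/5-to-1/3 range
SMARTPHONE_TRAFFIC_SHARE = 0.60    # ~60% of all Internet traffic
SMARTPHONES = 3.5e9

ict_twh = WORLD_TWH * DATACENTER_SHARE / DATACENTER_FRACTION_OF_ICT
kwh_per_phone = ict_twh * SMARTPHONE_TRAFFIC_SHARE * 1e9 / SMARTPHONES

print(f"~{kwh_per_phone:.0f} kWh per smartphone per year "
      "(vs. roughly 300-400 kWh for an efficient refrigerator)")
```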

The nub of the story is that there is now an entirely new and unprecedented kind of energy-consuming infrastructure. The jury is out on just how much more energy it will eventually consume. For now, credit Greenpeace with observing that the “internet will likely be the largest single thing we build as a species.”

Energy, of course, is only one of several lenses through which to view society’s infrastructures and operations, but like oxygen and food, it is utterly essential. Perhaps the best scholarship and data available on the history, evolution and nature of energy in service of society is found in economics professor Roger Fouquet’s seminal book “Heat, Power and Light: Revolutions in Energy Services.” Fouquet maps, from the year 1500 to 2000, how humanity has used energy in terms of four core “energy services”: heat, transport, power, and light. Until very recently, essentially all energy use could be accounted for within those four services.

For most of the 500-year history Fouquet mapped, energy for domestic and industrial heat utterly dominated the picture. It wasn’t until 1850 that society used as much energy for transportation as was used to keep warm — by then, the keeping warm part had risen some 500 percent. And then it took until 1950 for illumination to use as much energy as transportation had in 1850. (By 1950, transportation energy use had risen about 500 percent as well compared to 1850.)

Society has not seen a new “energy service” vector arrive for two centuries, until now. Fouquet’s analysis doesn’t include energy in service of information, though in theory he could have calculated it across those same five centuries. The energy cost of the paper in a single book, while far less now than centuries ago, is still equivalent to the fuel consumed driving a Prius 10 miles. There were also energy costs associated with building the libraries that housed the books, and so on. But to be fair to Fouquet, those numbers were so tiny compared to energy for heating that they’d disappear from visibility.

The point when the energy cost of information became visible can be traced to 1946. The world’s first, and then only, datacenter was ENIAC’s room full of some 18,000 burning-hot vacuum tubes, which demanded 150 kW. But the proliferation of the new data infrastructure didn’t begin to explode until the Internet’s expansion at the end of the 20th century; that is, it began just as Fouquet’s history ends. Now the power level of a single ENIAC is found in every dozen square feet inside the billions of square feet of datacenters.

There is no dispute that a new “energy service” has arrived. The core question is whether in fact the trajectory will look like all others in history. Thus it’s relevant to note what Fouquet describes regarding his scholarship on all other energy services:

A key idea in the book is the tendency for markets to find ways of consuming energy more efficiently, producing ever-cheaper energy services and, in the long run, consuming greater amounts of energy - with significant effects on wellbeing and the environment.

The odds are the “information service” trajectory will look the same as for other services Fouquet mapped. By 2050, society will likely use more energy for data than was used for illumination in 1950.

It is the second part of Fouquet’s latter point — that rising use of every energy service has benefits for humanity’s “wellbeing” as well as relevance to the environment — that animates Greenpeace’s public campaign to shame datacenter operators into subsidizing more wind and solar. It is understandable that big datacenter operators, whether Amazon, Apple, Google or Facebook, are targets for activists with opinions about how electricity should be created to serve billions of smartphones. It’s a familiar pattern seen also in how Exxon, Shell and BP are often targeted for how energy is used by billions of vehicles. But how electricity is produced is entirely irrelevant to understanding just how much energy data consumes, and will yet consume.

In 2008, contemporaneous with the arrival of the Great Recession, the Department of Energy released a report on datacenter energy use in America, concluding that those buildings accounted for about 2 percent of U.S. electricity consumption. In an update published two years ago, the researchers concluded that total U.S. datacenter energy demand had risen a meager 4 percent between 2010 and 2014, after growing 24 percent from 2005 to 2010 and 90 percent from 2000 to 2005. Their conclusion was that datacenter energy use has entered a kind of new normal; i.e., that data use will now proliferate, but datacenter energy consumption will stay essentially flat or even decline.

We will soon see whether that is in fact what happens. In the meantime, here is a counter-forecast based on some things we know to be true.

Start with the tendency of forecasters to ignore the obvious elephant in the room: the recession. Trends seen during a recession reveal little except how recessions impact people and businesses. From 2008 until 2014, capital investments in hardware of all kinds were either stagnant or slowed. Serious capital spending on digitalization in most parts of the economy has only now begun. Federal Reserve data shows, for example, that manufacturing spending on info tech was both below the historic rate and flat during the recession. The recession thus tamped down even what seemed like already amazing growth in digital traffic. With the economic recovery, one would expect data traffic to take off, as it has. Thus we note that CBRE’s tracking shows datacenter capital spending in 2017 was double that seen in 2016 and eclipsed the prior three years of spending combined. Others that follow these trends, from PwC to Cisco, point to a torrid pace in datacenter construction.

Meanwhile, during that recession-infused period of modest expansion, there were three significant things that datacenter operators were able to do to improve energy efficiency. All were essentially one-time gains and all centered on what can only be called the mature old-world aspects of power.

First, engineers optimized the cooling systems that draw heat away from the silicon (and, collaterally, learned they can run the silicon a little hotter than was thought prudent in the early days, further reducing cooling needs). Second, power management of central processing units (CPUs) progressed from a first era of being, in effect, simply “on” to being dynamically proportional to demand. And third, software allowed a so-called “virtualization” of the CPUs: rather than dedicating computers to specific tasks, everything gets shared (virtualized), so that almost no computers sit idle burning power (the number cannot reach zero). Further gains are of course possible in all three areas but, as with all mature energy systems, progress is now on an asymptote. The proverbial low-hanging fruit has been harvested.
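
A toy model of the second and third gains, with purely illustrative wattage and utilization figures (no real fleet is being described):

```python
# Toy model: idle-heavy dedicated servers vs. power-proportional,
# virtualized ones. All watts and utilization figures are illustrative.

def fleet_watts(servers: int, util: float, idle_w: float, peak_w: float) -> float:
    """Average draw: an idle floor plus a demand-proportional component."""
    return servers * (idle_w + util * (peak_w - idle_w))

# Old world: 1,000 dedicated servers, each ~10% busy, with a high idle floor.
old = fleet_watts(servers=1_000, util=0.10, idle_w=150, peak_w=250)

# New world: virtualization consolidates the same work onto 200 servers at
# ~50% utilization, with power management lowering the idle floor.
new = fleet_watts(servers=200, util=0.50, idle_w=50, peak_w=250)

print(f"Dedicated: {old / 1e3:.0f} kW; virtualized: {new / 1e3:.0f} kW "
      f"({old / new:.1f}x reduction)")
```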

Still, there is another notable efficiency gain coming: the hyper-centralization of computing into so-called hyperscale datacenters, replacing more expensive and less efficient small distributed datacenters. This is, in short, the Cloud. (Perhaps “hyperscale,” or maybe “cloud-center,” will be the term that comes into common usage, much as “skyscraper” did a century ago.)

Hyperscale datacenters are the ‘ginormous’ billion-dollar buildings, the Burj Khalifas of the data world. It bears noting that Google’s engineers anticipated, and perhaps stimulated, this trend with the 2009 publication of a seminal research document, “The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines.” In other words, a one-million-square-foot datacenter is not a building full of computers but really a single monstrous multi-megawatt-class computer. Those Google engineers observed a decade ago:

Energy efficiency is a key cost driver for WSCs [warehouse-scale computers], and we expect energy usage to become an increasingly important factor of WSC design. The current state of the industry is poor: the average real-world datacenter and the average server are far too inefficient, mostly because efficiency has historically been neglected and has taken a backseat relative to reliability, performance, and capital expenditures.

Now that the data world is a full generation past the engineering of circa 2009, WSCs are being built out with state-of-the-art efficiency. Only a handful of Burj Khalifa-class datacenters exist or are planned, but there are already about 400 hyperscale datacenters in the world, a total expected to reach 500 within two years. About one-half of all datacenter compute power will soon reside in hyperscale buildings.

As the data processing now done in millions of extant distributed datacenters migrates into hyperscale central ‘utility’ buildings, the far greater energy efficiency of the latter will of course reduce the energy used for existing data levels. (It’s impossible to ignore, and we’ll return to this later, that the same pundits who laud utility-scale efficiency for computing seem not to notice that the physics of scale efficiency also attends electricity generation.) Thus, at the macro level, there is one central question: will the demand for data now grow faster than the rate of improvement in the energy efficiency of data processing?

The answer to that question, at a high level of abstraction rather than in engineering specifics, is found in Fouquet’s conclusion quoted above: more efficiency leads to more net energy use. And, we note, there are no forecast improvements in underlying efficiency that match the 300 percent increase in data generation that Cisco now expects over the next five years alone.
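
That race reduces to compound multipliers. A minimal sketch, reading Cisco’s 300 percent increase as a 4x traffic multiplier and granting, generously, an assumed 2x efficiency gain over the same five years:

```python
# Net energy growth = (data growth) / (efficiency gain) over five years.

TRAFFIC_MULTIPLIER = 4.0     # a 300% increase means 4x (per Cisco forecast)
EFFICIENCY_MULTIPLIER = 2.0  # generous assumed five-year efficiency gain

print(f"Net datacenter energy in five years: "
      f"~{TRAFFIC_MULTIPLIER / EFFICIENCY_MULTIPLIER:.0f}x today's")
```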

On top of that, there is another data-centric trend that is an artifact of our time that was not part of the energy calculus a decade ago. Even as small, underperforming and low-efficiency datacenters are finding their tasks moved into the highly efficient Cloud, we now see an explosion of activity in so-called “edge computing.” The latter is a new term for, well, small distributed datacenters.

Edge computing, however, is driven not by economics (the centralized Cloud will always be cheaper) but by physics. The “edge” refers to the placement of small-scale datacenters at the edges of networks, i.e., close to where data are generated, for applications that require near-instantaneous processing. Included, for example, are virtual- and augmented-reality systems, high-performance games, autonomous vehicles, and any artificial intelligence application that interfaces with physical systems; in that last case, real-time analysis and control can be vital for safety in emerging cyber-physical systems. It’s simply not feasible to move gargantuan data sets quickly to far-off, low-cost Cloud facilities. The speed of light is, amazingly, just not fast enough.
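
The constraint is easy to make concrete. A sketch of the best-case physics, assuming light in fiber travels at about two-thirds of c and ignoring all routing and processing delays (which only make matters worse):

```python
# Best-case fiber round-trip time to a distant cloud facility.

C_KM_PER_S = 300_000
FIBER_FRACTION = 2 / 3   # light in glass travels at roughly 2/3 of c
DISTANCE_KM = 1_600      # ~1,000 miles to a far-off cloud region (assumption)

rtt_ms = 2 * DISTANCE_KM / (C_KM_PER_S * FIBER_FRACTION) * 1_000
print(f"Best-case round trip: ~{rtt_ms:.0f} ms")   # ~16 ms

# For context: a car at 70 mph moves ~0.5 m in those 16 ms, and commonly
# cited VR 'motion-to-photon' budgets are ~20 ms, leaving no margin.
```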

In the indelicate but accurate words of the CEO of IO, a start-up edge-computing company: “Speed of light sucks.” There is no small irony in the fact that each IO edge datacenter uses an ENIAC scale of power: 150 kW per machine. And while IO is planning 100 edge datacenters by 2020, it is far from alone in this play. Giant AT&T, for example, is fully committed to edge datacenters and has a network of tens of thousands of cell towers that are, by definition, on the “edge” of networks. Similarly, giant HPE recently announced a $4 billion commitment to next-generation edge hardware.

In sum, the datacenter world is not a mature industry on a glideslope of slow growth, but one at a turning point comparable to that of circa 1998, when warehouse-sized datacenters first displaced room-sized ones. We will eventually conclude this series, as noted earlier, with a look at how all that electricity is produced, though the physics of microprocessors is indifferent to whether the energizing electrons originate from the photovoltaic effect or from electromagnetic induction.

Nonetheless, it’s impossible to avoid recent claims by some tech giants, such as Apple and Facebook, that their datacenters run entirely on wind and solar. The physics reality is that all datacenters run on a local utility grid. And since electricity is fungible, the electricity actually energizing CPUs is precisely pro rata whatever energizes that grid, whether 70 percent hydro in Washington, 50 percent coal in Iowa, or 50 percent natural gas in Florida and California. That also means today’s global information services derive electricity from the same fuel mix as the global average: 40 percent coal, 22 percent natural gas, 16 percent hydro, 11 percent nuclear, 4 percent wind, 4 percent oil, and about 1 percent solar.
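
The pro rata point is just a weighted average over the local grid’s fuel mix; a minimal sketch using the global shares cited above and a hypothetical datacenter’s annual consumption:

```python
# A datacenter's 'actual' fuel mix is its grid's mix, pro rata.

GLOBAL_GRID_MIX = {   # shares cited in the text
    "coal": 0.40, "natural gas": 0.22, "hydro": 0.16,
    "nuclear": 0.11, "wind": 0.04, "oil": 0.04, "solar": 0.01,
}

DATACENTER_MWH = 100_000   # hypothetical annual consumption

for fuel, share in GLOBAL_GRID_MIX.items():
    print(f"{fuel:>11}: {share * DATACENTER_MWH:>7,.0f} MWh")
```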

Of course, regulatory legerdemain allows one to pretend that a purchase of wind or solar power somewhere else is, by fiat rather than fact, attached to a particular building. This is no different from the once-popular idea of paying to plant a tree somewhere as an indulgence for taking a trip on an aircraft. No amount of vigorous PR changes the fact that the aircraft necessarily burns aviation fuel. And no similar ‘credit’ changes the fact of a datacenter’s 24x7 need for power.

The next wave of demand to power data is coming. And it’s likely coming far faster than changes in how that power is supplied.

Mark P. Mills is a senior fellow at the Manhattan Institute and a McCormick School of Engineering Faculty Fellow at Northwestern University, and the author of “Work in the Age of Robots,” just published by Encounter Books. Support for the research in this series came in part from the Forbes School of Business & Technology at Ashford University, where Mills serves on the Advisory Board.
