Energy and the Information Infrastructure Part 5: Robots Eat Too: Huang’s Law & The Voracious Appetite of Artificial Intelligence


A note about this series:

A half-century after the terms “machine learning” and “artificial intelligence” were first coined, the age of AI is now in sight, emerging from three symbiotic silicon infrastructures: ubiquitous sensors generating massive data, high-performance communication networks, and the post-Moore’s Law era where Huang’s Law propels exascale computing. It’s no coincidence that this emulates the human architecture: a distributed sensory apparatus, labyrinthine nervous system, and a brain. And, as with human intelligence, AI energy use is a feature not a bug.

Click here for Parts 1, 2, 3 and 4


Will future bureaucrats impose CAFE-like fuel efficiency standards on the engines of artificial intelligence (AI)?

After all, computing necessarily uses energy. And we know that AI computers under the hood of a useful robocar won’t have extension cords. If you do the math, when the all-robocar future does arrive, the energy used just by all those automotive AI ‘brains’ will itself outstrip the fuel used by all the cars on California roads today. That is, as they say, not nothing.

Or, put differently, the energy needed by silicon sensors and logic to navigate cars will degrade a vehicle’s propulsion fuel mileage by at least 10%, likely more. Measured in Detroit rather than Silicon Valley terms, that means a robocar’s brain will burn fuel at the equivalent rate of 150 mpg. That may sound impressive, but it’s at least 1,000 times less efficient than the fuel use of the average natural, if addled, brain.
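For readers who want to check that arithmetic, here is a rough sketch. The numbers are assumptions of ours for illustration: a roughly 2 kW onboard compute-and-sensor load (a figure we return to below), gasoline at about 33.7 kWh per gallon, and around 20% efficiency converting fuel into onboard electricity.

```python
# Back-of-the-envelope check (assumptions are ours, not measurements): a ~2 kW
# onboard AI load, gasoline at ~33.7 kWh/gallon, ~20% fuel-to-electricity
# conversion efficiency, and a 25 mpg baseline vehicle at highway speed.

AI_LOAD_KW = 2.0            # assumed robocar "brain" load
FUEL_KWH_PER_GALLON = 33.7  # energy content of gasoline
ENGINE_TO_ELECTRIC_EFF = 0.20
SPEED_MPH = 60
CAR_MPG = 25                # assumed baseline fuel economy

fuel_kwh_per_hr = AI_LOAD_KW / ENGINE_TO_ELECTRIC_EFF
gallons_per_hr = fuel_kwh_per_hr / FUEL_KWH_PER_GALLON
brain_mpg = SPEED_MPH / gallons_per_hr              # ~200 mpg-equivalent

combined_mpg = 1 / (1 / CAR_MPG + 1 / brain_mpg)    # ~22 mpg overall
penalty_pct = 100 * (1 - combined_mpg / CAR_MPG)    # ~11% more fuel per mile

print(f"brain alone: ~{brain_mpg:.0f} mpg-equivalent")
print(f"fuel-economy penalty: ~{penalty_pct:.0f}%")
```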

With the emergence of AI as an entirely new class of computing, one can ask interesting questions: How much energy will it take to drive to work, or compose a symphony, or discover a new drug? We’re not talking about the electricity to power an electric car, a concert hall, or laboratory. Instead the question is about the energy needed to power the brains.

Biophysics provides an answer with regard to natural as opposed to artificial intelligence. Of course, the convention when it comes to human intelligence is to count dollars, not kilowatt-hours, as the measure of the time-based cost of putting neurons to work. (Truth be told, that’s the convention that matters for AI too.) The astoundingly efficient wetware cosseted within the human cranium runs on just 12 watts.

Thus, your brain doesn’t consume even a hundredth of a kilowatt-hour (kWh) while you drive to work; in car terms, the brain is getting at least 150,000 mpg. That assumes, by the way, that fully half of your brain is devoted to navigation (doubtless an overestimate) while the other half multi-tasks, for better or worse: listening to podcasts, chatting with passengers, or just daydreaming. As for the brain of a composer writing the score for a movie, that burns about 10 kWh. And we can count the collective burn by the brains of a team of a dozen researchers working, say, five years to discover a new drug; it’s something like 1,000 kWh. These are all trivial quantities of energy. But we humans are eager to amplify all those tasks, and many more, with silicon.
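A minimal sketch of that wetware arithmetic, using the 12-watt figure above and some assumed task durations (the durations are ours, chosen only for illustration):

```python
# The wetware arithmetic behind the figures above; task durations are assumed.

BRAIN_WATTS = 12
FUEL_KWH_PER_GALLON = 33.7

# A one-hour, 30-mile commute with (generously) half the brain on navigation:
commute_kwh = (BRAIN_WATTS / 2) * 1 / 1000               # ~0.006 kWh
commute_mpg = 30 / (commute_kwh / FUEL_KWH_PER_GALLON)   # ~170,000 mpg-equivalent

# A composer working full-brain, around the clock, for ~35 days on a score:
score_kwh = BRAIN_WATTS * 24 * 35 / 1000                 # ~10 kWh

# A dozen researchers, five years, ~2,000 working hours per year each:
drug_kwh = 12 * BRAIN_WATTS * 5 * 2000 / 1000            # ~1,400 kWh

print(round(commute_kwh, 3), round(commute_mpg), round(score_kwh), round(drug_kwh))
```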

The voracious appetite of “cognitive” silicon is not news to AI engineers. FedEx’s project to develop an autonomous wheeled robot for local delivery builds on Dean Kamen’s (human-guided) iBot, a sophisticated self-propelled stair-climbing wheelchair. But as Kamen recently noted, the energy appetite of the silicon navigation needed to convert that platform into a delivery drone will “reduce the range substantially.” We can expect fuel-burning autonomous local delivery robots to be deployed on sidewalks -- they’re already common in warehouses -- long before autonomous cars dominate highways.

Yes, we are aware of pundits proposing that robocars will nonetheless save energy. For the energy-obsessed, that’s the technology’s principal rationale. Sure, robots can drive more efficiently. But citizens will (one day) use robocars because of new conveniences, comfort, and safety, all attributes that increase energy use: e.g., more frequent and faster trips in vehicles that are bigger (think sleeping) and heavier. Fevered analyses purporting to find energy savings assume the inverse behaviors: slower driving and shared rides in smaller vehicles. Won’t happen. Then consider that robocars will enable millions to ‘drive’ who otherwise can’t (young, old, infirm, etc.). Analyses based on how free people actually behave with robocar-class conveniences find that such convenience “induces” (energy-using) behaviors. An EIA analysis estimates autonomy would induce a rise in U.S. vehicle-miles nearly equal to twice those driven by all Californians today. But we digress.

As for AI’s fuel appetite when performing cognitive tasks of a higher order than driving, we know that energy is gobbled at epic levels when supercomputing is employed to begin plumbing the depths of nature and simulate molecular behavior “in silico” (an actual term, by the way). In order to accelerate discovery, a research team running, say, a total of just a dozen supercomputer simulations would use the energy equivalent of flying a jumbo jet to Asia. In a future with tens of thousands of AI supercomputers continually simulating and emulating nearly everything, the collective fuel use will easily match all global aviation. (Today’s conventional Internet already does: see Part 1 in this series.)
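That comparison is easy to sanity-check at the order-of-magnitude level. The figures below are assumptions of ours, not measurements: a roughly 10 MW machine, simulations running about 12 hours each, and a long-haul jumbo-jet flight burning on the order of 150 tonnes of jet fuel.

```python
# Order-of-magnitude check with assumed figures: a dozen long simulations on a
# ~10 MW supercomputer vs. one long-haul jumbo-jet flight.

MACHINE_MW = 10
SIM_HOURS = 12          # assumed wall-clock time per simulation
N_SIMS = 12

compute_mwh = MACHINE_MW * SIM_HOURS * N_SIMS          # ~1,440 MWh

JET_FUEL_TONNES = 150                                  # long-haul fuel burn
KWH_PER_TONNE = 12_000                                 # ~43 GJ per tonne of jet fuel
flight_mwh = JET_FUEL_TONNES * KWH_PER_TONNE / 1_000   # ~1,800 MWh

print(compute_mwh, flight_mwh)   # same order of magnitude
```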

But as much shock and awe as there is already over the promise and perils of AI – we’ve lost track of the number of books on AI, of both flavors – we’re still in early days of deploying the technology at scale. And as we build out AI infrastructure, the energy consequences will be elevated: as efficient as computing has become and will yet become, it is still a massive fuel hog compared to ‘natural’ intelligence. Facebook has already flagged AI as a “major culprit” in the annual doubling of its datacenter power consumption. And that’s just to deploy today’s nascent AI to perform economically useful but trivial social media and advertising missions.

So far, the energy implications remain largely out of sight, the focus of engineers and the “infrastructure masons” of the 21st century. That’s not surprising. Not a soul worried about the eventual aggregate energy cost of the Internet circa 1989, when the number of “hosts” first blew past 100,000. (There are over one billion hosts now.) Similarly, in 1958, when Pan Am began passenger jet service, or in 1908, when Ford introduced the Model T, no one discussed, much less forecast, the ultimate fuel use entailed by global flying and driving. In all cases, the relentless pursuit of engine efficiency was critical to unlocking the commercial viability of new infrastructures that offered profoundly new capabilities for society. (See Part 3 in this series.)

While our intent here is to explore the physics of AI, a brief digression about psychology may be in order, since polls reveal a lot of Americans seem to fear ‘the machine.’ For that we can blame the culture, mythology and Hollywood’s long love affair with robots running amok. Then there’s the entire industry of pundits and consultants claiming AI means the end of work. For a rebuttal of such claims, see my 2018 book “Work in the Age of Robots.”

Of course computers outperform people in many tasks. Cars also outperform people and horses. Labor-saving is why humanity keeps inventing machines. But to make an important point about AI: cars are not artificial horses any more than jets are artificial birds, or AI is actually “artificial intelligence.” In the Venn diagrams of real-world functionalities, these all overlap but are profoundly different. Of course AI will, as have all machines over history, change the nature of work, and overall employment and the economy will keep growing.

Meanwhile, back to the subject at hand: AI’s (eventual) fuel appetite. We note that there is another industry of analysts obsessed with seeing energy use as a problem. But the fuel use of infrastructures – from highways and airways to illumination and agriculture -- is a measure of the value and ubiquity of the services those systems provide. With 4 billion people now connected to the Internet, that particular infrastructure is, essentially, now built out, and its fuel use will rise organically instead of explosively as happens in early days. AI infrastructure, on the other hand, is in its early days and is as different from the computing we know as the Internet was from the telecommunications infrastructure that preceded it.

The essential difference between conventional computing and AI was best captured nearly 60 years ago by a prescient MIT computer scientist, J.C. Licklider, who wrote: “Present-day computers are designed primarily to solve preformulated problems or to process data according to predetermined procedures.” The future, he forecast, would “bring the computing machine effectively into the formulative parts of technical problems” and the “processes of thinking that must go on in ‘real time,’ time that moves too fast to permit using computers in conventional ways.”

Put differently, there is a profound difference between a computer program that can add up columns in a spreadsheet, no matter how big and complex, and software that is continuously aware of and responding to (or simulating) information or images of physical events in the real world, in real time. That difference manifests itself both in the nature of the underlying silicon logic chips, and the structure of systems using those new kinds of “machine learning” chips instead of conventional “information processing” ones.

A key indicator of the transition from the era of task-based computing to cognitive computing is in the names and performance metrics of the respective silicon processing units. Rather than a central processing unit (CPU) measured by the speed, in GHz, of linearly executed logic operations, AI is anchored in graphics processing units (GPUs and similarly named alternatives) where speed is measured in terms of running massively parallel comparisons of, say, graphic images. A CPU may look at as many as 1,000 images per second, while a GPU can blow past 25,000 images per second. GPUs are also 100 times more energy efficient per image processed, but the 100 W CPU is replaced by a 300 W to 500 W GPU. And, because the real world is a prolific producer of data, the result is a net rise in power needed. (Itself a very old story; again, see Part 3 in this series.)
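As a rough illustration of that trade, here is the throughput-per-watt arithmetic using only the representative figures above; the per-image efficiency multiple varies widely by chip, precision and workload.

```python
# Images per second per watt, using the representative figures above; exact
# ratios vary widely by chip and workload.

cpu_images_per_s, cpu_watts = 1_000, 100
gpu_images_per_s, gpu_watts = 25_000, 400   # midpoint of the 300-500 W range

print(cpu_images_per_s / cpu_watts)   # ~10 images/s per watt (CPU)
print(gpu_images_per_s / gpu_watts)   # ~63 images/s per watt (GPU)
print(gpu_watts / cpu_watts)          # ...but ~4x the power per socket
```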

Thus a trunk full of GPUs and CPUs (the latter still needed for critical non-image logic and control functions) can easily balloon to a 2,000 W load that must run constantly, not least because, self-evidently, a “sleep” mode is not desirable with a navigation computer. That 2 kW load constitutes 10 to 20% of the power an engine needs to keep a car rolling at cruise.
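The cruise-power fraction is a one-line division; the power needed to keep a typical car rolling at highway speed is our assumption (roughly 10 to 20 kW).

```python
# The cruise-power fraction in the paragraph above; cruise power is assumed.

AI_LOAD_KW = 2.0
for cruise_kw in (10, 15, 20):
    print(f"{cruise_kw} kW cruise: AI load = {100 * AI_LOAD_KW / cruise_kw:.0f}%")
```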

Meanwhile, a building full of some 27,000 GPUs and 9,000 CPUs, known as a supercomputer, consumes 12 MW. That’s enough power to drive a cruise ship. The 12 MW Summit, currently the world’s most powerful supercomputer, came on-line last year, enabling America to recapture from China the crown for the #1 machine. Once supercomputing becomes a general-purpose tool, we’ll see far more supercomputers than cruise ships.

But, setting aside the enthusiasms of Elon Musk and his acolytes about robocars arriving any day now, and the serial hyperbole about AI taking all the jobs, the promises (never mind the perils) of AI are further in the future than claimed in popularized accounts. For example, when it comes to running a physics-based model of even the simplest molecular behaviors, the best supercomputers churn for hours, even days to simulate mere seconds of reality. In order to simulate “virtual” human organs to test drugs in silico, we need machines a thousand-fold more powerful.

As for robocars, we don’t have the space here to do justice to the current hyperbole. Those who follow the technical journals instead of the popular press are well aware of the manifold challenges remaining before silicon can emulate human-level vision, navigation and decision-making in all conditions. Serious studies, rather than breathless prose, find that today’s robovision is often wrong, confused or easily spoofed. Even optimists who are technically knowledgeable, as opposed to headline-seeking pundits, don’t see practical autonomy for over 15 years. (For a particularly good perspective on all this, read roboticist Rodney Brooks’ essay, “The Big Problem with Self-Driving Cars Is People.”)

Quite aside from the software challenges and the added expense of the sensors alone – a proper, full suite still costs more than the car itself -- consider the interesting data storage issue. Both for on-going “machine learning” and for liability reasons, the navigation data generated by a robocar will likely be stored somewhere. A single car will generate several petabytes of data a year, which, as one analyst pointed out, will cost “fifty times as much as the vehicle” using today’s digital memory. The real world creates a lot of information. In due course, memory will get cheaper and the AI engines smarter, but we’re not there yet.
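The analyst’s claim is easy to sanity-check under assumed figures (ours, not the analyst’s): call it 2 petabytes of sensor data per car per year, retained in cloud object storage at roughly $0.02 per gigabyte per month.

```python
# Sanity check of the storage-cost claim under assumed figures: ~2 PB of data
# per car per year, kept in cloud object storage at ~$0.02 per GB per month.

PB_PER_YEAR = 2
GB_PER_PB = 1_000_000
COST_PER_GB_MONTH = 0.02
CAR_PRICE = 30_000            # assumed vehicle price

annual_storage_cost = PB_PER_YEAR * GB_PER_PB * COST_PER_GB_MONTH * 12
print(annual_storage_cost)               # ~$480,000 per car per year
print(annual_storage_cost / CAR_PRICE)   # ~16x the car's price, every year
```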

Even so, we do know one thing about technology progress in general and with computing in particular: silicon capabilities advance relentlessly, perhaps not the way popularized by Hollywood, but meaningfully nonetheless. It’s inevitable that tomorrow’s AI will make today’s clunky Siri and Alexa and episodically lethal robocars look positively primitive. We don’t have to speculate about near-term leaps in AI capability. What is next in the underlying logic components, and the machines that will be built with them, is already on view.

Underway now: a land-rush of more than two dozen new companies each developing “deep learning” chips, the post-Moore’s-Law class of AI-specific silicon. Credit goes to Nvidia for kicking off the race for AI chip dominance. One of AI’s pioneers, Yann LeCun, who started at Bell Labs and now does research at Facebook, sees a chip “renaissance” in pursuit of silicon that can “learn” rather than merely calculate. For manufacturers, this represents a rare turning point, a “third wave” in the silicon age: the first began with discrete transistors in the 1950s, the second with microprocessors in the early 1970s, a few years after Intel was founded. Next: the age of AI processors.

The historical precedents are instructive. Today’s binary-logic-based computer chips find their origins in the 1937 master’s thesis of American mathematician Claude Shannon – the ‘father’ of information theory. But it took silicon engineers a few decades to begin to take full advantage of the math. Fast-forward a half-century to 1986, when Geoffrey Hinton, an English-born Canadian cognitive psychologist and computer scientist, co-authored a seminal paper on the concept of a learning algorithm. Hinton, widely seen as a “godfather” of AI, would also have to wait a few decades for the silicon hardware to catch up. Now it has.

Manufacturing AI chips will likely account for 80% of all the growth in semiconductor logic sales over the coming half-dozen years, becoming a nearly $70 billion annual market – just for the chips. No surprise that Intel, the dominant CPU vendor, is mounting its own challenge to Nvidia with its Nervana AI chips, or that it spent $15 billion to buy Mobileye, the Israeli maker of automotive vision silicon that competes with Nvidia.

One British upstart, Graphcore, has raised $200 million and suggests another name for AI chips: Intelligence Processing Units (IPUs). Graphcore has named its general-purpose IPU the Colossus, which, at 300 W with a 125 teraflop capability, offers the kind of speed only supercomputers could boast about a decade ago. (Colossus, for those who are not students of history, was the name of the world’s first programmable electronic computer, built in 1943 in England -- two years ahead of America’s ENIAC, the latter usually referred to as the first.)
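For a sense of scale, compare flops per watt against a petaflop-class machine of roughly a decade ago (about 1 petaflop at 2.3 MW; our benchmark choice). The precisions differ, since AI “teraflops” are typically lower precision than supercomputer benchmarks, so treat the comparison as illustrative only.

```python
# Flops-per-watt comparison: the 300 W / 125 TF chip above vs. a roughly
# 2008-era petaflop supercomputer (~1 PF at ~2.3 MW; our benchmark choice).

chip_gflops, chip_watts = 125_000, 300
old_hpc_gflops, old_hpc_watts = 1_000_000, 2_300_000

print(chip_gflops / chip_watts)         # ~420 GFLOPS per watt (AI chip)
print(old_hpc_gflops / old_hpc_watts)   # ~0.4 GFLOPS per watt (2008-era machine)
```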

And, while most eyes have been on Google’s tribulations with regulators, its engineers have been busy developing an in-house-designed AI chip – Google calls its “inference” and learning engine a Tensor Processing Unit (TPU). As a sign of the times, Google introduced water cooling for its TPUs, something datacenter operators have been trying to avoid for years, even if it is common in supercomputers. Despite the fact that TPUs and GPUs are far more efficient per image or idea processed, the sheer volume of real-time processing invariably leads to both more peak power and more always-on power.

The new generation of GPU-class silicon, combined with the still-essential, hypertrophied CPUs, will show up not only in end-use or so-called “edge” devices everywhere, but also, shortly, inside the next great leap in supercomputers. The Department of Energy has now issued contracts to three teams to each build an exascale supercomputer, with silicon from chipmakers Nvidia, Intel and AMD each appearing in one of the machines.

Only a half-dozen years ago, engineers worried that the next leap in supercomputing – a global race wherein China is now a world-class player – would result in a single machine demanding 500 MW. Such was the math to get to an exaflop using the technology of the then top-of-class 5 MW, 20 petaflop supercomputer. Had that trajectory held, ubiquitous supercomputing would have been impossible; power demands of that magnitude prohibit proliferation.

But, courtesy of on-going silicon efficiency gains, in particular from GPUs, a more than 10x jump in compute horsepower over today’s leader will be achieved with a ‘mere’ three- to four-fold rise in electric power demand. Frontier, the first exascale machine, will likely clock in at ‘only’ 40 MW. And that won’t be the end of it. The race to the next plateau, zettascale machines, is already being plotted. Perhaps those will ‘only’ need 100 MW each.
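The efficiency arithmetic, in round numbers (the Frontier figures are assumptions pending final specs):

```python
# Rough numbers: Summit at ~150 sustained petaflops and ~12 MW, versus an
# assumed ~1.5 exaflop, ~40 MW Frontier.

summit_pf, summit_mw = 150, 12        # sustained petaflops, megawatts
frontier_pf, frontier_mw = 1_500, 40  # assumed sustained petaflops, megawatts

print(frontier_pf / summit_pf)                                # ~10x the compute...
print(frontier_mw / summit_mw)                                # ...for ~3.3x the power
print((frontier_pf / frontier_mw) / (summit_pf / summit_mw))  # ~3x the flops per watt
```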

Meanwhile, in the here-and-now, you don’t have to queue up for processor time on Frontier; Google has just announced that you can rent, by the drink, a 100-petaflop capability in its cloud, using its TPU-driven servers. Until just over a year ago, 100 petaflops constituted the top performance of the (then) world’s fastest supercomputer, in China. (America’s Summit recaptured the crown with 200 petaflops in June 2018.) We thus see the beginning of the democratization of true general-purpose AI in the scaling that will come from the competitive rush by all Cloud providers to offer AI as a service.

We already know why so many businesses and researchers lust after access to such astounding capabilities. It’s not about speech and facial recognition, which are considered “low precision” tasks for AI and thus require relatively little compute horsepower. But simulating driving and flying conditions, whether to train or navigate autonomous vehicles, or reading x-rays or simulating drug interactions in human cells, are high precision tasks. And the latter require petaflops as a starting point, and exaflops to be useful.

With exaflops for everyone, the era of AI will finally get into full swing. And never mind better forecasts of natural events from weather to earthquakes; you can bet that finance ‘quants’ will eagerly pursue precision AI in their Sisyphean hope of beating the “invisible hand” of the market. Aside from finance, the earliest applications are likely to come in research broadly. New classes of analytic instruments, from microscopes to telescopes, generate petabytes of data that are nearly impossible to ‘process’ without petascale power.

We already see companies using AI to design new classes of metal alloys optimized for next-generation 3D metal printers. There’s also a rising tide of AI startups bringing radical gains in production and economic efficiency to manufacturing, and to the oil & gas sector. (The latter I first wrote about in Shale 2.0 and here, where, in full disclosure, I’m a partner in a venture fund focused on such startups.) IBM recently teamed with Nvidia to demonstrate that GPUs enable oil reservoir simulation with one-tenth the power and one-hundredth the floor space previously required by supercomputers.

The brass ring for AI opportunities is doubtless in healthcare, medical research and diagnostics. Estimates of a $20 billion AI healthcare market by 2024 are almost certainly an egregious undercount. Microsoft has relentlessly and brilliantly integrated its medical and industrial Virtual Reality tools with its Cloud-based “Cognitive Services.” To appreciate how big a deal all this will be for medicine, physician and scientist Eric Topol’s new book, “Deep Medicine,” is the place to go.

At a high level, Gartner forecasts that the overall global value of business activities that are AI-derived – remember that Amazon is essentially CPU-derived – will reach over $4 trillion a year in just a few more years, and grow from there. Given the physics of energy, one could measure that economic growth in kilowatt-hours per dollar. But that’s the wrong metric, even though we have become a society obsessed with measuring how much energy we consume.

For computing cognoscenti, the iconic measure of progress is Moore’s Law, named for Gordon Moore, cofounder of Intel, who first observed the growth rate of transistor density on silicon chips. It was Moore’s Law that brought the power of a 1980s mainframe down into handhelds that sip electrons. Moore’s Law simultaneously delivered datacenters at the other end of the spectrum: warehouse-scale buildings filled with silicon and inhaling power at steel-mill levels. In recent years there’s been a lot of talk in the trades about the end of Moore’s Law scaling; it’s slowing, to be sure, but far from done. However, we’re now at the beginning of yet another, different age of Moore-like scaling, this time for AI chips.

It bears noting that scaling laws are the norm both in nature and in engineered systems. In fact, relevant to our discussion, consider the rate at which vacuum tubes improved. Invented by John Fleming in 1904 while working for the Marconi Radio company, the vacuum tube was the enabling component of the first electronic age, giving us the radio (1920 saw the first commercial radio station), stereos, TVs and the first computers. ENIAC’s room full of 17,000 vacuum tubes gobbled 170 kW.

As it happens, vacuum tubes improved at a rate identical to Moore’s Law scaling, but with a different metric. Moore’s Law counts transistor density. The metric for vacuum tubes, power density (in effect, electron density), saw a doubling every two years for 70 years; that’s the same as the Moore’s Law rate for transistor density in CPUs over the past 50 years. And like the CPU, the ever more power-dense vacuum tube yielded a proliferation of both end-use devices (portable radios and stereos) and more massive central machines (broadcast stations). History has not assigned a name to the rate of vacuum tube progress, so I propose we call it, in hindsight, Smil’s Law – because my friend, energy polymath Vaclav Smil, was first to point this out.
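The cumulative effect of that kind of doubling is easy to underestimate; two lines of arithmetic make the point.

```python
# What "doubling every two years" compounds to over the two eras above.

def cumulative_gain(years, doubling_period_years=2):
    return 2 ** (years / doubling_period_years)

print(f"{cumulative_gain(70):.1e}")   # vacuum-tube power density, ~70 years: ~3.4e10-fold
print(f"{cumulative_gain(50):.1e}")   # CPU transistor density, ~50 years: ~3.4e7-fold
```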

From Smil’s Law to Moore’s Law, what comes next? Since there is also no name for the scaling law for the progress of AI engines, permit us to propose one here as well. In honor of the engineer who, as CEO of Nvidia, ignited the AI era, what comes next will follow Huang’s Law.

Back in 1993, when Jen-Hsun “Jensen” Huang co-founded Nvidia, he foresaw video games as presenting the “most computationally challenging” problems. Conventional CPUs were just not up to the data intensity and velocity required for visual rendering in ways that both looked real and moved realistically. It was the revolutionary Nvidia GPU that could render bits into virtualizations of reality. And GPUs that make virtual reality seem real turn out to be precisely what’s needed for AI’s learning and inference to simulate actual reality. Just as Nvidia is neither the only nor the first GPU company, Intel was neither the only nor the first CPU maker. But history has a way of crediting the company that, in hindsight, provided the spark.

The capabilities of Nvidia’s GPUs, as well as those of its competitors’ similarly and differently named AI engines, have been scaling at a rate that follows Smil’s and Moore’s Laws. But, again, we need a new metric. Perhaps it will be some measure of inference speed per chip, because for AI to be useful it ultimately has to deliver, in real time, conclusions about the real world based on evidence (data) and reasoning; i.e., the textbook definition of “inference.”

And, as with CPUs, the inexorable improvement in power efficiency – far more inferences per second per watt – will end up delivering explosive growth at both ends of the spectrum. We can expect datacenters to adopt AI-centric exascale supercomputing ‘pods’, each 20 to 40 MW, as the norm, not the exception. They’ll do that because every researcher, every physician, every business, and eventually every citizen will want access to the services that AI can provide.

All this will be on top of the already explosive growth in conventional datacenter power demands. And when it comes to the aggregate power appetite of datacenters, note that published estimates we read regarding today’s Internet energy use are actually based on the state of the Cloud nearly a half-dozen years ago – that’s ancient history in both dog and Internet years. The emergence of “machine learning” services as a new kind of hardware-centric enterprise is entirely absent from today’s forecasts about Cloud energy use. Demand for AI-class services will grow faster than the underlying reduction in power per AI-inference – just as happened with CPU-centric services.

Moore’s Law CPUs also gave us the proliferation of smartphones. Perhaps only one person at the dawn of the CPU age actually imagined that over two billion people would one day each own a handheld device with the compute power of 10,000 mainframes. In 1974, only three years after Intel’s IPO, standing for an interview in a noisy computer room, science fiction writer Arthur C. Clarke predicted such a collapse in size and the consequent ubiquity of computing we see today. (You can see the interview on YouTube of course).

Odds are that Clarke actually read what Gordon Moore first wrote a few years earlier, in 1965, about the rate of computer chip progress. Clarke also predicted the emergence of datacenters, but that wasn’t particularly prescient given that he was standing inside a proto version of such a thing in that interview. So, if Clarke were alive today, what would he say Huang’s Law might unleash?

No imagination is needed to forecast more, bigger, smarter datacenters; it’s already happening and will be accelerated by AI pods. Both innovation and profits will also accelerate from researchers, hospitals and businesses connected to AI-infused datacenters.

As access to AI in the Cloud becomes easier, courtesy of ever more ubiquitous, seamless and blazingly fast wireless networks, smartphones will of course get much smarter. But that’s already happening too. There is already a bubble-inducing rush of investors and startups chasing AI-centric features for smartphones; it’s the forecast du jour. But what entirely new product or service might we see on the edges, echoing the earlier impacts of Moore’s and Smil’s Laws?

History provides guidance. We know what Moore’s Law has wrought on the edge: smartphones. The Smil’s Law vacuum tube analog: home radios.

By 1927, one-third of all household spending on furniture went to radio sets. (That ratio, not coincidentally, is roughly the same as today’s share of household spending on the wireless Web.) As for velocity, the trajectory was every bit as fast as smartphones: by 1927, ten times as many homes had radios as just six years earlier, a share that jumped 10-fold again before WWII broke out, by which time nearly every home had a radio. Radio broadcast stations, the functional equal of today’s datacenters, proliferated by the thousands. (Much of this history, and more, is delightfully recounted in Bill Bryson’s 2013 book, “One Summer: America 1927.”)

Also like Moore’s Law, as vacuum tube technology followed the Smil’s Law curve, it propelled wealth and worries in those days too. Stock in RCA -- a fusion of the Intel and Facebook of its day -- was the most heavily traded on the NYSE and rose over 10,000% in the half-dozen years leading up to the 1929 market crash. Newspaper advertisers and their money rushed into radio. And the government, eager to control the new monopolists of the radio waves, created the Federal Radio Commission in 1927 (later to become the FCC). Politicians embraced radio quickly. As Governor of New York, Franklin Roosevelt instituted a weekly “fireside chat” radio broadcast in 1929 to talk directly to the people, bypassing pesky reporters. People back then gushed about how, with radio, “isolation had been broken, knowledge and leisure were being flattened.” Today we have headlines about how “Facebook Has Flattened Human Communication.”

All of the characteristic hyperbole and anxieties of tech-driven disruption were much debated then too, including worries about the pernicious impact of intrusive radio broadcasts on social norms, and about the ease and velocity of propagation of fake news. In one of history’s most amusing (in hindsight, at least) unintended examples of the latter, a 1938 radio broadcast by Orson Welles, adapting H.G. Wells’ “The War of the Worlds,” caused actual panic across America over the ‘news’ of a Martian invasion. As they say, plus ça change…

All this history is relevant for both perspective and insight on what may come next in the age of AI propelled at Huang’s Law rates. Conventional wisdom says the next ‘big thing’ AI will enable is the self-driving car, the robocar. Of course it will. Nearly all the AI chip vendors, especially Nvidia, are in a full-on war for performance and market share; thus, VCs are minting “unicorns” valued at billion-dollar levels.

Setting aside our earlier cautions about the unsolved and significant real-world safety (not to mention integration and liability) issues, it is nonetheless clear that robocars will arrive in due course. A robocar, however, isn’t nearly as revolutionary as going from newspapers to radio, or from mainframes to smartphones. A robocar is still a car. The silicon brain will stimulate more driving, create more conveniences, add more safety – in short, continuing the progress already in play with the CPU-class silicon that’s been invading the automotive platform for decades.

If Clarke were alive today, an interview about the future of AI wouldn’t take place in Waymo’s offices, but more likely at the headquarters of Boston Dynamics. There you find dog-like robots with eerie biomimicry, and walking, back-flipping anthropomorphic robots exhibiting the kind of capabilities that imagineers and sci-fi writers have speculated about for eons. AI will, finally, unleash the age of robots.

Robots will help not just in warehouses, but in dangerous labor-centric tasks (for end-of-work worriers: there’s now a permanent shortage in that labor pool), in agriculture, inspection, security and emergencies, in health care, even as companions for the elderly and – inevitably – in entertainment too.

Consider a couple of bellwethers of the inevitability of this transformation. First, Boston Dynamics says it plans to start commercial sales this year of its dog-like autonomous robot, the SpotMini. Check out the video; it’s a quantum leap beyond clunky, awkward, toy-like simulacra of robots.

And second, Qualcomm, a serious company interested in real-world, near-term markets, just released a standardized developer kit, its Robotics RB3 platform, to help engineers design robots of all kinds for any application. Qualcomm’s chips are already inside myriad autonomous machines. And, as with analogous developer kits for smartphones, you can expect designs to use onboard AI along with connectivity to more powerful Cloud-based AI. That’s hardly a new architecture; it’s what happens when you use your smartphone for navigation or to ask a question.

Not all robots will mimic humans and animals, especially not the ones that come first. The burgeoning field of autonomous machines includes wheeled and flying drones, and cobots anchored on production lines or in operating rooms assisting surgeons. (All of that is a subject for another series.) But before we see the first consumer robot equivalent to an RCA radio or an iPhone, FedEx or its competitor Amazon will likely bring to market consumer-facing delivery bots.

The robot revolution is inevitable now that the engines of AI are on a Huang’s Law curve. Perhaps history will see 2018 as the turning point. It was the year of the first general-purpose AI-centric supercomputer, Summit, packed chock-a-block with Nvidia GPUs. And it was the year Nvidia introduced its Turing-class GPUs with a blazingly fast, if unfathomable, 2 petaflop capability: hence the need for new metrics. The impacts and growth to come will, in hindsight, seem as obvious as those of the 2007 iPhone or the 1927 radio do now.

It bears noting at this point that dystopian fears of computers soon becoming smarter than people date from the dawn of computing. Actually, the idea dates to antiquity, with the Greek myths of intelligent automatons. Even in pre-ENIAC days, at the 1939 New York World’s Fair, we saw Westinghouse’s stunt robot walking around with a recorded voice that would say: “My brain is bigger than yours.” (Westinghouse wanted to show off its automated switchgear used for electric grid controls; and yes, Virginia, “smart grids” are very old news.) It was soon after that Alan Turing, one of the pioneers of modern computing, famously speculated about machine intelligence; hence the “Turing Test.”

The truth, however, is that it is hard to imagine exactly how the era of symbiotic and cognitive computing will unfold, whether from AI infused remotely in the Cloud or as ambulatory digital assistants. But one can confidently predict that, as has happened before, there will be entirely new classes of businesses and companies -- and jobs. The implications are bullish for the United States, by the way, given that so many American universities, innovators and firms are at the epicenter of every aspect of the AI infrastructure revolution.

Which brings us full circle to the derivative energy implications.

Untethered robots, whether winged, wheeled or walking, are bound by the same energy architecture as we mere humans. About 20% of the body’s on-board fuel budget is devoted to the brain, and another 10 to 15% to the digestive system that converts raw fuel into useful calories; a large share of the rest goes to locomotion. Transport uses more energy than thinking. A similar brain-to-transport energy ratio will hold for both Cloud-based and ambulatory AI machines.

While Cloud-based exascale AI is itself stationary, there is nonetheless a transport cost associated with getting the data that is the digital fuel AI feeds on. Energy is consumed by the network and sensor systems that collect and move data from the field to the brain. Already, at today’s zettabyte level of data transport, global telecom networks use a comparable, if not greater, amount of energy than the datacenters themselves. (See Part 2 in this series.) And data traffic is on track to grow at least 100-fold in the coming decade or two.

As for ambulatory AI, the transport cost of robots is of course obvious. As with the human body, propulsion is the energy hog. How much fuel will be consumed by all the power systems of robots with wheels, wings and legs that will be used to deliver packages, assist construction workers and nurses, or accompany an infirm or elderly person home?

There’s a quiet race in the materials and mechanical science domains to develop breakthrough propulsion and actuators. But every existing and imaginable artificial system is far less efficient than nature’s bio-system. A robot moving a pound, never mind thinking about where to take that pound, will use more energy than a human.

All of that, as with robocars, will constitute a net new source of energy demand the world has never seen. How much? One might imagine – Clarke would have – hundreds of millions, perhaps billions, of self-propelled automatons of every class. That future of AI-infused Clouds and AI-embedded robots will add far more energy demand to the world than global aviation does.

P.S.

Because data generation is growing faster than we have words to describe it, in Part 4 of this series we explored the nomenclature for the post-zettabyte era, noting the delicious possibility of new numerical prefixes like brontobytes and geobytes, one thousand and one million fold bigger, respectively, than yottabytes -- the latter being the last official big-number prefix, at one thousand zettas. Apparently an official process to come up with acceptable nomenclature may now be underway. With predictable bureaucratic pedantry, one proposal officially submitted to the Paris-based International Bureau of Weights and Measures (BIPM) suggests turgid names like “ronna-byte” or “quecca-byte,” rather than the already popular and delicious brontobyte and geobyte. Let’s hope the BIPM sees an uprising of the imaginative, preserving the possibility of playful names.

Mark P. Mills is a senior fellow at the Manhattan Institute and a McCormick School of Engineering Faculty Fellow at Northwestern University, and author of “Work in the Age of Robots,” just published by Encounter Books. Support for the research in this series came in part from the Forbes School of Business & Technology at Ashford University, where Mills serves on the Advisory Board.
