1. The First Law of Zero: Computing

Chunka Mui
13 min read · Jul 24, 2024


The Laws of Zero

Seven key technological drivers of mankind’s progress are heading toward zero cost. We think of these cost trends as the Laws of Zero. If something is free, you can throw as many of those resources as you want at a problem. We may struggle with the idea of a cost heading to zero (ZERO!!!), but the trends toward rapid declines in cost are real and powerful, and will remove many constraints. If we can understand what zero cost really does, we can design a remarkably different and better future.

As we continue the weekly serialization of “A Brief History of a Perfect Future,” by Paul Carroll, Tim Andrews and me, we outline the Laws of Zero in general and dive into the basis for the First Law of Zero, Computing. Yes, there will always be costs for computing. But the cost will be so low and the capability so high that, if we’re planning from today’s perspective, we can almost imagine that computing power is free and that we can throw a nearly infinite amount of it at any problem we want to address in that Future Perfect.

Enjoy, and please help us spread the messages of hope, agency, and responsibility by liking, commenting, and sharing this series with your network.

PART ONE: THE LAWS OF ZERO

While people often cite the drastic improvement James Watt made to the steam engine in 1765 as the beginning of the industrial revolution and the march to modernity, we don’t often think about a key intellectual force behind all this progress: the concept of zero.

These days, we use the number zero all the time, but history shows that the notion of counting nothing was far from obvious. Almost all counting systems started with the number one; why are you counting if there’s nothing there? Look at how rarely zero was discovered (some say “invented”) and at how long it took to flesh out the concept. The idea has arisen only three times among all the world’s civilizations.

First were the Sumerians, in roughly 2000 BC. They eventually spread a nascent form of the idea, but they didn’t spread it that far — Greek and Roman engineers used the concept[1], but even the ancient Greeks’ great mathematicians never codified it. The Mayans followed in about 350 AD, but their idea never went beyond Mesoamerica. Finally came India, where the idea percolated for some time. Aryabhata did pioneering work in the early 500s, and Brahmagupta codified the use of zero, including a symbol for it, in 628 AD. This time, the idea took. It spread to the Arab world, whose mathematical system has become the basis for what we do today. (Hence, “Arabic numerals.”) In the ninth century, zero allowed for the invention of al-jabr, which the English-speaking world knows as “algebra” and which spread during the Islamic conquests in subsequent centuries, including into Europe. The inventor of algebra, al-Khwarizmi, also gave us the algorithm, a step-by-step recipe for solving a problem that’s so important in today’s computing world.

Imagine if we were still stuck with the Roman system, without a zero. Divide MDCXLIV by XLVIII. Go ahead. We’ll wait. Or, imagine the language of today’s digital world, the binary code represented by ones and zeros — but without the zeros.
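To make the point concrete, here’s a small sketch (ours, not the book’s) that translates the Roman numerals into Arabic numbers and then lets positional notation, zero included, do the division:

```python
# A minimal illustration (not from the book): convert Roman numerals to Arabic
# numbers, then divide the modern way, something the Roman system itself gives
# you no straightforward procedure for.

ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Convert a Roman numeral to an integer, handling subtractive pairs like XL."""
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN_VALUES[ch]
        # A smaller value written before a larger one (the X in XL) is subtracted.
        if i + 1 < len(numeral) and value < ROMAN_VALUES[numeral[i + 1]]:
            total -= value
        else:
            total += value
    return total

dividend = roman_to_int("MDCXLIV")  # 1644
divisor = roman_to_int("XLVIII")    # 48
print(dividend, "/", divisor, "=", dividend / divisor)  # 1644 / 48 = 34.25
```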

As recently as 400 years ago, the notion of zero was still in its infancy. It turns out that the concept of nothing, while intuitive at one level, is hard to come to grips with in all its mathematical implications. René Descartes came along in the early 1600s and helped us visualize zero through the graphs we all use today, with x- and y-axes that meet at (0,0) and extend in both positive and negative directions — the graphs are known as Cartesian coordinates, in his honor. Then, in the second half of the 1600s, Isaac Newton and Gottfried Leibniz began independently imagining a form of mathematics that, among other things, explored the idea of dividing by zero. Not really dividing by zero, because that makes no sense — but what if you divided by numbers that got smaller and smaller and smaller, heading toward zero; what sorts of results would you get? There was certainly one key result: calculus. While dreaded by many high school and college students everywhere, calculus provided a new system for understanding the physical world.
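To make that idea concrete (our illustration, not the book’s): the slope of a curve such as f(x) = x² comes from dividing a shrinking change in output by a shrinking change in input, h, and asking what the ratio approaches as h heads toward zero:

```latex
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
      = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h}
      = \lim_{h \to 0} (2x + h)
      = 2x
```

The zero never actually lands in the denominator; the limit it anchors is what makes calculus work.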

So, when James Watt made his crucial improvement to the steam engine in 1765, about a century after calculus was invented, the intellectual horsepower was already developing in ways that would unleash the physical horsepower that Watt’s engines provided. Zero provided the anchor for basic calculations, and calculus was available for engineering, architecture, and complex calculations in so many other arenas, including those that have driven our biggest businesses and our economies. From Roman times through the start of the industrial revolution, economic growth had been so slow that an average European was only about twice as wealthy as an average subject of Julius Caesar had been, some two millennia earlier. In the 250 years since the start of the industrial revolution, that average European’s wealth jumped 13-fold[2] — and that calculation doesn’t adequately account for some things that can’t really be quantified, like indoor plumbing, the benefits of electricity, or the joy of the latest gadget from Apple. The intellectual structure based on the concept of zero played a major role in those leaps forward.

Zero will also play a key role in the Future Perfect.

Certain key drivers of mankind’s progress are heading toward zero cost. (Remember, your mileage may vary, and we’ll explain the caveats shortly.) If something is free, you can throw as many of those resources as you want at a problem. We may struggle with the idea of a cost heading to zero (ZERO!!!), but if we can understand what zero cost really does, we can design a remarkably different and better future.

We think of these cost trends as the Laws of Zero. The first few — for computing, communication, and information — will seem familiar, because we’ve all felt the trends for years or even decades. The fourth basically combines the first three laws with biological advances that will put genomics on a curve headed toward zero cost, with profound implications for our health. The final three laws are more speculative — with energy, water, and transportation, we’re not likely to get nearly as close to zero — but the trends toward rapid declines in cost are real and powerful and will make a lot of considerations, even ones involving time and distance, go away.

CHAPTER 1: The First Law of Zero: Computing

In 1965, a journal commissioned a forecast on progress in electronics, and, as unlikely as it might seem, that scientific paper largely laid the foundation for today’s digital world. In the paper, Intel co-founder Gordon Moore merely observed that the number of transistors in an integrated circuit was doubling every year. (The number of transistors is a rough proxy for the processing power of a chip.) Moore speculated that the pace could continue for at least a decade. That was enough.

What has come to be known as Moore’s law became the metronome for the semiconductor industry. That scientific paper went from being an observation to being almost a mandate, a self-fulfilling prophecy. The semiconductor industry would produce doubling after doubling after doubling, and anyone whose product or, eventually, business tapped into the growing power of electronics was on notice to be ready. The pace of the doubling has varied — Moore amended the time span to every two years, and Intel later put it at 18 months — but the power had been unleashed. Intel’s chips went from a few dozen transistors when Moore made his observation to billions today; Intel no longer even provides a count. That one observation became a road map for the entire computing world. Whatever the exact pace, the exponential improvement in power meant that capability was headed toward infinity, and the cost of a unit of computing power was headed toward zero.

The ability to throw computing power at any problem has driven the Information Age, putting that smartphone in your pocket that powers all your apps and provides your instant connections to friends and associates, as well as your access to all the information you can imagine. You aren’t really asking Siri or Alexa for help. You’re asking Gordon Moore.

While computing obviously isn’t free, it looks almost free from any historical distance. The latest smartphones contain more processing power than the top-end, multimillion-dollar supercomputers circa 1990 — which required a special export license from the U.S. government, because giving a foreign organization access to even one was thought to endanger national security. On Intel’s first microprocessor, in 1971, the 2,300 transistors cost $1 apiece; today, transistors cost less than a millionth of a penny each. That’s an improvement of a factor of more than 330 billion in price/performance in less than five decades. The same kind of improvement has occurred in anything with a chip in it.
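As a back-of-the-envelope check on that kind of compounding (our arithmetic, assuming the two-year doubling cadence mentioned above and the 1971 transistor count cited in the text):

```python
# A rough sketch of Moore's-law compounding. The two-year doubling cadence and
# the 1971 starting point are illustrative assumptions, not precise history.

transistors_1971 = 2_300      # Intel's first microprocessor, per the text
years = 2021 - 1971           # five decades of doublings
doublings = years / 2         # assume one doubling every two years

projected = transistors_1971 * 2 ** doublings
print(f"{doublings:.0f} doublings -> roughly {projected:,.0f} transistors")
# 25 doublings -> roughly 77,175,193,600 transistors: tens of billions, which
# is why nobody bothers quoting exact counts anymore.
```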

How powerful are exponentials? A classic video, made in 1977 for IBM, demonstrated the concept by showing a picnic filmed from overhead at a distance of one meter (10⁰ meters) and then panning out. By the time the “camera” gets to 10¹² meters, it’s outside our solar system. By 10¹⁶, the “camera” is a light year away. Then, the “camera” zooms back in — way in. By the time it gets to the 10⁻¹⁰ level (0.0000000001 meter), the “camera” is showing something an angstrom wide — about the width of two hydrogen atoms.[3] With just 27 steps in an exponent, you can go from a distance of two atoms to a light year.

The exponential improvement in computing power will continue, too. Yes, the pace of Moore’s law has slowed. Some say it’s even stopped because the need to keep increasing the density of the electronics on chips is running up against some fundamental principles of physics.[4] But there are still miles of runway for improvement. Among other things, while most devices have run off general-purpose chips, because they’ve had so much speed to burn, special-purpose processors for phones, laptops, tablets, etc. will be developed that can offload work from the central processor and run applications far faster than is possible today.[5] In addition, while chips have been designed as essentially flat surfaces, a third dimension can be added that would allow for layers of transistors and multiply the number on a single processor.[6] Meanwhile, moving so much of the work of computing to the “cloud” allows for removing all sorts of software bottlenecks that slow computing at the moment.[7]

Whole new forms of computing, such as quantum computers, promise improvements far beyond Moore’s law for some uses.[8] [9] Google already claims it’s achieved “quantum supremacy,” referring to using the bizarro-world characteristics of quantum physics to build a machine that can solve a problem no conventional computer could have solved in any reasonable time. Google says its machine performed a calculation in 200 seconds that would have taken the world’s most powerful conventional supercomputer 10,000 years.[10]

Other “laws” kick in, too, that pour gasoline on the increase in computing power. You may have heard of “network effects,” based on Metcalfe’s law.[11] It says the value of a network increases in proportion to the square of the number of users. In practical terms, that means that the internet wasn’t terribly useful when the first four computer systems were connected in late 1969[12] but that the billions of devices connected to the network today make the internet incredibly powerful. And network effects will continue to amplify the raw power of computers in innumerable ways as billions more devices connect with each other. By 2050, trillions of devices will be connected in a network, making the so-called Internet of Things (IoT) millions of times more important than it already is. (And it’s already incredibly important.)
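In code, Metcalfe’s law is a one-liner (our sketch; the constant k and the device counts are arbitrary stand-ins, since only the ratio between two networks matters here):

```python
# Metcalfe's law: a network's value grows with the square of its users.
# The constant k and the device counts are illustrative stand-ins.

def network_value(users: int, k: float = 1.0) -> float:
    return k * users ** 2

arpanet_1969 = network_value(4)                 # the first four connected systems
internet_today = network_value(5_000_000_000)   # billions of connected devices

print(f"Relative value: {internet_today / arpanet_1969:.2e}x")
# Roughly 1.56e+18 times more valuable, on this crude squared-users measure.
```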

Something called Bell’s law will also contribute. Formulated by our longtime friend and colleague Gordon Bell, the developer of the first minicomputer, the law roughly says a new form of computing will appear every decade. There were mainframes in the 1960s, minicomputers in the 1970s, personal computers in the 1980s, cellphones in the 1990s, early smartphones and tablets in the 2000s, and wildly impressive smartphones in the 2010s. In the 2020s, we’re likely moving into the Internet of Things, building on ever-smaller connected devices and on AI-driven voice input assistants such as Alexa, Google Home, and Siri, which not only take commands but can act as sensors in a myriad of ways, including detecting illnesses and providing home security. Progress won’t stop there, either. Robots could extend our presence: Just slap on some virtual reality goggles and “inhabit” a robot in your kid’s, parent’s or friend’s room (only with permission, of course). Computing could be implanted in our bodies: A chip right below the jaw and near the ear could capture our voices while vibrating in ways that our ears would easily pick up as sound. There’s even talk of chip implants that would plug directly into our brains and give us instant access to essentially all the world’s information. People may turn into a form of centaur, except that, instead of being half-human and half-horse, we’ll be half-person and half-computer.[13] Jeopardy will never be the same.

While artificial intelligence isn’t a device, you could say AI is the next form of computing. The improvement in just one part of AI — machine learning (ML) — is exponential and accelerating, because, within boundaries set by programmers, the machine trains itself at its own, wildly fast speed. For example, an AI developed by Google’s DeepMind went from knowing nothing about the game of Go to quickly beating the world champion. While AI is still a long way from many kinds of human thinking, including the Holy Grail known as “general intelligence,” AI is already approaching human reasoning in areas like speech recognition, image recognition, and understanding/generating written language. ML gets better with more/faster processors and storage produced by Moore’s law, and there’s no upper limit on the number/speed of processors or storage that can be applied.

Just as we can imagine exoskeletons making superhumans, ML can make super-intelligent reasoning possible. If you remember the movie Short Circuit, where an AI robot read a gazillion books and watched a gazillion shows to become super smart very quickly, you’re not that far off track.

To get a feel for how good AI is already, look at OpenAI’s GPT-3, which can mimic human writing. (A hands-on sketch follows the excerpt below.) For instance, a human provided this to the AI as a prompt:

“In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.”

The AI was asked to continue the story, and it added:

“They also were found to have perfectly coiffed hair and wore what appeared to be Dior makeup.

‘We were shocked to discover the unicorns,’ said anthropologist Daniel St. Maurice. ‘They were like nothing we had ever seen before. We had heard legends of the unicorns, but never thought they actually existed.’

When the scientists first arrived in the valley, the unicorns were surprised and startled by the presence of humans but were also excited. The unicorns welcomed the researchers and explained that they had been waiting for them for a very long time.”[14]
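For readers who want to try this themselves, here is a minimal sketch (ours, not the book’s) using the open-source Hugging Face transformers library and the freely downloadable GPT-2 model as a stand-in, since GPT-3 itself is reachable only through OpenAI’s hosted API. GPT-2 is far weaker, so expect a less coherent continuation.

```python
# A minimal prompted-text-generation sketch. GPT-2 via Hugging Face stands in
# for GPT-3, which is available only through OpenAI's hosted API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "In a shocking finding, scientists discovered a herd of unicorns living in "
    "a remote, previously unexplored valley in the Andes Mountains. Even more "
    "surprising to the researchers was the fact that the unicorns spoke perfect "
    "English."
)

result = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```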

Here’s where the combination of AI and computing power gets really interesting: The gains can be recursive. Computationally intensive techniques are allowing for much faster and better design for the processors and everything else about electronics. And we’re already at a “lather, rinse, repeat” stage. The underlying electronics get so much faster that they allow for huge improvements in artificial intelligence, which improve the design of the underlying electronics, which….

The notion is that we shape our tools, then our tools shape us[15] — and the effects will reach well beyond computers themselves. Bell Textron recently used virtual reality to iterate through countless possibilities and design a helicopter in less than six months, a process that previously took five to seven years.[16] Simulations for materials design that used to take a year have been shortened to 15 minutes; this creates opportunities to greatly accelerate breakthroughs in areas such as optics, aerospace, and energy storage, where the ability to design new components has greatly outpaced the ability to design the materials that are required to make them.[17]

The AI won’t stop at physical design, either — the AI will be able to design the AI. Sam Altman, CEO of OpenAI, describes this sort of super-recursion as “Moore’s Law for Everything.” AI gains the potential to keep adding and adding and adding computer-based intelligence to any task, driving the cost of performing that task toward that magic number: zero.

Yes, there will always be costs for computing — Apple et al. will make sure we pay — but the cost will be so low and the capability so high that, if we’re planning from today’s perspective, we can almost imagine that computing power is free and that we can throw a nearly infinite amount of it at any problem we want to address in that Future Perfect.

“Hey, Siri, please text Gordon Moore and say, ‘Thank you.’”

Other parts of this serialization:

A Brief History of a Perfect Future: Inventing the world we can proudly leave our kids by 2050 by Chunka Mui, Paul Carroll and Tim Andrews

Introduction: Inventing the Future

Part One: The Laws of Zero

Chapter 1 — The First Law of Zero: Computing

Chapter 2 — The Second Law of Zero: Communication

Chapter 3 — The Third Law of Zero: Information

Chapter 4 — The Fourth Law of Zero: Genomics

Chapter 5 — The Fifth Law of Zero: Energy

Chapter 6 — The Sixth Law of Zero: Water

Chapter 7 — The Seventh Law of Zero: Transportation

Part Two: The Future Histories

Chapter 8 — Electricity

Chapter 9 — Transportation

Chapter 10 — Health Care

Chapter 11 — Climate

Chapter 12 — Trust

Chapter 13 — Government Services

Coda: What if the Future Isn’t Perfect?

Part Three: Jumpstarting the Future (Starting Now)

Chapter 14 — What Individuals Can Do

Chapter 15 — What Companies Can Do

Chapter 16 — What Governments Can Do

Prologue: Over to You

Footnotes:

[1] While the Greeks and Romans didn’t have a name for zero, the idea was implicit in the abacus they’d use for calculations — the counting on an abacus starts from nothing. The idea of zero was also implicit in Greek geometry, just not expressed.
[2] “Resource Revolution: How to Capture the Biggest Business Opportunity in a Century,” by Stefan Heck and Matt Rogers, with Paul Carroll; p. 7.
[3] https://www.youtube.com/watch?v=0fKBhvDjuy0
[4] https://www.forbes.com/sites/forbestechcouncil/2018/03/09/moores-law-is-dying-so-where-are-its-heirs/#325db9977a7b
[5] https://www.forbes.com/sites/kenkam/2018/04/23/how-moores-law-now-favors-nvidia-over-intel/#1f63a8165e42
[6] https://www.cnet.com/news/intel-3d-chip-stacking-could-get-you-to-buy-a-new-pc/
[7] https://www.wired.com/2014/06/google-kubernetes/
[8] https://www.weforum.org/agenda/2016/09/7-innovations-that-could-shape-the-future-of-computing
[9] https://www.economist.com/leaders/2016/03/12/the-future-of-computing
[10] https://www.theverge.com/2019/10/23/20928294/google-quantum-supremacy-sycamore-computer-qubit-milestone
[11] Named after Bob Metcalfe, the principal inventor at Xerox PARC of Ethernet, an early and very important networking technology.
[12] UCLA, Stanford Research Institute, University of Utah and UC-Santa Barbara
[13] https://www.weforum.org/agenda/2016/12/by-2030-this-is-what-computers-will-do/
[14] https://www.facebook.com/17043549797/posts/in-a-shocking-finding-scientists-discovered-a-herd-of-unicorns-living-in-a-remot/10159146577144798/
[15] The idea is often attributed to Marshall McLuhan, who never quite said this, and is more accurately attributed to Winston Churchill, who said, “We shape our buildings, and then our buildings shape us,” or to Henry David Thoreau, who said, “Men have become the tools of their tools.”
[16] https://www.roadtovr.com/bell-says-latest-helicopter-was-designed-10-times-faster-with-vr/amp/
[17] https://interestingengineering.com/machine-learning-slashes-tech-design-process-by-a-whole-year

Originally published at https://www.linkedin.com.

Written by Chunka Mui

Futurist and Innovation Advisor. I try to carry out Alan Kay’s exhortation that “the best way to predict the future is to invent it.”