When will the technological singularity occur?
Category: Society
Published: April 3, 2013
By: Christopher S. Baird, author of The Top 50 Science Questions with Surprising Answers and physics professor at West Texas A&M University
If real-world science is used instead of wishful thinking, then a technological singularity will never occur. The technological singularity is a hypothetical event where advances in technology become essentially infinite. If the power of technology doubles every two years, then in two years it will be twice as powerful as today, in four years it will be four times as powerful, in six years eight times as powerful, then 16, 32, 64, and so on. This exponential growth supposedly means that the power of technology will eventually become effectively infinite. While the concept of infinite technological power is attractive and makes for some fun science fiction, it is not very scientifically sound. There are four basic problems with reaching a technological singularity (the doubling arithmetic itself is sketched just after this list):
- Technology is not currently doubling every two years
- Even if technology were doubling every two years, there is no guarantee this trend will continue
- Science places external limits on technology
- Technological advances are driven by humans, and human intelligence has limits
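To make the doubling arithmetic concrete, here is a minimal Python sketch. The "power" index and the two-year doubling period are simply the premise being examined here, not measured data:

```python
# A minimal sketch of the doubling arithmetic described above. The
# "power" index is an abstract quantity (baseline = 1), and the two-year
# doubling period is the premise under examination, not measured data.

def power_after(years, doubling_period=2.0, baseline=1.0):
    """Return the 'power' of technology after steady doubling."""
    return baseline * 2 ** (years / doubling_period)

for years in (2, 4, 6, 10, 20, 40):
    print(f"after {years:2d} years: {power_after(years):>11,.0f}x the baseline")

# Even after 40 years of steady doubling (a factor of 2**20, about a
# million), the result is enormous but still finite at every finite time.
```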
Let us look at each problem in detail.
1. Technology is not currently doubling every two years
Proponents of the technological singularity often reference the exponential growth in computer chip power as evidence that technology growth is exponential. According to the trend known informally as Moore's law, the number of transistors in a CPU has indeed doubled about every two years for the past forty years. Since the transistor is the fundamental processing unit of a computer chip, Moore's law states roughly that the power of computers is doubling every two years. While this exponential growth is true for raw computing power, that does not make it true for every other kind of technology. Proponents may claim that since every other field of technology can be designed on a computer, exponential computer growth automatically leads to exponential growth in all other areas of technology. There are two problems with this line of reasoning.
First, raw computing power does not magically turn into technological advances on its own. It takes a human to use the raw processing power and create innovation. For instance, giving an architect a computer that is twice as fast as his old computer will not make him design a bridge that is twice as strong. The innovations in bridge design are primarily a result of human ingenuity and not just faster computers. If the average computer user opens his computer's Task Manager and takes a look at his current CPU usage, he will discover he is using about 5% of his CPU power (go ahead right now and check for yourself on your own computer). Actions such as browsing the internet, writing in a word processor, updating the calendar, and sketching out the design of a new product are not very computationally intensive. As a result, adding more processing power is not going to affect the productivity of most users, who already don't use a large portion of their current processing power. For the average user, faster computers will just mean higher-resolution movies and more realistic-looking video games, but little gain in productivity. For scientists and engineers who develop technology by running numerical simulations, improved processing power does have a significant impact. But even in these cases, it takes human ingenuity to take advantage of the improved processing power. It takes a human to know how to build the simulations, how to run them correctly, and how to interpret the results.
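If you would rather query this number programmatically than open Task Manager, here is a small sketch using the third-party psutil package (an assumption on my part: psutil is not part of the standard library and must be installed first, e.g. with pip install psutil):

```python
# Sample current CPU utilization, analogous to glancing at Task Manager.
# Requires the third-party psutil package (pip install psutil).
import psutil

# Average utilization across all cores over a one-second sample window.
total = psutil.cpu_percent(interval=1)

# Per-core utilization over another one-second window.
per_core = psutil.cpu_percent(interval=1, percpu=True)

print(f"Overall CPU usage: {total:.1f}%")
for i, pct in enumerate(per_core):
    print(f"  core {i}: {pct:.1f}%")
```

On a lightly loaded desktop, the overall figure typically lands in the single digits, consistent with the roughly 5% observation above.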
The second problem with the hypothesis that exponential computer growth drives exponential growth in all other technologies is that it contradicts the evidence. For instance, the average number of U.S. patents issued per million people per year is actually declining, according to Jonathan Huebner's paper "A possible declining trend for worldwide innovation" (2005). Around 1915, there were 350 U.S. patents issued per million U.S. citizens per year. By 1995, that number had dropped to 250. Each new patent represents a technological advance, so the fact that there are still new patents every year means that technology is still advancing. The rate of technological advance, however, is declining according to patent numbers, not increasing exponentially. Looking at the patents trend line in the figure, we see that the rate of innovation growth in the internet age (1980-) pales in comparison to that of the golden age of railroads and telegraphs (1840-1880).
If we use the height of the tallest building as an indicator of technological power, we still don't get exponential growth. Plotting the height of the tallest building standing in the world in a given year against that year gives a roughly linear growth trend, not an exponential one. Similarly, plotting the airplane speed records for the last 60 years shows that the trend is one of declining growth, to the point of being stagnant, as shown in the figure. The highest speed an airplane has attained has not changed in 37 years. For just about any technological indicator we can think up, the advance in technology is either linear or stagnating, not exponential.
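How can you tell whether an indicator like building height is growing linearly or exponentially? A standard check is to fit a straight line to the data and to its logarithm: exponential growth becomes a straight line on a log scale, while linear growth does not. The sketch below uses made-up illustrative series, not real building or patent data:

```python
# Distinguish linear from exponential growth by comparing straight-line
# fits in plain space vs. log space. Illustrative made-up data only.
import numpy as np

years = np.arange(0, 60, 5, dtype=float)
linear_series = 100 + 3.0 * years             # grows by a fixed amount per year
exponential_series = 100 * 2 ** (years / 10)  # doubles every 10 years

def fit_quality(x, y):
    """R^2 of a least-squares straight-line fit of y against x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

for name, series in [("linear", linear_series), ("exponential", exponential_series)]:
    r2_plain = fit_quality(years, series)
    r2_log = fit_quality(years, np.log(series))
    print(f"{name:12s}  plain-scale R^2: {r2_plain:.4f}   log-scale R^2: {r2_log:.4f}")

# The exponential series fits almost perfectly on the log scale, and the
# linear series fits almost perfectly on the plain scale.
```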
2. Even if technology were doubling every two years, there is no guarantee this trend will continue
In our complex world, trends rarely continue at their current rates. For instance, a spike in world population growth in 1980 led experts to warn that the world would be dangerously overpopulated in 50 years if trends continued. Of course, trends did not continue, and the world population growth rate has been plummeting ever since. Trends shift all the time when confronted with external pressures or when pressing up against fundamental limits. All it takes is a war or a plague to freeze technological advances, or even send a civilization back to the Stone Age.
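Trends that press against limits are often modeled with the logistic curve, which looks exponential early on and then flattens out as it approaches a carrying capacity. Here is a minimal sketch; the growth rate and carrying capacity are arbitrary illustrative values, not a model of real population data:

```python
# Logistic growth: looks exponential early on, then saturates at a
# carrying capacity K. Parameters are arbitrary illustrative values.
import math

K = 10.0    # carrying capacity (the fundamental limit)
r = 0.5     # intrinsic growth rate per unit time
p0 = 0.01   # initial population, well below the limit

def logistic(t):
    """Closed-form solution of dp/dt = r * p * (1 - p/K)."""
    return K / (1 + (K / p0 - 1) * math.exp(-r * t))

for t in range(0, 41, 5):
    p = logistic(t)
    print(f"t={t:2d}: population {p:7.3f} ({100 * p / K:5.1f}% of limit)")

# Early on, the curve is indistinguishable from exponential growth;
# extrapolating that early trend forever would badly overshoot reality.
```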
3. Science places external limits on technology
Fundamental limits have a way of halting exponential growth. For instance, bridges have a natural vibrational resonance. If the wind drives a bridge at its resonance, the bridge's oscillations grow exponentially in time. If this trend continued, the amplitude of the bridge's swaying would soon stretch to the moon and back. Of course, the trend does not continue. The bridge's oscillations grow exponentially until a fundamental limit is reached: the strength of the bridge. The bridge shatters to pieces long before its oscillations reach the moon, and long before any type of vibrational singularity is reached (which would imply an infinitely stretched bridge). In the real world, every growth pattern eventually runs up against some fundamental limit. For instance, the efficiency of power generators will never advance to the point that they are creating energy out of nothing, because fundamental laws of science forbid free energy.

There are two kinds of limits: fundamental and resource-driven. Fundamental science limits exist because the laws of science can't be changed. Spaceships can never go faster than the speed of light, no matter how clever the engineers. The gravity of a planet can never be turned off. A single elevator shaft can never be made infinitely tall because eventually its cable is not strong enough to support its own weight (a quick estimate of this limit is sketched below).

Beyond fundamental laws, the lack of available resources also places limits on technology. For instance, if you tried to build a skyscraper out of solid gold that reached the moon, you would use up all of the gold in earth's crust long before reaching the moon. Aside from the limits of raw physical resources, there are also limits on the amount of time, money, and energy a society is able to devote to a project. Building a bridge from New York to Paris is physically possible and requires no more raw resources than are readily available. But such a project will probably never be completed because building 3600 miles of towers and spans is more than any country can afford.
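To put a rough number on the elevator-cable limit mentioned above: a uniform free-hanging cable snaps under its own weight once its length exceeds its breaking length, which is the material's tensile strength divided by its density times g. The material values below are rough textbook-style figures assumed for illustration, not engineering data:

```python
# Maximum length of a uniform, free-hanging cable before it snaps under
# its own weight: L_max = tensile_strength / (density * g).
# Material values below are rough, illustrative figures.

g = 9.81  # gravitational acceleration, m/s^2

materials = {
    # name: (tensile strength in Pa, density in kg/m^3)
    "structural steel": (0.4e9, 7850),
    "high-strength steel": (2.0e9, 7850),
    "Kevlar": (3.6e9, 1440),
}

for name, (strength, density) in materials.items():
    l_max_km = strength / (density * g) / 1000
    print(f"{name:20s} breaking length ~ {l_max_km:8.1f} km")

# Even the best of these materials tops out at a few hundred kilometers,
# a hard ceiling that no amount of clever engineering can push to infinity.
```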
4. Technological advances are driven by humans, and human intelligence has limits
While computers can greatly accelerate the speed of raw calculations, they cannot think creatively. Innovation is driven by human intelligence and creativity, not by raw processing power. Every piece of software running on a computer had to first be designed and programmed by a human. Every physical law that a computer code simulates had to first be derived by humans. For example, computerized wind tunnel simulations can help an airplane designer optimize the aerodynamics of his plane without needing to build hundreds of prototypes. But the laws of aerodynamics had to first be discovered by a human and entered into the computer before it could run its simulations. A computer can't do anything new. It just does faster what a human could do with a pencil and paper or a real wind tunnel. If a computer does something clever, it's because a clever human designed it to do that. Because technological advances are driven by humans intelligently using tools, such advances are limited by the human brain.
Consider technological advances to be like apples on a tall tree. The low-hanging fruit is easily and quickly picked, but the higher levels of fruit are increasingly harder to reach. Virtually everyone who has finished high school can understand and apply a breakthrough from the 1600's such as Newton's law of gravity. But very few people can understand and apply a breakthrough from the early 1900's, such as Einstein's gravitational field equations. Developing an even more advanced theory of gravity than Einstein's would require first understanding Einstein's theory, and that is already beyond the intelligence of most people. In fact, it is even beyond the capability of many physics Ph.D.'s. Hundreds of years ago, a significant scientific discovery could be made by one hobbyist tinkering at home, such as Benjamin Franklin's discovery of the electrical nature of lightning. Today, significant scientific discoveries require a team of thousands of scientists spending billions of dollars using tools the size of a city, such as the discovery of the Higgs boson at the LHC.

The limits of human intelligence have forced scientists to specialize in increasingly narrow fields and work in ever larger groups in order to make discoveries. Three hundred years ago, a student could gain the world's then-total knowledge of physics by spending a year with Newton. Two hundred years ago, a student could gain the world's then-total knowledge of physics by going to school for a few years. Today, a student who graduates with a Ph.D. in physics knows very little of all the physics there is to learn. It takes a physicist several years of post-doctoral experience to get completely up to speed with the world's current knowledge, even in his narrow field of specialization. Many physics doctoral programs don't even require courses in general relativity, which means that a hundred-year-old theory of gravity remains outside the training of many physics Ph.D.'s. The advancement of human knowledge obeys the law of diminishing returns: every new discovery requires ever more money, effort, and years of education.
From just about every angle you can examine the issue, we are not approaching a technological singularity and never will. Technology will continue to advance steadily, and by advancing, it will change our lives. But such changes have been happening gradually for hundreds of years. We don't have the flying cars, jet packs, hotels on the moon, or underwater cities that our grandparents envisioned we would have, and it's not likely that our grandchildren will have these things either. Apart from gadgets that make it easier to be entertained and to communicate, our daily life is nearly the same as that of people who lived 50, or even 100, years ago (as far as technology is concerned). We still live in wood or brick houses, go to work, play sports, read books to the kids, walk the dog, eat salads and pasta, get sick, fall in love, grow gardens, and repair the house in much the same way humans have been doing for hundreds of years. And we will still largely do these things in another hundred years. As exciting as futuristic technological fantasies may be, they do not justify ignoring reality.