The book is a very wide mix of explanations about computation, biology, physics, creativity, politics, and philosophy. The explanations are deep, so each of them explains an infinite number of phenomena. In that sense, they stand at the beginning of infinity.
The book resolved many questions for me. For example:
- What is optimism?
- How much can a human understand?
- How far can we go in exploring the Universe?
- How to make a successful meme?
- Why don’t we have AI yet?
- Can the political system ever be fair?
- Is “dark forest” theory real?
The explanations are not ideal, that’s for sure. But it’s something.
The first and several last chapters are safe to skip, but the rest is golden.
In this book I argue that all progress, both theoretical and practical, has resulted from a single human activity: the quest for what I call good explanations. Though this quest is uniquely human, its effectiveness is also a fundamental fact about reality at the most impersonal, cosmic level – namely that it conforms to universal laws of nature that are indeed good explanations.
Conjectures are the products of creative imagination. But the problem with imagination is that it can create fiction much more easily than truth. As I have suggested, historically, virtually all human attempts to explain experience in terms of a wider reality have indeed been fiction, in the form of myths, dogma and mistaken common sense – and the rule of testability is an insufficient check on such mistakes.
Suppose for the sake of argument that you thought of the axis-tilt theory yourself. It is your conjecture, your own original creation. Yet because it is a good explanation – hard to vary – it is not yours to modify. It has an autonomous meaning and an autonomous domain of applicability. You cannot confine its predictions to a region of your choosing. Whether you like it or not, it makes predictions about places both known to you and unknown to you, predictions that you have thought of and ones that you have not thought of. Tilted planets in similar orbits in other solar systems must have seasonal heating and cooling – planets in the most distant galaxies, and planets that we shall never see because they were destroyed aeons ago, and also planets that have yet to form. The theory reaches out, as it were, from its finite origins inside one brain that has been affected only by scraps of patchy evidence from a small part of one hemisphere of one planet – to infinity. This reach of explanations is another meaning of ‘the beginning of infinity’. It is the ability of some of them to solve problems beyond those that they were created to solve.
The axis-tilt theory is an example: it was originally proposed to explain the changes in the sun’s angle of elevation during each year. Combined with a little knowledge of heat and spinning bodies, it then explained seasons. And, without any further modification, it also explained why seasons are out of phase in the two hemispheres, and why tropical regions do not have them, and why the summer sun shines at midnight in polar regions – three phenomena of which its creators may well have been unaware.
- Explanation Statement about what is there, what it does, and how and why.
- Reach The ability of some explanations to solve problems beyond those that they were created to solve.
- Creativity The capacity to create new explanations.
- Empiricism The misconception that we ‘derive’ all our knowledge from sensory experience.
- Theory-laden There is no such thing as ‘raw’ experience. All our experience of the world comes through layers of conscious and unconscious interpretation.
- Inductivism The misconception that scientific theories are obtained by generalizing or extrapolating repeated experiences, and that the more often a theory is confirmed by observation the more likely it becomes.
- Induction The non-existent process of ‘obtaining’ referred to above.
- Principle of induction The idea that ‘the future will resemble the past’, combined with the misconception that this asserts anything about the future.
- Realism The idea that the physical world exists in reality, and that knowledge of it can exist too.
- Relativism The misconception that statements cannot be objectively true or false, but can be judged only relative to some cultural or other arbitrary standard.
- Instrumentalism The misconception that science cannot describe reality, only predict outcomes of observations.
- Justificationism The misconception that knowledge can be genuine or reliable only if it is justified by some source or criterion.
- Fallibilism The recognition that there are no authoritative sources of knowledge, nor any reliable means of justifying knowledge as true or probable.
- Background knowledge Familiar and currently uncontroversial knowledge.
- Rule of thumb ‘Purely predictive theory’ (theory whose explanatory content is all background knowledge).
- Problem A problem exists when a conflict between ideas is experienced.
- Good/bad explanation An explanation that is hard/easy to vary while still accounting for what it purports to account for.
- The Enlightenment (The beginning of) a way of pursuing knowledge with a tradition of criticism and seeking good explanations instead of reliance on authority.
- Mini-enlightenment A short-lived tradition of criticism.
- Rational Attempting to solve problems by seeking good explanations; actively pursuing error-correction by creating criticisms of both existing ideas and new proposals.
- The West The political, moral, economic and intellectual culture that has been growing around the Enlightenment values of science, reason and freedom.
Appearances are deceptive. Yet we have a great deal of knowledge about the vast and unfamiliar reality that causes them, and of the elegant, universal laws that govern that reality. This knowledge consists of explanations: assertions about what is out there beyond the appearances, and how it behaves. For most of the history of our species, we had almost no success in creating such knowledge. Where does it come from? Empiricism said that we derive it from sensory experience. This is false. The real source of our theories is conjecture, and the real source of our knowledge is conjecture alternating with criticism. We create theories by rearranging, combining, altering and adding to existing ideas with the intention of improving upon them. The role of experiment and observation is to choose between existing theories, not to be the source of new ones. We interpret experiences through explanatory theories, but true explanations are not obvious. Fallibilism entails not looking to authorities but instead acknowledging that we may always be mistaken, and trying to correct errors. We do so by seeking good explanations – explanations that are hard to vary in the sense that changing the details would ruin the explanation. This, not experimental testing, was the decisive factor in the scientific revolution, and also in the unique, rapid, sustained progress in other fields that have participated in the Enlightenment. That was a rebellion against authority which, unlike most such rebellions, tried not to seek authoritative justifications for theories, but instead set up a tradition of criticism. Some of the resulting ideas have enormous reach: they explain more than what they were originally designed to. The reach of an explanation is an intrinsic attribute of it, not an assumption that we make about it as empiricism and inductivism claim.
Some people become depressed at the scale of the universe, because it makes them feel insignificant. Other people are relieved to feel insignificant, which is even worse. But, in any case, those are mistakes. Feeling insignificant because the universe is large has exactly the same logic as feeling inadequate for not being a cow. Or a herd of cows. The universe is not there to overwhelm us; it is our home, and our resource. The bigger the better.
The primary function of the telescope’s optics is to reduce the illusion that the stars are few, faint, twinkling and moving. The same is true of every feature of the telescope, and of all other scientific instruments: each layer of indirectness, through its associated theory, corrects errors, illusions, misleading perspectives and gaps. Perhaps it is the mistaken empiricist ideal of ‘pure’, theory-free observation that makes it seem odd that truly accurate observation is always so hugely indirect. But the fact is that progress requires the application of ever more knowledge in advance of our observations.
It may seem strange that scientific instruments bring us closer to reality when in purely physical terms they only ever separate us further from it. But we observe nothing directly anyway. All observation is theory-laden. Likewise, whenever we make an error, it is an error in the explanation of something. That is why appearances can be deceptive, and it is also why we, and our instruments, can correct for that deceptiveness. The growth of knowledge consists of correcting misconceptions in our theories. Edison said that research is one per cent inspiration and ninety-nine per cent perspiration – but that is misleading, because people can apply creativity even to tasks that computers and other machines do uncreatively. So science is not mindless toil for which rare moments of discovery are the compensation: the toil can be creative, and fun, just as the discovery of new explanations is.
Principle of Mediocrity: There is nothing significant about humans in the cosmic scheme of things.
Spaceship Earth: The Earth’s biosphere is a life-support system for humans.
As the physicist Stephen Hawking put it, humans are ‘just a chemical scum on the surface of a typical planet that’s in orbit round a typical star on the outskirts of a typical galaxy’. The proviso ‘in the cosmic scheme of things’ is necessary because the chemical scum evidently does have a special significance according to values that it applies to itself, such as moral values. But the Principle says that all such values are themselves anthropocentric: they explain only the behaviour of the scum, which is itself insignificant.
…
Cold, dark and empty. That unimaginably desolate environment is typical of the universe – and is another measure of how untypical the Earth and its chemical scum are, in a straightforward physical sense. The issue of the cosmic significance of this type of scum will shortly take us back out into intergalactic space. But let me first return to Earth, and consider the Spaceship Earth metaphor, in its straightforward physical version.
This much is true: if, tomorrow, physical conditions on the Earth’s surface were to change even slightly by astrophysical standards, then no humans could live here unprotected, just as they could not survive on a spaceship whose life-support system had broken down. Yet I am writing this in Oxford, England, where winter nights are likewise often cold enough to kill any human unprotected by clothing and other technology. So, while intergalactic space would kill me in a matter of seconds, Oxfordshire in its primeval state might do it in a matter of hours – which can be considered ‘life support’ only in the most contrived sense. There is a life-support system in Oxfordshire today, but it was not provided by the biosphere. It has been built by humans. It consists of clothes, houses, farms, hospitals, an electrical grid, a sewage system and so on. Nearly the whole of the Earth’s biosphere in its primeval state was likewise incapable of keeping an unprotected human alive for long. It would be much more accurate to call it a death trap for humans rather than a life-support system. Even the Great Rift Valley in eastern Africa, where our species evolved, was barely more hospitable than primeval Oxfordshire. Unlike the life-support system in that imagined spaceship, the Great Rift Valley lacked a safe water supply, and medical equipment, and comfortable living quarters, and was infested with predators, parasites and disease organisms. It frequently injured, poisoned, drenched, starved and sickened its ‘passengers’, and most of them died as a result.
Principle of Mediocrity and Spaceship Earth converge. They share a conception of a tiny, human-friendly bubble embedded in the alien and uncooperative universe. The Spaceship Earth metaphor sees it as a physical bubble, the biosphere. For the Principle of Mediocrity, the bubble is primarily conceptual, marking the limits of the human capacity to understand the world. Those two bubbles are related, as we shall see. In both views, anthropocentrism is true in the interior of the bubble: there the world is unproblematic, uniquely compliant with human wishes and human understanding. Outside it there are only insoluble problems.
Everything that is not forbidden by laws of nature is achievable, given the right knowledge.
In some environments in the universe, the most efficient way for humans to thrive might be to alter their own genes. Indeed, we are already doing that in our present environment, to eliminate diseases that have in the past blighted many lives. Some people object to this on the grounds (in effect) that a genetically altered human is no longer human. This is an anthropomorphic mistake. The only uniquely significant thing about humans (whether in the cosmic scheme of things or according to any rational human criterion) is our ability to create new explanations, and we have that in common with all people. You do not become less of a person if you lose a limb in an accident; it is only if you lose your brain that you do. Changing our genes in order to improve our lives and to facilitate further improvements is no different in this regard from augmenting our skin with clothes or our eyes with telescopes.
Human reach is essentially the same as the reach of explanatory knowledge itself.
Setting up self-sufficient colonies on the moon and elsewhere in the solar system – and eventually in other solar systems – will be a good hedge against the extinction of our species or the destruction of civilization, and is a highly desirable goal for that reason among others. As Hawking has said:
I don’t think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I’m an optimist. We will reach out to the stars.
Daily Telegraph, 16 October 2001
But even that will be far from an unproblematic state. And most people are not satisfied merely to be confident in the survival of the species: they want to survive personally. Also, like our earliest human ancestors, they want to be free from physical danger and suffering. In future, as various causes of suffering and death such as disease and ageing are successively addressed and eliminated, and human life spans increase, people will care about ever longer-term risks.
An unproblematic state is a state without creative thought. Its other name is death.
Thus fallibilism alone rather understates the error-prone nature of knowledge-creation. Knowledge-creation is not only subject to error: errors are common, and significant, and always will be, and correcting them will always reveal further and better problems. And so the maxim that I suggested should be carved in stone, namely ‘The Earth’s biosphere is incapable of supporting human life’ is actually a special case of a much more general truth, namely that, for people, problems are inevitable.
So let us carve that in stone:
Problems are inevitable
A complementary and equally important truth about people and the physical world is that problems are soluble. By ‘soluble’ I mean that the right knowledge would solve them.
Problems are soluble
- Person An entity that can create explanatory knowledge.
- Anthropocentric Centred on humans, or on persons.
- Fundamental or significant phenomenon One that plays a necessary role in the explanation of many phenomena, or whose distinctive features require distinctive explanation in terms of fundamental theories.
- Principle of Mediocrity ‘There is nothing significant about humans.’
- Parochialism Mistaking appearance for reality, or local regularities for universal laws.
- Spaceship Earth ‘The biosphere is a life-support system for humans.’
- Constructor A device capable of causing other objects to undergo transformations without undergoing any net change itself.
- Universal constructor A constructor that can cause any raw materials to undergo any physically possible transformation, given the right information.
This issue was: given that the gods have created the world, do they care what happens in it? Socrates’ pupil Aristodemus had argued that they do not. Another pupil, the historian Xenophon, recalled Socrates’ reply:
SOCRATES: Because our eyes are delicate, they have been shuttered with eyelids that open when we have occasion to use them . . . And our foreheads have been fringed with eyebrows to prevent damage from the sweat of the head . . . And the mouth set close to the eyes and nostrils as a portal of ingress for all our supplies, whereas, since matter passing out of the body is unpleasant, the outlets are directed hindwards, as far away from the senses as possible. I ask you, when you see all these things constructed with such show of foresight, can you doubt whether they are products of chance or design?
ARISTODEMUS: Certainly not! Viewed in this light they seem very much like the contrivances of some wise craftsman, full of love for all things living.
SOCRATES: And what of the implanting of the instinct to procreate; and in the mother, the instinct to rear her young; and in the young, the intense desire to live and the fear of death?
ARISTODEMUS: These provisions too seem like the contrivances of someone who has determined that there shall be living creatures.
Socrates was right to point out that the appearance of design in living things is something that needs to be explained. It cannot be the ‘product of chance’. And that is specifically because it signals the presence of knowledge. How was that knowledge created?
However, Socrates never stated what constitutes an appearance of design, and why. Do crystals and rainbows have it? Does the sun, or summer? How are they different from biological adaptations such as eyebrows?
If a tiger is placed in a habitat in which its colouration makes it stand out more instead of less, it takes no action to change the colour of its fur, nor would that change be inherited if it did. That is because nothing in the tiger ‘knows’ what the stripes are for. So how would any Lamarckian mechanism have ‘known’ that having fur that was a tiny bit more striped would slightly improve the animal’s food supply? And how would it have ‘known’ how to synthesize pigments, and to secrete them into the fur, in such a way as to produce stripes of a suitable design?
The fundamental error being made by Lamarck has the same logic as inductivism. Both assume that new knowledge (adaptations and scientific theories respectively) is somehow already present in experience, or can be derived mechanically from experience. But the truth is always that knowledge must be first conjectured and then tested.
A common misconception about Darwinian evolution is that it maximizes ‘the good of the species’. That provides a plausible, but false, explanation of apparently altruistic behaviour in nature, such as parents risking their lives to protect their young, or the strongest animals going to the perimeter of a herd under attack – thereby decreasing their own chances of having a long and pleasant life or further offspring. Thus, it is said, evolution optimizes the good of the species, not the individual. But, in reality, evolution optimizes neither.
To see why, consider this thought experiment. Imagine an island on which the total number of birds of a particular species would be maximized if they nested at, say, the beginning of April. The explanation for why a particular date is optimal will refer to various trade-offs involving factors such as temperature, the prevalence of predators, the availability of food and nesting materials, and so on. Suppose that initially the whole population has genes that cause them to nest at that optimum time. That would mean that those genes were well adapted to maximizing the number of birds in the population – which one might call ‘maximizing the good of the species’. Now suppose that this equilibrium is disturbed by the advent of a mutant gene in a single bird which causes it to nest slightly earlier – say, at the end of March. Assume that when a bird has built a nest, the species’ other behavioural genes are such that it automatically gets whatever cooperation it needs from a mate. That pair of birds would then be guaranteed the best nesting site on the island – an advantage which, in terms of the survival of their offspring, might well outweigh all the slight disadvantages of nesting earlier. In that case, in the following generation, there will be more March-nesting birds, and, again, all of them will find excellent nesting sites. That means that a smaller proportion than usual of the April-nesting variety will find good sites: the best sites will have been taken by the time they start looking. In subsequent generations, the balance of the population will keep shifting towards the March-nesting variants. If the relative advantage of having the best nesting sites is large enough, the April-nesting variant could even become extinct. If it arises again as a mutation, its holder will have no offspring, because all sites will have been taken by the time it tries to nest.
Thus the original situation that we imagined – with genes that were optimally adapted to maximizing the population (‘benefiting the species’) – is unstable.
From the point of view of both the species and all its members, the change brought about by this period of its evolution has been a disaster. But evolution does not ‘care’ about that. It favours only the genes that spread best through the population.
Neo-Darwinism does not refer, at its fundamental level, to anything biological. It is based on the idea of a replicator (anything that contributes causally to its own copying). For instance, a gene conferring the ability to digest a certain type of food causes the organism to remain healthy in some situations where it would otherwise weaken or die. Hence it increases the organism’s chances of having offspring in the future, and those offspring would inherit, and spread, copies of the gene.
Ideas can be replicators too. For example, a good joke is a replicator: when lodged in a person’s mind, it has a tendency to cause that person to tell it to other people, thus copying it into their minds. Dawkins coined the term memes (rhymes with ‘dreams’) for ideas that are replicators. Most ideas are not replicators: they do not cause us to convey them to other people. Nearly all long-lasting ideas, however, such as languages, scientific theories and religious beliefs, and the ineffable states of mind that constitute cultures such as being British, or the skill of performing classical music, are memes (or ‘memeplexes’ – collections of interacting memes). I shall say more about memes in Chapter 15.
- Evolution (Darwinian) Creation of knowledge through alternating variation and selection.
- Replicator An entity that contributes causally to its own copying.
- Neo-Darwinism Darwinism as a theory of replicators, without various misconceptions such as ‘survival of the fittest’.
- Meme An idea that is a replicator.
- Memeplex A group of memes that help to cause each other’s replication.
- Spontaneous generation Formation of organisms from non-living precursors.
- Lamarckism A mistaken evolutionary theory based on the idea that biological adaptations are improvements acquired by an organism during its lifetime and then inherited by its descendants.
- Fine-tuning If the constants or laws of physics were slightly different, there would be no life.
- Anthropic explanation ‘It is only in universes that contain intelligent observers that anyone wonders why the phenomenon in question happens.’
The evolution of biological adaptations and the creation of human knowledge share deep similarities, but also some important differences. The main similarities: genes and ideas are both replicators; knowledge and adaptations are both hard to vary. The main difference: human knowledge can be explanatory and can have great reach; adaptations are never explanatory and rarely have much reach beyond the situations in which they evolved. False explanations of biological evolution have counterparts in false explanations of the growth of human knowledge. For instance, Lamarckism is the counterpart of inductivism. William Paley’s version of the argument from design clarified what does or does not have the ‘appearance of design’ and hence what cannot be explained as the outcome of chance alone – namely hard-to-vary adaptation to a purpose. The origin of this must be the creation of knowledge. Biological evolution does not optimize benefits to the species, the group, the individual or even the gene, but only the ability of the gene to spread through the population. Such benefits can nevertheless happen because of the universality of laws of nature and the reach of some of the knowledge that is created. The ‘fine-tuning’ of the laws or constants of physics has been used as a modern form of the argument from design. For the usual reasons, it is not a good argument for a supernatural cause. But ‘anthropic’ theories that try to account for it as a pure selection effect from an infinite number of different universes are, by themselves, bad explanations too – in part because most logically possible laws are themselves bad explanations.
Furthermore, everyday events are stupendously complex when expressed in terms of fundamental physics. If you fill a kettle with water and switch it on, all the supercomputers on Earth working for the age of the universe could not solve the equations that predict what all those water molecules will do – even if we could somehow determine their initial state and that of all the outside influences on them, which is itself an intractable task.
Fortunately, some of that complexity resolves itself into a higher-level simplicity. For example, we can predict with some accuracy how long the water will take to boil. To do so, we need know only a few physical quantities that are quite easy to measure, such as its mass, the power of the heating element, and so on. For greater accuracy we may also need information about subtler properties, such as the number and type of nucleation sites for bubbles. But those are still relatively ‘high-level’ phenomena, composed of intractably large numbers of interacting atomic-level phenomena. Thus there is a class of high-level phenomena – including the liquidity of water and the relationship between containers, heating elements, boiling and bubbles – that can be well explained in terms of each other alone, with no direct reference to anything at the atomic level or below. In other words, the behaviour of that whole class of high-level phenomena is quasi-autonomous – almost self-contained. This resolution into explicability at a higher, quasi-autonomous level is known as emergence.
Emergent phenomena are a tiny minority. We can predict when the water will boil, and that bubbles will form when it does, but if you wanted to predict where each bubble will go (or, to be precise, what the probabilities of its various possible motions are – see Chapter 11), you would be out of luck. Still less is it feasible to predict the countless microscopically defined properties of the water, such as whether an odd or an even number of its electrons will be affected by the heating during a given period.
The behaviour of high-level physical quantities consists of nothing but the behaviour of their low-level constituents with most of the details ignored. This has given rise to a widespread misconception about emergence and explanation, known as reductionism: the doctrine that science always explains and predicts things reductively, i.e. by analysing them into components. Often it does, as when we use the fact that inter-atomic forces obey the law of conservation of energy to make and explain a high-level prediction that the kettle cannot boil water without a power supply. But reductionism requires the relationship between different levels of explanation always to be like that, and often it is not. For example, as I wrote in The Fabric of Reality:
Consider one particular copper atom at the tip of the nose of the statue of Sir Winston Churchill that stands in Parliament Square in London. Let me try to explain why that copper atom is there. It is because Churchill served as prime minister in the House of Commons nearby; and because his ideas and leadership contributed to the Allied victory in the Second World War; and because it is customary to honour such people by putting up statues of them; and because bronze, a traditional material for such statues, contains copper, and so on. Thus we explain a low-level physical observation – the presence of a copper atom at a particular location – through extremely high-level theories about emergent phenomena such as ideas, leadership, war and tradition.
There is no reason why there should exist, even in principle, any lower-level explanation of the presence of that copper atom than the one I have just given. Presumably a reductive ‘theory of everything’ would in principle make a low-level prediction of the probability that such a statue will exist, given the condition of (say) the solar system at some earlier date. It would also in principle describe how the statue probably got there. But such descriptions and predictions (wildly infeasible, of course) would explain nothing. They would merely describe the trajectory that each copper atom followed from the copper mine, through the smelter and the sculptor’s studio and so on … In fact such a prediction would have to refer to atoms all over the planet, engaged in the complex motion we call the Second World War, among other things. But even if you had the superhuman capacity to follow such lengthy predictions of the copper atom’s being there, you would still not be able to say ‘Ah yes, now I understand why they are there’. [You] would have to inquire into what it was about that configuration of atoms, and those trajectories, that gave them the propensity to deposit a copper atom at this location. Pursuing that inquiry would be a creative task, as discovering new explanations always is. You would have to discover that certain atomic configurations support emergent phenomena such as leadership and war, which are related to one another by high-level explanatory theories. Only when you knew those theories could you understand why that copper atom is where it is.
Whenever a high-level explanation does follow logically from low-level ones, that also means that the high-level one implies something about the low-level ones. Thus, additional high-level theories, provided that they were all consistent, would place more and more constraints on what the low-level theories could be. So it could be that all the high-level explanations that exist, taken together, imply all the low-level ones, as well as vice versa. Or it could be that some low-level, some intermediate-level and some high-level explanations, taken together, imply all explanations. I guess that that is so.
Thus, one possible way that the fine-tuning problem might eventually be solved would be if some high-level explanations turned out to be exact laws of nature. The microscopic consequences of that might well seem to be fine-tuned. One candidate is the principle of the universality of computation, which I shall discuss in the next chapter. Another is the principle of testability, for, in a world in which the laws of physics do not permit the existence of testers, they also forbid themselves to be tested. However, in their current form such principles, regarded as laws of physics, are anthropocentric and arbitrary – and would therefore be bad explanations. But perhaps there are deeper versions, to which they are approximations, which are good explanations, well integrated with those of microscopic physics, as the second law of thermodynamics is.
In the example above, we want to determine whether 641 is prime using a computer made of dominoes.
At that point, the reductive (to low-level physics) explanation would be, in summary, ‘That domino did not fall because none of the patterns of motion initiated by knocking over the “on switch” ever include it.’ But we knew that already. We can reach that conclusion – as we just have – without going through that laborious process. And it is undeniably true. But it is not the explanation we were looking for because it is addressing a different question – predictive rather than explanatory – namely, if the first domino falls, will the output domino ever fall? And it is answering at the wrong level of emergence. What we asked was: why does it not fall? To answer that, Hofstadter then adopts a different mode of explanation, at the right level of emergence:
The second type of answer would be, ‘Because 641 is prime.’ Now this answer, while just as correct (indeed, in some sense it is far more on the mark), has the curious property of not talking about anything physical at all. Not only has the focus moved upwards to collective properties … these properties somehow transcend the physical and have to do with pure abstractions, such as primality.
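Stripped of dominoes, the computation the network instantiates is ordinary trial division. A minimal sketch (my illustration; the book itself, of course, contains no code):

```python
def is_prime(n: int) -> bool:
    """Trial division: n is prime iff no integer in 2..sqrt(n) divides it."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # a divisor found: the 'output domino' falls
        d += 1
    return True

print(is_prime(641))  # True: the output domino never falls
```

Whatever the substrate running it – dominoes, silicon, neurons – the explanation of the output is the same abstract fact: 641 has no non-trivial divisors.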
Certainly you can’t derive an ought from an is, but you can’t derive a factual theory from an is either. That is not what science does. The growth of knowledge does not consist of finding ways to justify one’s beliefs. It consists of finding good explanations. And, although factual evidence and moral maxims are logically independent, factual and moral explanations are not.
- Levels of emergence Sets of phenomena that can be explained well in terms of each other without analysing them into their constituent entities such as atoms.
- Natural numbers The whole numbers 1, 2, 3 and so on.
- Reductionism The misconception that science must or should always explain things by analysing them into components (and hence that higher-level explanations cannot be fundamental).
- Holism The misconception that all significant explanations are of components in terms of wholes rather than vice versa.
- Moral philosophy Addresses the problem of what sort of life to want.
Reductionism and holism are both mistakes. In reality, explanations do not form a hierarchy with the lowest level being the most fundamental. Rather, explanations at any level of emergence can be fundamental. Abstract entities are real, and can play a role in causing physical phenomena. Causation is itself such an abstraction.
Be that as it may, with the Enlightenment, parochialism and all arbitrary exceptions and limitations began to be regarded as inherently problematic – and not only in science. Why should the law treat an aristocrat differently from a commoner? A slave from a master? A woman from a man? Enlightenment philosophers such as Locke set out to free political institutions from arbitrary rules and assumptions. Others tried to derive moral maxims from universal moral explanations rather than merely to postulate them dogmatically. Thus universal explanatory theories of justice, legitimacy and morality began to take their place alongside universal theories of matter and motion. In all those cases, universality was being sought deliberately, as a desirable feature in its own right – even a necessary feature for an idea to be true – and not just as a means of solving a parochial problem.
The jump to universality in digital computers has left analogue computation behind. That was inevitable, because there is no such thing as a universal analogue computer. That is because of the need for error correction: during lengthy computations, the accumulation of errors due to things like imperfectly constructed components, thermal fluctuations, and random outside influences makes analogue computers wander off the intended computational path. This may sound like a minor or parochial consideration. But it is quite the opposite. Without error-correction all information processing, and hence all knowledge-creation, is necessarily bounded. Error-correction is the beginning of infinity.
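The point about error accumulation can be seen in a toy simulation (my illustration, with an assumed noise level): a value copied repeatedly with small analogue noise drifts without bound, while a digital copier that snaps each copy back to the nearest discrete level never degrades, provided each step's noise stays below half the spacing between levels.

```python
import random

random.seed(0)  # deterministic run

LEVELS = (0.0, 1.0)  # the digital 'alphabet'
NOISE = 0.3          # per-copy noise, assumed < half the level spacing

def copy_with_noise(value: float) -> float:
    return value + random.uniform(-NOISE, NOISE)

analogue = digital = 1.0
for _ in range(1000):
    # analogue copying: each step's error is added to the last
    analogue = copy_with_noise(analogue)
    # digital copying: each copy is snapped to the nearest legal level
    digital = min(LEVELS, key=lambda level: abs(level - copy_with_noise(digital)))

print(digital)  # still exactly 1.0: every step's error was corrected away
```

The analogue value, by contrast, performs a random walk away from 1.0; over a long enough computation it wanders arbitrarily far from the intended path, which is why computations of unbounded length demand digital error correction.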
Genes are replicators that can be interpreted as instructions in a genetic code. Genomes are groups of genes that are dependent on each other for replication. The process of copying a genome is called a living organism. Thus the genetic code is also a language for specifying organisms. At some point, the system switched to replicators made of DNA, which is more stable than RNA and therefore more suitable for storing large amounts of information.
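The sense in which the genetic code is a language can be made concrete: triplets of DNA bases (codons) name amino acids, and a gene is read codon by codon until a stop codon ends the 'sentence'. A toy translator, using a deliberately tiny but real subset of the standard codon table (the full table has 64 entries):

```python
# A small, real fragment of the standard genetic code (DNA codons -> amino acids).
CODON_TABLE = {
    "ATG": "Met",  # methionine; also the 'start' signal
    "TGG": "Trp",  # tryptophan
    "TTT": "Phe",  # phenylalanine
    "TAA": None, "TAG": None, "TGA": None,  # stop codons
}

def translate(dna: str) -> list[str]:
    """Read the DNA three letters at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE[dna[i:i + 3]]
        if amino is None:  # stop codon: end of the coding sequence
            break
        protein.append(amino)
    return protein

print(translate("ATGTGGTTTTAA"))  # ['Met', 'Trp', 'Phe']
```

The reach Deutsch describes is precisely that this same fixed code, which evolved to specify bacteria, suffices to specify every protein in dinosaurs and humans.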
- The jump to universality The tendency of gradually improving systems to undergo a sudden large increase in functionality, becoming universal in some domain.
All knowledge growth is by incremental improvement, but in many fields there comes a point when one of the incremental improvements in a system of knowledge or technology causes a sudden increase in reach, making it a universal system in the relevant domain. In the past, innovators who brought about such a jump to universality had rarely been seeking it, but since the Enlightenment they have been, and universal explanations have been valued both for their own sake and for their usefulness. Because error-correction is essential in processes of potentially unlimited length, the jump to universality only ever happens in digital systems.
Some abilities of humans that are commonly included in that constellation associated with general-purpose intelligence do not belong in it. One of them is self-awareness – as evidenced by such tests as recognizing oneself in a mirror. Some people are unaccountably impressed when various animals are shown to have that ability. But there is nothing mysterious about it: a simple pattern-recognition program would confer it on a computer. The same is true of tool use, the use of language for signalling (though not for conversation in the Turing-test sense), and various emotional responses (though not the associated qualia). At the present state of the field, a useful rule of thumb is: if it can already be programmed, it has nothing to do with intelligence in Turing’s sense. Conversely, I have settled on a simple test for judging claims, including Dennett’s, to have explained the nature of consciousness (or any other computational task): if you can’t program it, you haven’t understood it.
In any case, we should expect AI to be achieved in a jump to universality, starting from something much less powerful. In contrast, the ability to imitate a human imperfectly or in specialized functions is not a form of universality.
Artificial evolution makes us think that if we have variation and selection, then evolution (of adaptations) will automatically happen, just as artificial-intelligence research assumes that simulating the right behaviour will produce intelligence. But neither is necessarily so. In both cases, another possibility is that no knowledge at all will be created during the running of the program, only during its development by the programmer.
That is why I doubt that any ‘artificial evolution’ has ever created knowledge. I have the same view, for the same reasons, about the slightly different kind of ‘artificial evolution’ that tries to evolve simulated organisms in a virtual environment, and the kind that pits different virtual species against each other.
To test this proposition, I would like to see an experiment of a slightly different kind: eliminate the graduate student from the project. Then, instead of using a robot designed to evolve better ways of walking, use a robot that is already in use in some real-life application and happens to be capable of walking. And then, instead of creating a special language of subroutines in which to express conjectures about how to walk, just replace its existing program, in its existing microprocessor, by random numbers. For mutations, use errors of the type that happen anyway in such processors (though in the simulation you are allowed to make them happen as often as you like). The purpose of all that is to eliminate the possibility that human knowledge is being fed into the design of the system, and that its reach is being mistaken for the product of evolution. Then, run simulations of that mutating system in the usual way. As many as you like. If the robot ever walks better than it did originally, then I am mistaken. If it continues to improve after that, then I am very much mistaken.
We do not know why the DNA code, which evolved to describe bacteria, has enough reach to describe dinosaurs and humans. And, although it seems obvious that an AI will have qualia and consciousness, we cannot explain those things. So long as we cannot explain them, how can we expect to simulate them in a computer program? Or why should they emerge effortlessly from projects designed to achieve something else?
- Quale (plural qualia) The subjective aspect of a sensation.
- Behaviourism Instrumentalism applied to psychology. The doctrine that science can (or should) only measure and predict people’s behaviour in response to stimuli.
The field of artificial (general) intelligence has made no progress because there is an unsolved philosophical problem at its heart: we do not understand how creativity works. Once that has been solved, programming it will not be difficult. Even artificial evolution may not have been achieved yet, despite appearances. There the problem is that we do not understand the nature of the universality of the DNA replication system.
The best explanation of anything eventually involves universality, and therefore infinity.
Turing initially set up the theory of computation not for the purpose of building computers, but to investigate the nature of mathematical proof. Hilbert in 1900 had challenged mathematicians to formulate a rigorous theory of what constitutes a proof, and one of his conditions was that proofs must be finite: they must use only a fixed and finite set of rules of inference; they must start with a finite number of finitely expressed axioms, and they must contain only a finite number of elementary steps – where the steps are themselves finite. Computations, as understood in Turing’s theory, are essentially the same thing as proofs: every valid proof can be converted to a computation that computes the conclusion from the premises, and every correctly executed computation is a proof that the output is the outcome of the given operations on the input.
Now, a computation can also be thought of as computing a function that takes an arbitrary natural number as its input and delivers an output that depends in a particular way on that input. So, for instance, doubling a number is a function. Infinity Hotel typically tells guests to change rooms by specifying a function and telling them all to compute it with different inputs (their room numbers). One of Turing’s conclusions was that almost all mathematical functions that exist logically cannot be computed by any program. They are ‘non-computable’ for the same reason that most logically possible reallocations of rooms in Infinity Hotel cannot be effected by any instruction by the management: the set of all functions is uncountably infinite, while the set of all programs is merely countably infinite. (That is why it is meaningful to say that ‘almost all’ members of the infinite set of all functions have a particular property.) Hence also – as the mathematician Kurt Gödel had discovered using a different approach to Hilbert’s challenge – almost all mathematical truths have no proofs. They are unprovable truths.
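The counting argument can be mimicked directly: present any purported enumeration of functions from the natural numbers to the natural numbers, and diagonalization constructs a function that differs from the n-th one at input n, so it appears nowhere in the list. A sketch (my illustration, with Python functions standing in for the management's instructions):

```python
from typing import Callable

Fn = Callable[[int], int]

def diagonal(enumeration: Callable[[int], Fn]) -> Fn:
    """Return a function guaranteed to differ from enumeration(n) at input n."""
    return lambda n: enumeration(n)(n) + 1

# A toy 'list of all functions': the n-th function multiplies by n
# (doubling, Infinity Hotel's favourite instruction, is entry number 2).
def enumeration(n: int) -> Fn:
    return lambda k: n * k

g = diagonal(enumeration)
# g disagrees with every listed function: g(n) = n*n + 1, but enumeration(n)(n) = n*n
print(g(2), enumeration(2)(2))  # 5 4
```

Since programs are finite strings over a finite alphabet and hence countable, while this argument defeats every countable enumeration of functions, almost all functions have no program: they are non-computable.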
It also follows that almost all mathematical statements are undecidable: there is no proof that they are true, and no proof that they are false. Each of them is either true or false, but there is no way of using physical objects such as brains or computers to discover which is which. The laws of physics provide us with only a narrow window through which we can look out on the world of abstractions.
So, a computation or a proof is a physical process in which objects such as computers or brains physically model or instantiate abstract entities like numbers or equations, and mimic their properties. It is our window on the abstract. It works because we use such entities only in situations where we have good explanations saying that the relevant physical variables in those objects do indeed instantiate those abstract properties.
Consequently, the reliability of our knowledge of mathematics remains for ever subsidiary to that of our knowledge of physical reality. Every mathematical proof depends absolutely for its validity on our being right about the rules that govern the behaviour of some physical objects, like computers, or ink and paper, or brains. So, contrary to what Hilbert thought, and contrary to what most mathematicians since antiquity have believed and believe to this day, proof theory can never be made into a branch of mathematics. Proof theory is a science: specifically, it is computer science.
So there is something special – infinitely special, it seems – about the laws of physics as we actually find them, something exceptionally computation-friendly, prediction-friendly and explanation-friendly. The physicist Eugene Wigner called this ‘the unreasonable effectiveness of mathematics in the natural sciences’. For the reasons I have given, anthropic arguments alone cannot explain it. Something else will.
This problem seems to attract bad explanations. Just as religious people tend to see Providence in the unreasonable effectiveness of mathematics in science, and some evolutionists see the signature of evolution, and some cosmologists see anthropic selection effects, so some computer scientists and programmers see a great computer in the sky. For instance, one version of that idea is that the whole of what we usually think of as reality is merely virtual reality: a program running on a gigantic computer – a Great Simulator. On the face of it, this might seem a promising approach to explaining the connections between physics and computation: perhaps the reason the laws of physics are expressible in terms of computer programs is that they are in fact computer programs. Perhaps the existence of computational universality in our world is a special case of the ability of computers (in this case the Great Simulator) to emulate other computers – and so on.
But that explanation is a chimera. An infinite regress. For it entails giving up on explanation in science. It is in the very nature of computational universality that, if we and our world were composed of software, we would have no means of understanding the real physics – the physics underlying the hardware of the Great Simulator. A different way of putting computation at the heart of physics, and to resolve the ambiguities of anthropic reasoning, is to imagine that all possible computer programs are running. What we think of as reality is just virtual reality generated by one or more of those programs. Then we define ‘common’ and ‘uncommon’ in terms of an average over all those programs, counting programs in order of their lengths (how many elementary operations each contains). But again that assumes that there is a preferred notion of what an ‘elementary operation’ is. Since the length and complexity of a program are entirely dependent on the laws of physics, this theory again requires an external world in which those computers run – a world that would be unknowable to us.
Both those approaches fail because they attempt to reverse the direction of the real explanatory connection between physics and computation. They seem plausible only because they rely on that standard mistake of Zeno’s, applied to computation: the misconception that the set of classically computable functions has an a-priori privileged status within mathematics. But it does not. The only thing that privileges that set of operations is that it is instantiated in the laws of physics. The whole point of universality is lost if one conceives of computation as being somehow prior to the physical world, generating its laws. Computational universality is all about computers inside our physical world being related to each other under the universal laws of physics to which we (thereby) have access.
- One-to-one correspondence Tallying each member of one set with each member of another.
- Infinite (mathematical) A set is infinite if it can be placed in one-to-one correspondence with part of itself.
- Infinite (physical) A rather vague concept meaning something like ‘larger than anything that could in principle be encompassed by experience’.
- Countably infinite Infinite, but small enough to be placed in one-to-one correspondence with the natural numbers.
- Measure A method by which a theory gives meaning to proportions and averages of infinite sets of things, such as universes.
- Singularity A situation in which something physical becomes unboundedly large, while remaining everywhere finite.
- Multiverse A unified physical entity that contains more than one universe.
- Infinite regress A fallacy in which an argument or explanation depends on a sub-argument of the same form which purports to address essentially the same problem as the original argument.
- Computation A physical process that instantiates the properties of some abstract entity.
- Proof A computation which, given a theory of how the computer on which it runs works, establishes the truth of some abstract proposition.
We can understand infinity through the infinite reach of some explanations. It makes sense, both in mathematics and in physics. But it has counter-intuitive properties, some of which are illustrated by Hilbert’s thought experiment of Infinity Hotel. One of them is that, if unlimited progress really is going to happen, not only are we now at almost the very beginning of it, we always shall be. Cantor proved, with his diagonal argument, that there are infinitely many levels of infinity, of which physics uses at most the first one or two: the infinity of the natural numbers and the infinity of the continuum. Where there are infinitely many identical copies of an observer (for instance in multiple universes), probability and proportions do not make sense unless the collection as a whole has a structure subject to laws of physics that give them meaning. A mere infinite sequence of universes, like the rooms in Infinity Hotel, does not have such structure, which means that anthropic reasoning by itself is insufficient to explain the apparent ‘fine-tuning’ of the constants of physics. Proof is a physical process: whether a mathematical proposition is provable or unprovable, decidable or undecidable, depends on the laws of physics, which determine which abstract entities and relationships are modelled by physical objects. Similarly, whether a task or pattern is simple or complex depends on what the laws of physics are.
People in 1900 did not consider the internet or nuclear power unlikely: they did not conceive of them at all.
No good explanation can predict the outcome, or the probability of an outcome, of a phenomenon whose course is going to be significantly affected by the creation of new knowledge. This is a fundamental limitation on the reach of scientific prediction, and, when planning for the future, it is vital to come to terms with it. Following Popper, I shall use the term prediction for conclusions about future events that follow from good explanations, and prophecy for anything that purports to know what is not yet knowable. Trying to know the unknowable leads inexorably to error and self-deception. Among other things, it creates a bias towards pessimism. For example, in 1894 the physicist Albert Michelson made the following prophecy about the future of physics:
The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote … Our future discoveries must be looked for in the sixth place of decimals.
Albert Michelson, address at the opening of the Ryerson Physical Laboratory, University of Chicago, 1894.
The science-fiction author Greg Bear has written some exciting novels based on the premise that the galaxy is full of civilizations that are either predators or prey, and in both cases are hiding.
This would solve the mystery of Fermi’s problem. But it is implausible as a serious explanation. For one thing, it depends on civilizations becoming convinced of the existence of predator civilizations in space, and totally reorganizing themselves in order to hide from them, before being noticed – which means before they have even invented, say, radio.
Hawking’s proposal also overlooks various dangers of not making our existence known to the galaxy, such as being inadvertently wiped out if benign civilizations send robots to our solar system, perhaps to mine what they consider an uninhabited system. And it rests on other misconceptions in addition to that classic flaw of blind pessimism. One is the Spaceship Earth idea on a larger scale: the assumption that progress in a hypothetical rapacious civilization is limited by raw materials rather than by knowledge. What exactly would it come to steal? Gold? Oil? Perhaps our planet’s water? Surely not, since any civilization capable of transporting itself here, or raw materials back across galactic distances, must already have cheap transmutation and hence does not care about the chemical composition of its raw materials. So essentially the only resource of use to it in our solar system would be the sheer mass of matter in the sun. But matter is available in every star. Perhaps it is collecting entire stars wholesale in order to make a giant black hole as part of some titanic engineering project. But in that case it would cost it virtually nothing to omit inhabited solar systems (which are presumably a small minority, otherwise it is pointless for us to hide in any case); so would it casually wipe out billions of people? Would we seem like insects to it? This can seem plausible only if one forgets that there can be only one type of person: universal explainers and constructors. The idea that there could be beings that are to us as we are to animals is a belief in the supernatural.
Moreover, there is only one way of making progress: conjecture and criticism. And the only moral values that permit sustained progress are the objective values that the Enlightenment has begun to discover. No doubt the extraterrestrials’ morality is different from ours; but that will not be because it resembles that of the conquistadors.
Nor would we be in serious danger of culture shock from contact with an advanced civilization: it will know how to educate its own children (or AIs), so it will know how to educate us – and, in particular, to teach us how to use its computers.
The question about the sources of our knowledge has always been asked in the spirit of: ‘What are the best sources of our knowledge – the most reliable ones, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?’ I propose to assume, instead, that no such ideal sources exist – no more than ideal rulers – and that all ‘sources’ are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question: ‘How can we hope to detect and eliminate error?’
‘Knowledge without Authority’ (1960)
The question ‘How can we hope to detect and eliminate error?’ is echoed by Feynman’s remark that ‘science is what we have learned about how to keep from fooling ourselves’.
Ideas have consequences, and the ‘who should rule?’ approach to political philosophy is not just a mistake of academic analysis: it has been part of practically every bad political doctrine in history. If the political process is seen as an engine for putting the right rulers in power, then it justifies violence, for until that right system is in place, no ruler is legitimate; and once it is in place, and its designated rulers are ruling, opposition to them is opposition to rightness. The problem then becomes how to thwart anyone who is working against the rulers or their policies. By the same logic, everyone who thinks that existing rulers or policies are bad must infer that the ‘who should rule?’ question has been answered wrongly, and therefore that the power of the rulers is not legitimate, and that opposing it is legitimate, by force if necessary. Thus the very question ‘Who should rule?’ begs for violent, authoritarian answers, and has often received them. It leads those in power into tyranny, and to the entrenchment of bad rulers and bad policies; it leads their opponents to violent destructiveness and revolution.
Thus, systems of government are to be judged not for their prophetic ability to choose and install good leaders and policies, but for their ability to remove bad ones that are already there.
The Principle of Optimism
All evils are caused by insufficient knowledge.
Optimism is, in the first instance, a way of explaining failure, not prophesying success. It says that there is no fundamental barrier, no law of nature or supernatural decree, preventing progress. Whenever we try to improve things and fail, it is not because the spiteful (or unfathomably benevolent) gods are thwarting us or punishing us for trying, or because we have reached a limit on the capacity of reason to make improvements, or because it is best that we fail, but always because we did not know enough, in time. But optimism is also a stance towards the future, because nearly all failures, and nearly all successes, are yet to come.
Pessimism has been endemic in almost every society throughout history. It has taken the form of the precautionary principle, and of ‘who should rule?’ political philosophies and all sorts of other demands for prophecy, and of despair in the power of creativity, and of the misinterpretation of problems as insuperable barriers. Yet there have always been a few individuals who see obstacles as problems, and see problems as soluble. And so, very occasionally, there have been places and moments when there was, briefly, an end to pessimism. As far as I know, no historian has investigated the history of optimism, but my guess is that whenever it has emerged in a civilization there has been a mini-enlightenment: a tradition of criticism resulting in an efflorescence of many of the patterns of human progress with which we are familiar, such as art, literature, philosophy, science, technology and the institutions of an open society. The end of pessimism is potentially a beginning of infinity. Yet I also guess that in every case – with the single, tremendous exception (so far) of our own Enlightenment – this process was soon brought to an end and the reign of pessimism was restored.
- Blind optimism (recklessness, overconfidence) Proceeding as if one knew that bad outcomes will not happen.
- Blind pessimism (precautionary principle) Avoiding everything not known to be safe.
- The principle of optimism All evils are caused by insufficient knowledge.
- Wealth The repertoire of physical transformations that one is capable of causing.
Optimism (in the sense that I have advocated) is the theory that all failures – all evils – are due to insufficient knowledge. This is the key to the rational philosophy of the unknowable. It would be contentless if there were fundamental limitations to the creation of knowledge, but there are not. It would be false if there were fields – especially philosophical fields such as morality – in which there were no such thing as objective progress. But truth does exist in all those fields, and progress towards it is made by seeking good explanations. Problems are inevitable, because our knowledge will always be infinitely far from complete. Some problems are hard, but it is a mistake to confuse hard problems with problems unlikely to be solved. Problems are soluble, and each particular evil is a problem that can be solved. An optimistic civilization is open and not afraid to innovate, and is based on traditions of criticism. Its institutions keep improving, and the most important knowledge that they embody is knowledge of how to detect and eliminate errors. There may have been many short-lived enlightenments in history. Ours has been uniquely long-lived.
The knowledge that you seek – objective knowledge – is hard to come by, but attainable. That mental state that you do not seek – justified belief – is sought by many people, especially priests and philosophers. But, in truth, beliefs cannot be justified, except in relation to other beliefs, and even then only fallibly. So the quest for their justification can lead only to an infinite regress – each step of which would itself be subject to error.
HERMES: But that is not what I asked. I asked what is here before your eyes. In reality.
SOCRATES: All right. Before my eyes, in reality, there is – a small room. Or, if you want a literal reply, what is before my eyes is – eyelids, since I expect that they are shut. Yet I see from your expression that you want even more precision. Very well: before my eyes are the inside surfaces of my eyelids.
HERMES: And can you see those? In other words, is it really ‘easy to see’ what is before your eyes?
SOCRATES: Not at the moment. But that is only because I am dreaming.
HERMES: Is it only because you are dreaming? Are you saying that if you were awake you would now be seeing the inside surfaces of your eyelids?
SOCRATES: [carefully] If I were awake with my eyes still closed, then yes.
HERMES: What colour do you see when you close your eyes?
SOCRATES: In a room as dimly lit as this one – black.
HERMES: Do you think that the inside surfaces of your eyelids are black?
SOCRATES: I suppose not.
HERMES: So would you really be seeing them?
SOCRATES: Not exactly.
HERMES: And if you were to open your eyes, would you be able to see the room?
SOCRATES: Only very vaguely. It is dark.
HERMES: So I ask again: is it true that, if you were awake, you could easily see what was before your eyes?
SOCRATES: All right – not always. But nevertheless, when I am awake, and with my eyes open, and in bright light –
HERMES: But not too bright, I suppose?
SOCRATES: Yes, yes. If you want to keep quibbling, I must accept that when one is dazzled by the sun one may see even less well than in the dark. Likewise one may see one’s own face behind a mirror where there is in reality only empty space. One may sometimes see a mirage, or be fooled by a pile of crumpled clothes that happens to resemble a mythical creature –
HERMES: Or one may be fooled by dreaming of one …
SOCRATES: [Smiles.] Quite so. And, conversely, whether sleeping or waking, we often fail to see things that are there in reality.
HERMES: You have no idea how many such things there are.
HERMES: Indeed, most guesses are not new knowledge. Although guesswork is the origin of all knowledge, it is also a source of error, and therefore what happens to an idea after it has been guessed is crucial.
SOCRATES: So – let me combine that insight with what I know of criticism. A guess might come from a dream, or it might just be a wild speculation or random combination of ideas, or anything. But then we do not just accept it blindly or because we imagine it is ‘authorized’, or because we want it to be true. Instead we criticize it and try to discover its flaws.
HERMES: Yes. That is what you should do, at any rate.
SOCRATES: Then we try to remedy those flaws by altering the idea, or dropping it in favour of others – and the alterations and other ideas are themselves guesses. And are themselves criticized. Only when we fail in these attempts either to reject or to improve an idea do we provisionally accept it.
HERMES: That can work. Unfortunately, people do not always do what can work.
SOCRATES: But am I really to accept that I myself – the thinking being that I call ‘I’ – has no direct knowledge of the physical world at all, but can only receive arcane hints of it through flickers and shadows that happen to impinge on my eyes and other senses? And that what I experience as reality is never more than a waking dream, composed of conjectures originating from within myself?
HERMES: Do you have an alternative explanation?
SOCRATES: No! And the more I contemplate this one, the more delighted I become. (A sensation of which I should beware! Yet I am also persuaded.) Everyone knows that man is the paragon of animals. But if this epistemology you tell me is true, then we are infinitely more marvellous creatures than that. Here we sit, for ever imprisoned in the dark, almost-sealed cave of our skull, guessing. We weave stories of an outside world – worlds, actually: a physical world, a moral world, a world of abstract geometrical shapes, and so on – but we are not satisfied with merely weaving, nor with mere stories. We want true explanations. So we seek explanations that remain robust when we test them against those flickers and shadows, and against each other, and against criteria of logic and reasonableness and everything else we can think of. And when we can change them no more, we have understood some objective truth. And, as if that were not enough, what we understand we then control. It is like magic, only real. We are like gods.
CHAEREPHON: Socrates, I think I know plenty of Athenians who do not seek improvement! We have many politicians who think they’re perfect. And many sophists who think they know everything.
SOCRATES: But what, specifically, do those politicians believe to be perfect? Their own grandiose plans for how to improve the city. Similarly, each sophist believes that everyone should adopt his ideas, which he sees as an improvement over everything that has been believed before. The laws and customs of Athens are set up to accommodate all these many rival ideas of perfection (as well as more modest proposals for improvement), to subject them to criticism, to winnow out from them what may be the few tiny seeds of truth, and to test out those that seem the most promising. Thus those myriad individuals who can conceive of no improvement of themselves nevertheless add up to a city that relentlessly seeks nothing else for itself, day and night.
CHAEREPHON: Yes, I see.
SOCRATES: In Sparta there are no such politicians, and no such sophists. And no gadflies such as me, because any Spartan who did doubt or disapprove of the way things have always been done would keep it to himself. What few new ideas they have are intended to sustain the city more securely in its current state. As for war, I know that there are Spartans who glory in war, and would love to conquer and enslave the whole world, just as they once set out to conquer their neighbours. Yet the institutions of their city, and the deep assumptions that are built into the minds of even the hotheads, embody a visceral fear of any such step into the unknown. Perhaps it is significant that the statue of Ares that stands outside Sparta represents him chained, so that he will always be there to protect the city. Is that not the same as preventing the god of violence from breaking discipline? From being loosed upon the world to cause random mayhem, with its terrifying risk of change?
But a good plot always rests, implicitly or explicitly, on good explanations of how and why events happen, given its fictional premises.
It is a rather counter-intuitive fact that if objects are merely identical (in the sense of being exact copies), and obey deterministic laws that make no distinction between them, then they can never become different; but fungible objects, which on the face of it are even more alike, can. This is the first of those weird properties of fungibility that Leibniz never thought of, and which I consider to be at the heart of the phenomena of quantum physics.
The effects of a wave of differentiation usually diminish rapidly with distance – simply because physical effects in general do. The sun, from even a hundredth of a light year away, looks like a cold, bright dot in the sky. It barely affects anything. Nor, from a thousand light years away, does a supernova. Even the most violent of quasar jets, when viewed from a neighbouring galaxy, would be little more than an abstract painting in the sky. There is only one known phenomenon which, if it ever occurred, would have effects that did not fall off with distance, and that is the creation of a certain type of knowledge, namely a beginning of infinity. Indeed, knowledge can aim itself at a target, travel vast distances having scarcely any effect, and then utterly transform the destination.
Hence, for instance, an individual electron always has a range of different locations and a range of different speeds and directions of motion. As a result, its typical behaviour is to spread out gradually in space. Its quantum-mechanical law of motion resembles the law governing the spread of an ink blot – so if it is initially located in a very small region it spreads out rapidly, and the larger it gets the more slowly it spreads. The entanglement information that it carries ensures that no two instances of it can ever contribute to the same history. (Or, more precisely, at times and places where there are histories, it exists in instances which can never collide.) If a particle’s range of speeds is centred not on zero but on some other value, then the whole of the ‘ink blot’ moves, with its centre obeying approximately the laws of motion in classical physics. In quantum physics this is how motion, in general, works.
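The ‘ink blot’ law of spreading described above can be checked numerically. This is my illustration, not the book’s: it uses the standard free-particle Gaussian wavepacket result, in which an initial width σ₀ grows as σ(t) = σ₀·√(1 + (ħt/2mσ₀²)²) — so a tightly localized electron spreads out rapidly, while a large blot barely spreads at all.

```python
# Numerical check (mine, not the book's) of the 'ink blot' spreading law,
# using the standard free-particle Gaussian wavepacket width formula.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg

def spread(sigma0, t):
    """Width of an initially Gaussian electron wavepacket after time t.

    sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0^2))^2)
    """
    return sigma0 * (1 + (HBAR * t / (2 * M_E * sigma0**2))**2) ** 0.5

t = 1e-12  # one picosecond
print(spread(1e-10, t))  # atom-sized blot: grows thousands-fold, to ~5.8e-7 m
print(spread(1e-6, t))   # micron-sized blot: essentially unchanged
```

Note how the asymmetry matches the text: the smaller the initial region, the faster the spread, and the larger the blot gets, the more slowly it continues to grow.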
This explains how particles in the same history can be fungible too, in something like an atomic laser. Two ‘ink-blot’ particles, each of which is a multiversal object, can coincide perfectly in space, and their entanglement information can be such that no two of their instances are ever at the same point in the same history. Now, put a proton into the middle of that gradually spreading cloud of instances of a single electron. The proton has a positive charge, which attracts the negatively charged electron. As a result, the cloud stops spreading when its size is such that its tendency to spread outwards due to its uncertainty-principle diversity is exactly balanced by its attraction to the proton. The resulting structure is called an atom of hydrogen.
Historically, this explanation of what atoms are was one of the first triumphs of quantum theory, for atoms could not exist at all according to classical physics. An atom consists of a positively charged nucleus surrounded by negatively charged electrons. But positive and negative charges attract each other and, if unrestrained, accelerate towards each other, emitting energy in the form of electromagnetic radiation as they go. So it used to be a mystery why the electrons do not ‘fall’ on to the nucleus in a flash of radiation. Neither the nucleus nor the electrons individually have more than one ten-thousandth of the diameter of the atom, so what keeps them so far apart? And what makes atoms stable at that size? In non-technical accounts, the structure of atoms is sometimes explained by analogy with the solar system: one imagines electrons in orbit around the nucleus like planets around the sun. But that does not match the reality. For one thing, gravitationally bound objects do slowly spiral in, emitting gravitational radiation (the process has been observed for binary neutron stars), and the corresponding electromagnetic process in an atom would be over in a fraction of a second. For another, the existence of solid matter, which consists of atoms packed closely together, is evidence that atoms cannot easily penetrate each other, yet solar systems certainly could. Furthermore, it turns out that, in the hydrogen atom, the electron in its lowest-energy state is not orbiting at all but, as I said, just sitting there like an ink blot – its uncertainty-principle tendency to spread exactly balanced by the electrostatic force. In this way, the phenomena of interference and diversity within fungibility are integral to the structure and stability of all static objects, including all solid bodies, just as they are integral to all motion.
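The balance described above — the uncertainty-principle tendency to spread exactly offsetting the electrostatic attraction — can be estimated with a back-of-envelope calculation. This sketch is mine, not Deutsch’s: confining an electron to a region of radius r costs kinetic energy of order ħ²/2mr², while attraction to the proton contributes −e²/4πε₀r; the radius that minimizes the total is the Bohr radius, the observed size of the hydrogen atom.

```python
# Order-of-magnitude check (my sketch, not from the book) of the balance
# between uncertainty-principle spreading and electrostatic attraction
# that sets the size of the hydrogen atom.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

K = E**2 / (4 * math.pi * EPS0)  # Coulomb constant times e^2

def total_energy(r):
    """Confinement (spreading) energy plus electrostatic energy at radius r."""
    return HBAR**2 / (2 * M_E * r**2) - K / r

# Setting dE/dr = 0 gives r = hbar^2 / (m * K): the Bohr radius.
bohr = HBAR**2 / (M_E * K)
print(f"equilibrium radius: {bohr:.3e} m")  # ~5.29e-11 m, about half an angstrom
```

The resulting radius, about 5.3 × 10⁻¹¹ m, is some ten thousand times the size of the nucleus — which is exactly the mystery of atomic size that classical physics could not explain.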
This mechanism is ubiquitous in quantum physics, and is the general means by which transitions between discrete states happen in a continuous way. In classical physics, a ‘tiny effect’ always means a tiny change in some measurable quantities. In quantum physics, physical variables are typically discrete and so cannot undergo tiny changes. Instead, a ‘tiny effect’ means a tiny change in the proportions that have the various discrete attributes.
This also raises the issue of whether time itself is a continuous variable. In this discussion I am assuming that it is. However, the quantum mechanics of time is not yet fully understood, and will not be until we have a quantum theory of gravity (the unification of quantum theory with the general theory of relativity), so it may turn out that things are not as simple as that. One thing we can be fairly sure of, though, is that, in that theory, different times are a special case of different universes. In other words, time is an entanglement phenomenon, which places all equal clock readings (of correctly prepared clocks – or of any objects usable as clocks) into the same history. This was first understood by the physicists Don Page and William Wootters, in 1983.
We are channels of information flow. So are histories, and so are all relatively autonomous objects within histories; but we sentient beings are extremely unusual channels, along which (sometimes) knowledge grows. This can have dramatic effects, not only within a history (where it can, for instance, have effects that do not diminish with distance), but also across the multiverse. Since the growth of knowledge is a process of error-correction, and since there are many more ways of being wrong than right, knowledge-creating entities rapidly become more alike in different histories than other entities. As far as is known, knowledge-creating processes are unique in both these respects: all other effects diminish with distance in space, and become increasingly different across the multiverse, in the long run.
But that is only as far as is known. Here is an opportunity for some wild speculations that could inform a science-fiction story. What if there is something other than information flow that can cause coherent, emergent phenomena in the multiverse? What if knowledge, or something other than knowledge, could emerge from that, and begin to have purposes of its own, and to conform the multiverse to those purposes, as we do? Could we communicate with it? Presumably not in the usual sense of the term, because that would be information flow; but perhaps the story could propose some novel analogue of communication which, like quantum interference, did not involve sending messages. Would we be trapped in a war of mutual extermination with such an entity? Or is it possible that we could nevertheless have something in common with it? Let us shun parochial resolutions of the issue – such as a discovery that what bridges the barrier is love, or trust. But let us remember that, just as we are at the top rank of significance in the great scheme of things, anything else that could create explanations would be too. And there is always room at the top.
- Fungible Identical in every respect.
- The world The whole of physical reality.
- Multiverse The world, according to quantum theory.
- Universe Universes are quasi-autonomous regions of the multiverse.
- History A set of fungible universes, over time. One can also speak of the history of parts of a universe.
- Parallel universes A somewhat misleading way of referring to the multiverse. Misleading because the universes are not perfectly ‘parallel’ (autonomous), and because the multiverse has much more structure – especially fungibility, entanglement and the measures of histories.
- Instances In parts of the multiverse that contain universes, each multiversal object consists approximately of ‘instances’, some identical, some not, one in each of the universes.
- Quantum The smallest possible change in a discrete physical variable.
- Entanglement Information in each multiversal object that determines which parts (instances) of it can affect which parts of other multiversal objects.
- Decoherence The process of its becoming infeasible to undo the effect of a wave of differentiation between universes.
- Quantum interference Phenomena caused by non-fungible instances of a multiversal object becoming fungible.
- Uncertainty principle The (badly misnamed) implication of quantum theory that, for any fungible collection of instances of a physical object, some of their attributes must be diverse.
- Quantum computation Computation in which the flow of information is not confined to a single history.
The physical world is a multiverse, and its structure is determined by how information flows in it. In many regions of the multiverse, information flows in quasi-autonomous streams called histories, one of which we call our ‘universe’. Universes approximately obey the laws of classical (pre-quantum) physics. But we know of the rest of the multiverse, and can test the laws of quantum physics, because of the phenomenon of quantum interference. Thus a universe is not an exact but an emergent feature of the multiverse. One of the most unfamiliar and counter-intuitive things about the multiverse is fungibility. The laws of motion of the multiverse are deterministic, and apparent randomness is due to initially fungible instances of objects becoming different. In quantum physics, variables are typically discrete, and how they change from one value to another is a multiversal process involving interference and fungibility.
In regard to the unobserved processes between observations, where both Schrödinger’s and Heisenberg’s theories seemed to be describing a multiplicity of histories happening at once, Bohr proposed a new fundamental principle of nature, the ‘principle of complementarity’. It said that accounts of phenomena could be stated only in ‘classical language’ – meaning language that assigned single values to physical variables at any one time – but classical language could be used only in regard to some variables, including those that had just been measured. One was not permitted to ask what values the other variables had. Thus, for instance, in response to the question ‘Which path did the photon take?’ in the Mach–Zehnder interferometer, the reply would be that there is no such thing as which path when the path is not observed. In response to the question ‘Then how does the photon know which way to turn at the final mirror, since this depends on what happened on both paths?’, the reply would be an equivocation called ‘particle–wave duality’: the photon is both an extended (non-zero volume) and a localized (zero-volume) object at the same time, and one can choose to observe either attribute but not both. Often this is expressed in the saying ‘It is both a wave and a particle simultaneously.’ Ironically, there is a sense in which those words are precisely true: in that experiment the entire multiversal photon is indeed an extended object (wave), while instances of it (particles, in histories) are localized. Unfortunately, that is not what is meant in the Copenhagen interpretation. There the idea is that quantum physics defies the very foundations of reason: particles have mutually exclusive attributes, period. And it dismisses criticisms of the idea as invalid because they constitute attempts to use ‘classical language’ outside its proper domain (namely describing outcomes of measurements).
And in Dublin in 1952 Schrödinger gave a lecture in which at one point he jocularly warned his audience that what he was about to say might ‘seem lunatic’. It was that, when his equation seems to be describing several different histories, they are ‘not alternatives but all really happen simultaneously’. This is the earliest known reference to the multiverse. Here was an eminent physicist joking that he might be considered mad. Why? For claiming that his own equation – the very one for which he had won the Nobel prize – might be true.
I have said that empiricism initially played a positive role in the history of ideas by providing a defence against traditional authorities and dogma, and by attributing a central role – albeit the wrong one – to experiment in science. At first, the fact that empiricism is an impossible account of how science works did almost no harm, because no one took it literally. Whatever scientists may have said about where their discoveries came from, they eagerly addressed interesting problems, conjectured good explanations, tested them, and only lastly claimed to have induced the explanations from experiment. The bottom line was that they succeeded: they made progress. Nothing prevented that harmless (self-)deception, and nothing was inferred from it.
Gradually, though, empiricism did begin to be taken literally, and so began to have increasingly harmful effects. For instance, the doctrine of positivism, developed during the nineteenth century, tried to eliminate from scientific theories everything that had not been ‘derived from observation’. Now, since nothing is ever derived from observation, what the positivists tried to eliminate depended entirely on their own whims and intuitions. Occasionally these were even good. For instance, the physicist Ernst Mach (father of Ludwig Mach of the Mach–Zehnder interferometer), who was also a positivist philosopher, influenced Einstein, spurring him to eliminate untested assumptions from physics – including Newton’s assumption that time flows at the same rate for all observers. That happened to be an excellent idea. But Mach’s positivism also caused him to oppose the resulting theory of relativity, essentially because it claimed that spacetime really exists even though it cannot be ‘directly’ observed. Mach also resolutely denied the existence of atoms, because they were too small to observe. We laugh at this silliness now – when we have microscopes that can see atoms – but the role of philosophy should have been to laugh at it then.
in genuine science, one can claim to have measured a quantity only when one has an explanatory theory of how and why the measurement procedure should reveal its value, and with what accuracy.
Suppose you count the number of people coming out as well. If you have an explanatory theory saying that the museum is always empty at night, and that no one enters or leaves other than through the doors, and that visitors are never created, destroyed, split or merge, and so on, then one possible use for the outgoing count is to check the ingoing one: you would predict that they should be the same. Then, if they are not the same, you will have an estimate of the accuracy of your count. That is good science. In fact reporting your result without also making an accuracy estimate makes your report strictly meaningless. But unless you have an explanatory theory of the interior of the museum – which you never see – you cannot use the outgoing count, or anything else, to estimate your error.
Now, suppose you are doing your study using explanationless science instead – which really means science with unstated, uncriticized explanations, just as the Copenhagen interpretation really assumed that there was only one unobserved history connecting successive observations. Then you might analyse the results as follows. For each day, subtract the count of people entering from the count of those leaving. If the difference is not zero, then – and this is the key step in the study – call that difference the ‘spontaneous-human-creation count’ if it is positive, or the ‘spontaneous-human-destruction count’ if it is negative. If it is exactly zero, call it ‘consistent with conventional physics’.
The less competent your counting and tabulating are, the more often you will find those ‘inconsistencies with conventional physics’. Next, prove that non-zero results (the spontaneous creation or destruction of human beings) are inconsistent with conventional physics. Include this proof in your report, but also include a concession that extraterrestrial visitors would probably be able to harness physical phenomena of which we are unaware. Also, that teleportation to or from another location would be mistaken for ‘destruction’ (without trace) and ‘creation’ (out of thin air) in your experiment and that therefore this cannot be ruled out as a possible cause of the anomalies.
When headlines appear of the form ‘Teleportation Possibly Observed in City Museum, Say Scientists’ and ‘Scientists Prove Alien Abduction is Real,’ protest mildly that you have claimed no such thing, that your results are not conclusive, merely suggestive, and that more studies are needed to determine the mechanism of this perplexing phenomenon.
Consequently, as soon as scientists allow themselves to stop demanding good explanations and consider only whether a prediction is accurate or inaccurate, they are liable to make fools of themselves. This is the means by which a succession of eminent physicists over the decades have been fooled by conjurers into believing that various conjuring tricks have been done by ‘paranormal’ means.
Bad philosophy cannot easily be countered by good philosophy – argument and explanation – because it holds itself immune. But it can be countered by progress. People want to understand the world, no matter how loudly they may deny that. And progress makes bad philosophy harder to believe.
- Bad philosophy Philosophy that actively prevents the growth of knowledge.
- Interpretation The explanatory part of a scientific theory, supposedly distinct from its predictive or instrumental part.
- Copenhagen interpretation Niels Bohr’s combination of instrumentalism, anthropocentrism and studied ambiguity, used to avoid understanding quantum theory as being about reality.
- Positivism The bad philosophy that everything not ‘derived from observation’ should be eliminated from science.
- Logical positivism The bad philosophy that statements not verifiable by observation are meaningless.
Before the Enlightenment, bad philosophy was the rule and good philosophy the rare exception. With the Enlightenment came much more good philosophy, but bad philosophy became much worse, with the descent from empiricism (merely false) to positivism, logical positivism, instrumentalism, Wittgenstein, linguistic philosophy, and the ‘postmodernist’ and related movements.
In science, the main impact of bad philosophy has been through the idea of separating a scientific theory into (explanationless) predictions and (arbitrary) interpretation. This has helped to legitimize dehumanizing explanations of human thought and behaviour. In quantum theory, bad philosophy manifested itself mainly as the Copenhagen interpretation and its many variants, and as the ‘shut-up-and-calculate’ interpretation. These appealed to doctrines such as logical positivism to justify systematic equivocation and to immunize themselves from criticism.
Balinski and Young’s Theorem
Every apportionment rule that stays within the quota suffers from the population paradox.
This powerful ‘no-go’ theorem explains the long string of historical failures to solve the apportionment problem. Never mind the various other conditions that may seem essential for an apportionment to be fair: no apportionment rule can meet even the bare-bones requirements of proportionality and the avoidance of the population paradox.
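A small example makes the theorem concrete. The sketch below — my illustration, with hypothetical state populations, not an example from the book — implements Hamilton’s largest-remainder method, the classic rule that always stays within the quota, and shows the related ‘Alabama paradox’ it therefore suffers: enlarging the house from 10 to 11 seats costs state C a seat, even though no population changed.

```python
# Hamilton's largest-remainder apportionment (a sketch; populations are
# hypothetical), demonstrating a paradox that quota-respecting rules suffer.

def hamilton(populations, seats):
    """Give each state the floor of its exact quota, then hand out the
    leftover seats in order of largest fractional remainder."""
    total = sum(populations)
    quotas = [p * seats / total for p in populations]
    alloc = [int(q) for q in quotas]          # floor of each quota
    leftover = seats - sum(alloc)
    # States ranked by fractional remainder, largest first
    by_remainder = sorted(range(len(populations)),
                          key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in by_remainder[:leftover]:
        alloc[i] += 1
    return alloc

pops = [6, 6, 2]              # hypothetical populations of states A, B, C
print(hamilton(pops, 10))     # [4, 4, 2]
print(hamilton(pops, 11))     # [5, 5, 1] -- state C loses a seat outright
```

Rules like Webster’s or Jefferson’s that avoid such paradoxes necessarily violate the quota instead; by Balinski and Young’s theorem, no rule can escape both defects.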
One of the first of the no-go theorems was proved in 1951 by the economist Kenneth Arrow, and it contributed to his winning the Nobel prize for economics in 1972. Arrow’s theorem appears to deny the very existence of social choice – and to strike at the principle of representative government, and apportionment, and democracy itself, and a lot more besides.
This is what Arrow did. He first laid down five elementary axioms that any rule defining the ‘will of the people’ – the preferences of a group – should satisfy, and these axioms seem, at first sight, so reasonable as to be hardly worth stating. One of them is that the rule should define a group’s preferences only in terms of the preferences of that group’s members. Another is that the rule must not simply designate the views of one particular person to be ‘the preferences of the group’ regardless of what the others want. That is called the ‘no-dictator’ axiom. A third is that if the members of the group are unanimous about something – in the sense that they all have identical preferences about it – then the rule must deem the group to have those preferences too. Those three axioms are all expressions, in this situation, of the principle of representative government.
Arrow’s fourth axiom is this. Suppose that, under a given definition of ‘the preferences of the group’, the rule deems the group to have a particular preference – say, for pizza over hamburger. Then it must still deem that to be the group’s preference if some members who previously disagreed with the group (i.e. they preferred hamburger) change their minds and now prefer pizza. This constraint is similar to ruling out a population paradox. A group would be irrational if it changed its ‘mind’ in the opposite direction to its members.
The last axiom is that if the group has some preference, and then some members change their minds about something else, then the rule must continue to assign the group that original preference. For instance, if some members have changed their minds about the relative merits of strawberries and raspberries, but none of their preferences about the relative merits of pizza and hamburger have changed, then the group’s preference between pizza and hamburger must not be deemed to have changed either. This constraint can again be regarded as a matter of rationality: if no members of the group change any of their opinions about a particular comparison, nor can the group.
Arrow proved that the axioms that I have just listed are, despite their reasonable appearance, logically inconsistent with each other. No way of conceiving of ‘the will of the people’ can satisfy all five of them. This strikes at the assumptions behind social-choice theory at an arguably even deeper level than the theorems of Balinski and Young. First, Arrow’s axioms are not about the apparently parochial issue of apportionment, but about any situation in which we want to conceive of a group having preferences. Second, all five of these axioms are intuitively not just desirable to make a system fair, but essential for it to be rational. Yet they are inconsistent.
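The oldest hint of this inconsistency is Condorcet’s paradox, which can be demonstrated in a few lines. This illustration is mine, not the book’s (though it borrows the pizza-and-hamburger example): with three voters ranking three options, pairwise majority voting can produce a cycle, so ‘the group’s preference’ is not even an ordering.

```python
# Condorcet's paradox (my illustration): pairwise majority preferences of a
# group can cycle even though every individual voter's ranking is consistent.
# Each ballot lists options from most preferred to least preferred.

def majority_prefers(ballots, a, b):
    """True if a strict majority of ballots rank option a above option b."""
    wins = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
    return wins > len(ballots) / 2

ballots = [("pizza", "hamburger", "salad"),
           ("hamburger", "salad", "pizza"),
           ("salad", "pizza", "hamburger")]

# Every pairwise contest is won 2-1, yet the results form a cycle:
print(majority_prefers(ballots, "pizza", "hamburger"))   # True
print(majority_prefers(ballots, "hamburger", "salad"))   # True
print(majority_prefers(ballots, "salad", "pizza"))       # True -- a cycle
```

The group ‘prefers’ pizza to hamburger, hamburger to salad, and salad to pizza — each by majority vote. Arrow’s theorem generalizes this: no rule satisfying his five axioms can turn individual rankings into a coherent group ranking.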
Virtually all commentators have responded to these paradoxes and no-go theorems in a mistaken and rather revealing way: they regret them. This illustrates the confusion to which I am referring. They wish that these theorems of pure mathematics were false. If only mathematics permitted it, they complain, we human beings could set up a just society that makes its decisions rationally. But, faced with the impossibility of that, there is nothing left for us to do but to decide which injustices and irrationalities we like best, and to enshrine them in law. As Webster wrote, of the apportionment problem, ‘That which cannot be done perfectly must be done in a manner as near perfection as can be. If exactness cannot, from the nature of things, be attained, then the nearest practicable approach to exactness ought to be made.’
But what sort of ‘perfection’ is a logical contradiction? A logical contradiction is nonsense. The truth is simpler: if your conception of justice conflicts with the demands of logic or rationality then it is unjust. If your conception of rationality conflicts with a mathematical theorem (or, in this case, with many theorems) then your conception of rationality is irrational. To stick stubbornly to logically impossible values not only guarantees failure in the narrow sense that one can never meet them, it also forces one to reject optimism (‘every evil is due to lack of knowledge’), and so deprives one of the means to make progress. Wishing for something that is logically impossible is a sign that there is something better to wish for. Moreover, if my conjecture in Chapter 8 is true, an impossible wish is ultimately uninteresting as well.
What voters are doing in elections is not synthesizing a decision of a superhuman being, ‘Society’. They are choosing which experiments are to be attempted next, and (principally) which are to be abandoned because there is no longer a good explanation for why they are best. The politicians, and their policies, are those experiments.
It does not make sense to include everyone’s favoured policies, or parts of them, in the new decision; what is necessary for progress is to exclude ideas that fail to survive criticism, and to prevent their entrenchment, and to promote the creation of new ideas.
- Representative government A system of government in which the composition or opinions of the legislature reflect those of the people.
- Social-choice theory The study of how the ‘will of society’ can be defined in terms of the wishes of its members, and of what social institutions can cause society to enact its will, thus defined.
- Popper’s criterion Good political institutions are those that make it as easy as possible to detect whether a ruler or policy is a mistake, and to remove rulers or policies without violence when they are.
It is a mistake to conceive of choice and decision-making as a process of selecting from existing options according to a fixed formula. That omits the most important element of decision-making, namely the creation of new options. Good policies are hard to vary, and therefore conflicting policies are discrete and cannot be arbitrarily mixed. Just as rational thinking does not consist of weighing the justifications of rival theories, but of using conjecture and criticism to seek the best explanation, so coalition governments are not a desirable objective of electoral systems. They should be judged by Popper’s criterion of how easy they make it to remove bad rulers and bad policies. That designates the plurality voting system as best in the case of advanced political cultures.
During that biological co-evolution, just as in the history of art, criteria evolved, and means of meeting those criteria co-evolved with them. That is what gave flowers the knowledge of how to attract insects, and insects the knowledge of how to recognize those flowers and the propensity to fly towards them. But what is surprising is that these same flowers also attract humans.
Arguments by analogy are fallacies. Almost any analogy between any two things contains some grain of truth, but one cannot tell what that is until one has an independent explanation for what is analogous to what, and why.
So, for example, although religions prescribe behaviours such as educating one’s children to adopt the religion, the mere intention to transmit a meme to one’s children or anyone else is quite insufficient to make that happen. That is why the overwhelming majority of attempts to start a new religion fail, even if the founder members try hard to propagate it. In such cases, what has happened is that an idea that people have adopted has succeeded in causing them to enact various behaviours including ones intended to cause their children and others to do the same – but the behaviour has failed to cause the same idea to be stored in the minds of those recipients. The existence of long-lived religions is sometimes explained from the premise that ‘children are gullible’, or that they are ‘easily frightened’ by tales of the supernatural. But that is not the explanation. The overwhelming majority of ideas simply do not have what it takes to persuade (or frighten or cajole or otherwise cause) children or anyone else into doing the same to other people. If establishing a faithfully replicating meme were that easy, the whole adult population in our society would be proficient at algebra, thanks to the efforts made to teach it to them when they were children. To be exact, they would all be proficient algebra teachers. To be a meme, an idea has to contain quite sophisticated knowledge of how to cause humans to do at least two independent things: assimilate the meme faithfully, and enact it. That some memes can replicate themselves with great fidelity for many generations is a token of how much knowledge they contain.
The frequently cited metaphor of the history of life on Earth, in which human civilization occupies only the final ‘second’ of the ‘day’ during which life has so far existed, is misleading. In reality, a substantial proportion of all evolution on our planet to date has occurred in human brains. And it has barely begun. The whole of biological evolution was but a preface to the main story of evolution, the evolution of memes.
- Culture A set of shared ideas that cause their holders to behave alike in some ways.
- Rational meme An idea that relies on the recipients’ critical faculties to cause itself to be replicated.
- Anti-rational meme An idea that relies on disabling the recipients’ critical faculties to cause itself to be replicated.
- Static culture/society One whose changes happen on a timescale longer than its members can notice. Such cultures are dominated by anti-rational memes.
- Dynamic culture/society One that is dominated by rational memes.
Cultures consist of memes, and they evolve. In many ways memes are analogous to genes, but there are also profound differences in the way they evolve. The most important differences are that each meme has to include its own replication mechanism, and that a meme exists alternately in two different physical forms: a mental representation and a behaviour. Hence also a meme, unlike a gene, is separately selected, at each replication, for its ability to cause behaviour and for the ability of that behaviour to cause new recipients to adopt the meme. The holders of memes typically do not know why they are enacting them: we enact the rules of grammar, for instance, much more accurately than we are able to state them. There are only two basic strategies of meme replication: to help prospective holders or to disable the holders’ critical faculties. The two types of meme – rational memes and anti-rational memes – inhibit each other’s replication and the ability of the culture as a whole to propagate itself. Western civilization is in an unstable transitional period between stable, static societies consisting of anti-rational memes and a stable dynamic society consisting of rational memes. Contrary to conventional wisdom, primitive societies are unimaginably unpleasant to live in. Either they are static, and survive only by extinguishing their members’ creativity and breaking their spirits, or they quickly lose their knowledge and disintegrate, and violence takes over. Existing accounts of memes fail to recognize the significance of the rational/anti-rational distinction and hence tend to be implicitly anti-meme. This is tantamount to mistaking Western civilization for a static society, and its citizens for the crushed, pessimistic victims of memes that the members of static societies are.
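The two-stage selection described above – a meme must cause its holder to enact it, and the behaviour must then cause a recipient to adopt it – has a simple quantitative consequence: a lineage survives only if the *product* of the two success rates stays close to one. This toy model is my own illustration, not Deutsch's; the function name and probabilities are invented for the sketch.

```python
# Toy model: a meme replicates only if it (1) causes its holder to enact it
# and (2) the enacted behaviour causes a recipient to adopt it. Selection
# therefore acts on the product of the two probabilities at every generation.

def survival_probability(p_enact: float, p_adopt: float, generations: int) -> float:
    """Chance that a single meme lineage survives n successive replications,
    assuming independent success at both stages in each generation."""
    return (p_enact * p_adopt) ** generations

# A meme that succeeds 90% of the time at each stage (0.81 per generation)
# is almost certainly extinct within 50 generations; a high-fidelity meme
# (99% at each stage) still has a substantial chance of persisting.
low = survival_probability(0.90, 0.90, 50)   # ~ 2.7e-05
high = survival_probability(0.99, 0.99, 50)  # ~ 0.37

print(f"low-fidelity: {low:.6f}, high-fidelity: {high:.3f}")
```

The point the numbers make is the one in the text: faithful multi-generation replication is evidence of a great deal of knowledge in the meme, because even modest per-stage failure rates compound ruinously.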
- Imitation Copying behaviour. This is different from human meme replication, which copies the knowledge that is causing the behaviour.
On the face of it, creativity cannot have been useful during the evolution of humans, because knowledge was growing much too slowly for the more creative individuals to have had any selective advantage. This is a puzzle. A second puzzle is: how can complex memes even exist, given that brains have no mechanism to download them from other brains? Complex memes do not mandate specific bodily actions, but rules. We can see the actions, but not the rules, so how do we replicate them? We replicate them by creativity. That solves both problems, for replicating memes unchanged is the function for which creativity evolved. And that is why our species exists.
He was right in one respect: no alternative red phosphor has been discovered to this day. Yet, as I write this chapter, I see before me a superbly coloured computer display that contains not one atom of europium. Its pixels are liquid crystals consisting entirely of common elements, and it does not require a cathode-ray tube. Nor would it matter if it did, for by now enough europium has been mined to supply every human being on earth with a dozen europium-type screens, and the known reserves of the element comprise several times that amount.
Even while my pessimistic colleague was dismissing colour television technology as useless and doomed, optimistic people were discovering new ways of achieving it, and new uses for it – uses that he thought he had ruled out by considering for five minutes how well colour televisions could do the existing job of monochrome ones. But what stands out, for me, is not the failed prophecy and its underlying fallacy, nor relief that the nightmare never happened. It is the contrast between two different conceptions of what people are. In the pessimistic conception, they are wasters: they take precious resources and madly convert them into useless coloured pictures. This is true of static societies: those statues really were what my colleague thought colour televisions are – which is why comparing our society with the ‘old culture’ of Easter Island is exactly wrong. In the optimistic conception – the one that was unforeseeably vindicated by events – people are problem-solvers: creators of the unsustainable solution and hence also of the next problem. In the pessimistic conception, that distinctive ability of people is a disease for which sustainability is the cure. In the optimistic one, sustainability is the disease and people are the cure.
Since then, whole new industries have come into existence to harness great waves of innovation, and in many of those – from medical imaging to video games to desktop publishing to nature documentaries like Attenborough’s – colour television proved to be very useful after all. And, far from there being a permanent class distinction between monochrome- and colour-television users, the monochrome technology is now practically extinct, as are cathode-ray televisions. Colour displays are now so cheap that they are being given away free with magazines as advertising gimmicks. And all those technologies, far from being divisive, are inherently egalitarian, sweeping away many formerly entrenched barriers to people’s access to information, opinion, art and education.
Static societies eventually fail because their characteristic inability to create knowledge rapidly must eventually turn some problem into a catastrophe. Analogies between such societies and the technological civilization of the West today are therefore fallacies. Marx, Engels and Diamond’s ‘ultimate explanation’ of the different histories of different societies is false: history is the history of ideas, not of the mechanical effects of biogeography. Strategies to prevent foreseeable disasters are bound to fail eventually, and cannot even address the unforeseeable. To prepare for those, we need rapid progress in science and technology and as much wealth as possible.
Wheeler once imagined writing out all the equations that might be the ultimate laws of physics on sheets of paper all over the floor. And then:
Stand up, look back on all those equations, some perhaps more hopeful than others, raise one’s finger commandingly, and give the order ‘Fly!’ Not one of those equations will put on wings, take off, or fly. Yet the universe ‘flies’. C. W. Misner, K. S. Thorne and J. A. Wheeler, Gravitation (1973)
Everyone should read these
- Jacob Bronowski, The Ascent of Man (BBC Publications, 1973)
- Jacob Bronowski, Science and Human Values (Harper & Row, 1956)
- Richard Byrne, ‘Imitation as Behaviour Parsing’, Philosophical Transactions of the Royal Society B 358 (2003)
- Richard Dawkins, The Selfish Gene (Oxford University Press, 1976)
- David Deutsch, ‘Comment on Michael Lockwood, “‘Many Minds’ Interpretations of Quantum Mechanics”’, British Journal for the Philosophy of Science 47, 2 (1996)
- David Deutsch, The Fabric of Reality (Allen Lane, 1997)
- Karl Popper, Conjectures and Refutations (Routledge, 1963)
- Karl Popper, The Open Society and Its Enemies (Routledge, 1945)