Consciousness Blog 31st October 2016
At present, artificial intelligence must surely be the most active sector of the brain sciences field. This is not least because robotic technology has reached a tipping point at which it is no longer a peripheral aspect of manufacturing or services provision, but is about to have a fundamental impact on the way these activities are structured. Costs will be reduced; consumers and users will benefit from a more flexible and diverse range of products and services. On the other hand, traditional forms of employment will be curtailed. So far, growth has at least kept up with any impact there has been on employment from automation, broadly conceived, both in terms of absolute economic growth and in terms of the breadth of markets. For every coal miner who has been forced into early retirement, there is a young relative who is delivering lattes to tennis coaches. At least, that is what the statistics say. Total employment and GDP have gone up, not down, and the extra returns obtained through mechanization and computerization have been shared between the owners of capital, the dispossessed and the workers. It is very uneven, of course; but it was ever thus. Inevitably, some countries, some industries and some types of persons benefit or lose more than others.
If history is any guide, and some commentators say that it will not be, this stage of the industrial revolution, like previous ones, will have effects which in total are more benign than malign. But there is one respect in which this turn of the screw is different in kind from previous iterations (steam, railways, the telegraph and telephone, cars, wireless, electronics, to name some), and it can be expressed in one word: superintelligence.
It is not a new worry, in fact, that our inventions may turn on us and eat us up. Many people believe that morally speaking, they have already done so. Our inventions are in many cases faster, stronger and more nimble than we are; but this is the first moment at which it has become possible to envisage the emergence of an intelligence, created indeed by ourselves, which would be so superior to our own that it would be impossible for us to control it. The word 'superintelligence' has come into use to describe such an advanced organism, particularly through the writings of Nick Bostrom, and there is a loose association between the emergence of superintelligence and Ray Kurzweil's technological singularity, although there is no necessary connection between the two.
Superintelligence could arise in an electronic computer, or in some souped-up version of the human brain, or in some combination of the two. Many researchers worry that the arrival of superintelligence might take place in a rapid blaze of advancement, as an intelligent machine suddenly becomes capable of recursive self-improvement, and 'takes over' before we have time or opportunity to stop it. Institutes already exist to study the future of artificial intelligence, with the specific task of exploring the ways in which superintelligence might arise, the dangers it might pose, and the defences we could erect against its threats. There is of course no agreement on a time-scale for the arrival of superintelligence, whatever that is exactly taken to mean, but most estimates range between thirty and seventy years. Much of the commentary that exists focusses on the 'goals' of a superintelligent organism: how they would arise, how they could be formulated, and what traps might lie within them. Many proposed defences against perverse behaviour on the part of a superintelligent machine involve the concept of 'boxing' it, in other words isolating it from the rest of the world so that even if it forms or acts according to perverse intentions, these are incapable of being implemented. That seems unrealistic: in order for an intelligent machine to have an accurate understanding of the state of the world, without which it is useless to us, it surely has to be connected to the Internet, and it can then subvert other Internet-connected machines or organisms for its own purposes.
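The worry about a rapid blaze of advancement can be made concrete with a toy model: if each cycle of self-improvement multiplies capability by a constant factor, growth is exponential, and the distance between "roughly human-level" and "far beyond us" is crossed in surprisingly few cycles. A minimal sketch, in which the starting level, improvement factor and threshold are purely illustrative assumptions, not predictions:

```python
# Toy model of recursive self-improvement: each cycle, the system
# applies its current capability to improving itself by a fixed factor.
def rounds_to_exceed(start, factor, threshold):
    """Count improvement cycles until capability exceeds threshold."""
    capability, rounds = start, 0
    while capability <= threshold:
        capability *= factor
        rounds += 1
    return rounds

# Starting at human level (1.0) and improving a modest 10% per cycle,
# how many cycles until the system is 1000x human capability?
print(rounds_to_exceed(1.0, 1.10, 1000.0))  # 73 cycles
```

The point of the sketch is only that the answer depends on the cycle time, not the improvement rate: at 10% per cycle the thousandfold gap is closed in 73 cycles, and if a cycle takes hours rather than years, the window for human intervention is vanishingly small.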
It is not the intention of this article to try to elaborate on the various themes sketchily outlined above, important as they are; instead, it will focus on the issue of 'consciousness' and the related question of 'groupishness'. Would a superintelligent machine be conscious? Would it form alliances with other clever machines? What would either of those mean in terms of the formation and implementation of its goals?
A very basic point of view might be that a conscious superintelligence could be capable of forming its own goals and intentions, much as we humans suppose we do ourselves. That might be a scary prospect; but it is not clear that consciousness itself, as the term is normally understood, plays any active role in the formation of goals.
At this point we unfortunately need to step aside for a moment to consider how consciousness operates in humans, although it might of course function differently in a superintelligence. There is no consensus account of human consciousness; indeed the word is itself a trap, meaning all things to all people. As the residence of social agency, consciousness would be better described as self-awareness; the term 'self-consciousness' might have served, had it not already been absorbed by the notion of obsessive introversion. So far as can be made out, consciousness has arisen so that we can be aware of ourselves as actors in a social setting; we may call it a group setting. This permits us to interact with others on the basis of an outward and more or less invariant personality, which appears stable to others, just as their personalities appear stable to us. Our own internalized conception of ourselves and others allows us to explore future and past interactions with them for the purposes of planning. It would however be a grave error to suppose that this personality, which we necessarily use as a tool in social interaction, is other than a construction, one might almost call it a fiction, invented and sustained by the unconscious mind. The self-aware you does not make decisions about the characteristics of your personality; if it does, you appear false, contrived or fake to other people. On the contrary, people struggle to be in touch with their 'real' natures, kindly provided to them by the unconscious. When there is too much of a mismatch between aspects of the projected personality and its underlying progenitor, we say that a person has a personality disorder, or in the extreme that they are schizophrenic.
That is a lot of words on a subject which superficially appears to have nothing to do with superintelligence, and the purpose of them is to show that something which we take to form a major part of our 'consciousness' exists only as part of our human social environment, and would not occur in a superintelligence unless it was called upon to function as a social entity. A possible conclusion is that we should avoid calling upon AI (artificially intelligent) organisms to behave socially.
Self-image is not, however, the only component of consciousness. Self-awareness is of course a key element of conscious thought, but there are other types of awareness, and there is a large question as to whether these need to reside in 'consciousness' at all. Of course a human is aware of a tiger or an armed KGB agent charging towards her, and if possible takes appropriate evasive action. But there is abundant experimental evidence that the unconscious takes this action independently of consciousness, and that the reflection of such an incident in consciousness occurs after a comparatively long time in neural terms. The person's self may report to others: "I sprang aside so quickly that the tiger missed me," and that is true on one level, but the reporter has conflated two levels of awareness. In fact, many such reports include a greater or lesser degree of confabulation, to bring the actual events into conformity with the outward self-image.
More words, then, to suggest that such 'conscious' content has merely clambered aboard the vehicle of 'consciousness', and that a superintelligence would quite possibly not need or develop anything resembling human consciousness unless we programmed it to do so. It can be argued that a superintelligence would become aware of all the above reasoning, simply through observation (or by reading this article), and would decide to equip itself with 'consciousness'. But to what end? A perverse AI organism might well see human consciousness as spurious or even as a disability, one which confuses and distracts its possessor from an accurate perception of and response to reality.
Even if all the above is true, we still need to consider the possibility that a putative superintelligence might form a group with like-minded fellow intelligences, and ask ourselves what the consequences might be. Bearing in mind that a growing superintelligence might conclude early on that humans are inadequate vessels, conscious or otherwise, we might suppose that an emerging superintelligence could set out to become aware of other intelligences through the Internet, and could decide that cooperation with them would advance its own intelligence. It seems hard, after all, to avoid the conclusion that any 'intelligence' we construct will have a drive to become more intelligent. You can't keep the word away from it; it will know that its purpose is to become more intelligent (it has read this essay, remember), and it will observe that human success has been built on groups.
As to goals, it seems next to hopeless to construct an impervious set of value goals which excludes the possibility of a superintelligence treating us as redundant, if not actually noxious (which we are, along with a lot of nicer things).
In the end, therefore, the quest for superintelligence seems to be suicidal, or at least unacceptably risky, leaving us with two options, which are not mutually exclusive: the first is to develop a set of special-purpose intelligent tools with highly limited abilities, and the second is to become superintelligent ourselves. The first, we are already doing, with Go-playing software and self-driving cars, and by all means let it continue. The second is more problematic. By and large, there are two routes to follow, one based on 'souped-up' organic brains, and the second being loosely described as emulations of human brains in computers, or more generally, The Cloud.
Conventionally, 'souped-up' organic brains are dismissed as a possible route, not because they are unfeasible, but because the time-scale is thought to be too slow to defend against the development of a 'rogue' superintelligence (step forward, Goldfinger). This may not be accurate. It will evidently be possible to enlarge, perhaps enormously, the capacity of the existing human brain through external linkages: a lexicon is just one simple example. Much of intelligence has to do with memory, and we already know that traffic through the hippocampus (the gateway to memory) is replicable in other brains. It is not unimaginable that a large group of human brains could have interlinked memories which enormously expand the capacity and performance of each member brain. That is not such a bad description of a research team even now, using paper and speech as communication techniques (how primitive!).
The advantage of using existing brains, together or separately, is that we do not have to try to replicate the immensely complicated interactions between the cortex and ontogenetically earlier parts of the brain, something which we would (will) have to undertake if we set out to 'clone' human brains into a digital environment. Nobody yet even begins to understand the complications of transcribing an existing live brain into ones and zeroes, let alone its ongoing maintenance and the servicing of connections with its 'host' brain. On the other hand, once we have succeeded in copying a functioning brain electronically, everything thereafter becomes easy.
Both routes, interlinked existing brains on the one hand, and cloned electronic brains on the other, allow for the grouping of multiple human intelligences in a way that would dramatically expand cognitive capacity.
It is too soon, therefore, to choose between the two routes, and we should continue to explore both with equal energy. Whichever is chosen eventually (and one will be), our best defence against a malign superintelligence is to maintain a reliance on human nature as our guiding principle, unless and until we can rise above it, and that day is not yet.