By Michael Godfrey Bell, 2015
To read Agent Human on-line, just click here.
To download a copy of Agent Human, just click here.
For further information, contact email@example.com
A fictional account, set in 2130, of what life might be like in cyberspace, based on the ideas of Agent Human. Eight young people challenge the status quo by exploring group consciousness, something forbidden by the authorities, who try to 'wipe' them and destroy their legacy human bodies.
To read We, Immortals: The Future Of The Mind on-line, just click here.
To download a copy of We, Immortals: The Future Of The Mind, just click here.
The Collective Unconscious, Quantum Mechanics and PSI
Many current researchers into the inner and outer reaches of the human psyche do not attempt to construct an over-arching theory of the mind, and who can blame them, given the confusing mass of unexplained and contradictory data they face? Still, some people try, and a surprising number of them arrive at some type of 'field' theory, in which we, and all of our compeers, exist as islands in a pervasive sea which we but dimly experience.
This book attempts to record some of the more notable recent attempts at analysis of the mysteries that surround us, and reaches some tentative conclusions based on the inadequate evidence that exists so far. They are remarkable enough.
To read it on-line, go to http://www.agenthuman.com/quantum/
Reflections On Groups
In Agent Human, the development of the more advanced stages of human consciousness is attributed to the demands of group living; but many other types of animal live in groups without, as far as we know, arriving at anything resembling human consciousness. Well, we do not know, and the past few years have produced countless demonstrations of intelligent behaviour in animals which were seen as surprising. And now come robots, or more generally, constructs having artificial intelligence, and it will be a good question whether to equip them with the sorts of interactive (we may call them 'social') behaviours which might require something resembling consciousness to be fully effective. If Nature, or evolution, found it necessary to equip us with consciousness in order to optimize our 'groupish' performance, then why should it not be equally beneficial to do the same for robots?
Images immediately come to mind of I, Robot, or of Skynet, and a host of other fictional dystopias, and it can be expected that there will be legislative, moral, religious and simply prudential rules to try to prevent the emergence of conscious groups of robots. Not that anyone has yet succeeded in describing or locating consciousness, so that it might be quite difficult to formulate rules for robots which would not amount to throwing out the baby with the bathwater.
Consider, for a moment, a battlefield group of robotic actors thrown into the defence of a military facility against an armed attack. Let there not be any civilians, to simplify the problem. It still bristles with difficulties, involving recognition of enemy actors, judgment about necessary levels of force, and trade-offs between surrendering territory or munitions and maximizing the destruction of enemy forces. At present, drones are the nearest approach to this, and they have human minders, a job, by the way, that destroys people, and I don't mean their targets. But it is easy to see that this is a temporary situation, and that it will be optimal to build in a considerable amount of cooperation between the robotic actors independently of any external, remote direction, for which there will not be time, in any case.
Read previous consciousness blogs:
Talking to yourself is not crazy, 30 November 2009
The future of human evolution, 05 December 2009
Testosterone, 15 December 2009
Self, 03 January 2010
Attention, 20 March 2010
Emotions, 01 May 2010
Face, 16 May 2010
Trust, 25 July 2010
Dancing, 13 November 2011
Copying, 20 May 2012
Altruism, 15 July 2012
Brain Clone, 10 March 2013
Decisions, 30 March 2013
Only Connect, 13 April 2013
Language, 08 September 2013
Unconscious, 19 October 2013
Booze, 30 November 2013
Tether Hypothesis, 12 January 2014
Spite, 16 February 2014
Mating, 08 June 2014
Brainy, 22 August 2014
Fire, 22 October 2014
Multibrains, 12 July 2015
Disentangling Entanglement, 15 September 2015
Free Will, 09 January 2016
Faces, 09 June 2016
Download a copy of The Futures Of The Human Race here.
Now let us dive into some recent research along the lines of demonstrating 'groupish' behaviour among animals or robots, beginning, improbably, with slime moulds. Nowadays, they are not considered to be moulds and are often not particularly slimy. There are lots of different types, but broadly one can say that they aggregate multiple individual eukaryotic cells (like ours, having membrane-bound internal structures) into a grouped organism, which can move, eat, excrete and reproduce. Researchers (Romain P. Boisseau, David Vogel and Audrey Dussutour of Toulouse University) have described learning behaviour in slime moulds whereby, through a process of experimentation, the organisms found out that it was safe to cross a bridge coated with bitter substances in order to reach an attractive oat-based meal. 'Habituation in non-neural organisms: evidence from slime moulds' was published in The Proceedings of the Royal Society (27 April 2016, DOI: 10.1098/rspb.2016.0446). It was already known that slime moulds can learn their way around a maze, but this was a new level of 'intelligent' behaviour from an organism that certainly has no brain or neural circuits, and of course it would have been impossible for an individual slime mould cell to reach the food.
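The learning the researchers describe is habituation: a response to a harmless stimulus weakens with repeated safe exposure. A minimal sketch of that idea, in Python, with an entirely hypothetical decay model (this is not the authors' code or data):

```python
# Hypothetical habituation model: aversion to the bitter bridge decays
# by a constant factor after each crossing that causes no harm.

def habituate(initial_aversion=1.0, decay=0.7, trials=10):
    """Return the aversion level recorded before each safe exposure."""
    levels = []
    aversion = initial_aversion
    for _ in range(trials):
        levels.append(aversion)
        aversion *= decay  # no harm experienced, so the response weakens
    return levels

levels = habituate()
# Aversion falls monotonically: the organism 'learns' the bridge is safe
# and reaches the oats faster on later trials.
```

The decay constant and trial count here are invented for illustration; the point is only that habituation needs nothing more than a response variable that diminishes with uneventful repetition, which is why it is achievable without neurons.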
Dimos Dimarogonas, an associate professor at KTH Royal Institute of Technology in Sweden, has reported work aimed at enabling off-the-shelf robots to cooperate with one another on complex jobs, by using body language. "Robots can stop what they're doing and go over to assist another robot which has asked for help," Dimarogonas says. "This will mean flexible and dynamic robots that act much more like humans – robots capable of constantly facing new choices and that are competent enough to make decisions." The project was completed in May 2016, with project partners at Aalto University in Finland, the National Technical University of Athens in Greece, and the École Centrale Paris in France. In a video, a robot points out an object to another robot, conveying the message that it needs the robot to lift the item. Says Dimarogonas: "The visual feedback that the robots receive is translated into the same symbol for the same object. With updated vision technology they can understand that one object is the same from different angles. That is translated to the same symbol one layer up to the decision-making – that it is a thing of interest that we need to transport or not. In other words, they have perceptual agreement." In another demonstration two robots carry an object together. One leads the other, which senses what the lead robot wants by the force it exerts on the object. "It's just like if you and I were carrying a table and I knew where it had to go," says Dimarogonas. "You would sense which direction I wanted to go by the way I turn and push, or pull."
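The table-carrying demonstration is, in control terms, a compliant follower: the follower converts the force it feels through the shared object into its own motion. A rough sketch of such a rule, with invented gains and function names (the KTH group's actual controller is not published in this post):

```python
# Hypothetical follower rule for a shared-carry task: move in the
# direction of the force the leader exerts on the object, scaled by a
# compliance gain, ignoring small forces that are probably sensor noise.

def follower_velocity(force_xy, gain=0.5, deadband=0.1):
    """Map sensed force (N) on the carried object to follower velocity (m/s)."""
    fx, fy = force_xy
    magnitude = (fx**2 + fy**2) ** 0.5
    if magnitude < deadband:      # nobody is pushing; hold position
        return (0.0, 0.0)
    return (gain * fx, gain * fy)

print(follower_velocity((2.0, 0.0)))  # leader pushes forward: (1.0, 0.0)
print(follower_velocity((0.05, 0.0)))  # below the deadband: (0.0, 0.0)
```

The attraction of this scheme is exactly the one Dimarogonas describes: no explicit communication channel is needed, because the object itself carries the signal.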
Panagiotis Artemiadis, director of the Human-Oriented Robotics and Control Lab and an assistant professor of mechanical and aerospace engineering in the School for Engineering of Matter, Transport and Energy in the Ira A. Fulton Schools of Engineering, reported in July 2016 that a human operator can control multiple drones through a wireless interface, by thinking of various tasks. The controller wears a skull cap outfitted with 128 electrodes wired to a computer, which records electrical brain activity. Up to four small robots, some of which fly, can be controlled with brain interfaces. If the controller moves a hand or thinks of something, certain areas light up. "I can see that activity from outside," says Artemiadis. "Our goal is to decode that activity to control variables for the robots." For instance, if the user thinks about spreading out the drones – "We know what part of the brain controls that thought," Artemiadis said. "You can't do something collectively" with a joystick, he says. "If you want to swarm around an area and guard that area, you cannot do that." Artemiadis says he had the idea to go to a lot of machines a few years ago. "If you lose half of them, it doesn't really matter," Artemiadis said, adding that he was surprised that "the brain cares about swarms and collective behaviors". The next step in Artemiadis's research is to have multiple people controlling multiple robots. He sees drone swarms performing complex operations, such as search-and-rescue missions. Video at https://vimeo.com/173548439.
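What makes this "collective" rather than joystick control is that the decoded thought sets a swarm-level parameter, not any one drone's position. A toy sketch of that final mapping step, with invented command names and gains (the EEG decoding itself is far beyond a few lines and is not shown):

```python
# Hypothetical mapping from a decoded brain-state label to a swarm-level
# action: scale every drone's offset from the formation centroid, so one
# command such as "spread" moves the whole group at once.

COMMANDS = {
    "spread": 2.0,    # double the formation spacing
    "cluster": 0.5,   # halve the formation spacing
    "hold": 1.0,      # keep the current formation
}

def apply_command(positions, centroid, label):
    """Return new (x, y) drone positions after applying a swarm command."""
    k = COMMANDS[label]
    cx, cy = centroid
    return [(cx + k * (x - cx), cy + k * (y - cy)) for x, y in positions]

pts = apply_command([(1, 0), (-1, 0)], (0, 0), "spread")
# Both drones move outward together: [(2.0, 0.0), (-2.0, 0.0)]
```

This is the sense in which, as Artemiadis puts it, a joystick cannot do the job: a joystick emits one trajectory, whereas a decoded intention like "spread out" is naturally a property of the group.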
Well, a short study of animal groups is all that is needed to remove the element of surprise. There is an intricate mechanism in the brain to deal with the behaviour of clouds of conspecifics, and not of course only in humans. So it is only a matter of time before groups of artificial intelligences are equipped with mechanisms to allow them to function effectively without human control in time-limited, physically challenging or crisis situations, displaying what might look to an outside observer like conscious behaviour. Read more in Chapter Eleven of Agent Human, Migrating Consciousness Into External Environments – Multiple Consciousnesses In Computers Or Cyberspace.