By Michael Godfrey Bell,
2015


Agent Human, published in September 2010 (450 pages, with 50 illustrations and tables), is the first book to combine a rigorous treatment of the biological and behavioural underpinnings of consciousness with a comprehensive theory of human agency, allowing robust predictions of the future both of human society and of consciousness itself, the human species' greatest achievement.

To read Agent Human on-line, just click here.

To download a copy of Agent Human, just click here.

More about the book

Author's resume

For further information, contact mgbell@agenthuman.com

We, Immortals:
The Future Of The Mind

A fictional account, set in 2130, of what life might be like in cyberspace, based on the ideas of Agent Human. Eight young people challenge the status quo by exploring group consciousness, something forbidden by the authorities, who try to 'wipe' them and destroy their legacy human bodies.

To read We, Immortals: The Future Of The Mind on-line, just click here.

To download a copy of We, Immortals: The Future Of The Mind, just click here.

*****************

The Collective Unconscious, Quantum Mechanics and PSI

Many current researchers into the inner and outer reaches of the human psyche do not attempt to construct an over-arching theory of the mind, and who can blame them, given the confusing mass of unexplained and contradictory data they face? Still, some people try, and a surprising number of them arrive at some type of 'field' theory, in which we, and all of our compeers, exist as islands in a pervasive sea which we but dimly experience.

This book attempts to record some of the more notable recent attempts at analysis of the mysteries that surround us, and reaches some tentative conclusions based on the inadequate evidence that exists so far. They are remarkable enough.

To read it on-line, go to http://www.agenthuman.com/quantum/table_of_contents.html


Consciousness Blog 28/07/2016

Reflections On Groups

In Agent Human, the development of the more advanced stages of human consciousness is attributed to the demands of group living; but many other types of animal live in groups without, as far as we know, arriving at anything resembling human consciousness. Then again, we do not really know, and the past few years have produced countless demonstrations of intelligent behaviour in animals which were seen as surprising. And now come robots, or more generally, constructs having artificial intelligence, and an open question is whether to equip them with the sorts of interactive (we may call them 'social') behaviours which might require something resembling consciousness to be fully effective. If Nature, or evolution, found it necessary to equip us with consciousness in order to optimize our 'groupish' performance, then why should it not be equally beneficial to do the same for robots?

Images immediately come to mind of I, Robot, or of Skynet, and a host of other fictional dystopias, and it can be expected that there will be legislative, moral, religious and simply prudential rules to try to prevent the emergence of conscious groups of robots. Not that anyone has yet succeeded in describing or locating consciousness, so that it might be quite difficult to formulate rules for robots which would not amount to throwing out the baby with the bathwater.

Consider, for a moment, a battlefield group of robotic actors thrown into the defence of a military facility against an armed attack. Let there not be any civilians, to simplify the problem. It still bristles with difficulties, involving recognition of enemy actors, judgment about necessary levels of force, trade-offs between surrendering territory or munitions and maximizing the destruction of enemy forces. At present, drones are the nearest approach to this, and they have human minders, a job by the way that destroys people, and I don't mean their targets. But it is easy to see that this is a temporary situation, and that it will be optimal to build in a considerable amount of cooperation between the robotic actors independently of any external, remote direction, for which there will not be time, in any case.

Continued below

Read previous consciousness blogs:

Talking to yourself is not crazy, 30 November 2009

The future of human evolution, 05 December 2009

Testosterone, 15 December 2009

Self, 03 January 2010

Elders, 11 January 2010

Grumpy, 07 February 2010

Attention, 20 March 2010

Emotions, 01 May 2010

Face, 16 May 2010

Trust, 25 July 2010

Yawning, 19 September 2010

Laughter, 06 December 2010

Sleep, 18 December 2010

Morality, 05 March 2011

Dancing, 13 November 2011

Copying, 20 May 2012

Altruism, 15 July 2012

Brain Clone, 10 March 2013

Decisions, 30 March 2013

Only Connect, 13 April 2013

Deception, 16 June 2013

Quantum, 25 August 2013

Language, 08 September 2013

Unconscious, 19 October 2013

Booze, 30 November 2013

Tether Hypothesis, 12 January 2014

Spite, 16 February 2014

Mating, 08 June 2014

Brainy, 22 August 2014

Fire, 22 October 2014

Animals, 01 January 2015

Gazing, 12 April 2015

Multibrains, 12 July 2015

Disentangling Entanglement, 15 September 2015

Free Will, 09 January 2016

Music, 13 February 2016

Insects, 21 April 2016

Faces, 09 June 2016

 

Download a copy of The Futures Of The Human Race here.


Consciousness Blog Continued

Now let us dive into some recent research along the lines of demonstrating 'groupish' behaviour among animals or robots, beginning, improbably, with slime moulds. Nowadays, they are not considered to be moulds and are often not particularly slimy. There are lots of different types, but broadly one can say that they aggregate multiple individual eukaryotic cells (like ours, having membrane-bound internal structures) into a grouped organism, which can move, eat, excrete and reproduce. Researchers Romain P. Boisseau, David Vogel and Audrey Dussutour of Toulouse University have described learning behaviour in slime moulds whereby, through a process of experimentation, they discovered that it was safe to cross a bridge coated with bitter substances in order to reach an attractive oat-based meal. 'Habituation in non-neural organisms: evidence from slime moulds' was published in Proceedings of the Royal Society B (27 April 2016; DOI: 10.1098/rspb.2016.0446). It was already known that slime moulds can learn their way around a maze, but this was a new level of 'intelligent' behaviour from an organism that certainly has no brain or neural circuits, and of course it would have been impossible for an individual slime mould cell to reach the food.
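The habituation the Toulouse team observed can be caricatured with a toy model (entirely illustrative, and not taken from the paper; the numbers and the decay rule are our own assumptions): an aversion to a stimulus that weakens each time the stimulus occurs without harmful consequences.

```python
# Toy model of habituation: an organism's aversion to a stimulus decays
# with each harmless exposure. Purely illustrative, not the paper's model.

def habituate(initial_aversion, decay, exposures):
    """Return the aversion level recorded before each successive exposure."""
    aversion = initial_aversion
    history = []
    for _ in range(exposures):
        history.append(aversion)
        aversion *= (1.0 - decay)  # each safe crossing weakens the aversion
    return history

# A slime mould that at first avoids a bitter-coated bridge crosses it
# more readily with each safe traversal.
trace = habituate(initial_aversion=1.0, decay=0.3, exposures=5)
```

After five safe crossings the modelled aversion has fallen to roughly a quarter of its starting level, which is the qualitative shape of habituation: a response decrement driven by repeated experience rather than by any neural circuitry.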

Dimos Dimarogonas, an associate professor at KTH Royal Institute of Technology in Sweden, has reported work aimed at enabling off-the-shelf robots to cooperate with one another on complex jobs, by using body language. "Robots can stop what they're doing and go over to assist another robot which has asked for help," Dimarogonas says. "This will mean flexible and dynamic robots that act much more like humans – robots capable of constantly facing new choices and that are competent enough to make decisions." The project was completed in May 2016, with project partners at Aalto University in Finland, the National Technical University of Athens in Greece, and the École Centrale Paris in France. In a video, a robot points out an object to another robot, conveying the message that it needs the robot to lift the item. Says Dimarogonas: "The visual feedback that the robots receive is translated into the same symbol for the same object. With updated vision technology they can understand that one object is the same from different angles. That is translated to the same symbol one layer up to the decision-making – that it is a thing of interest that we need to transport or not. In other words, they have perceptual agreement." In another demonstration two robots carry an object together. One leads the other, which senses what the lead robot wants by the force it exerts on the object. "It's just like if you and I were carrying a table and I knew where it had to go," says Dimarogonas. "You would sense which direction I wanted to go by the way I turn and push, or pull."
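The table-carrying demonstration can be sketched in a few lines (a simplified illustration of the general idea, not the KTH team's actual controller; the function names and gain are our own): the follower has no plan of its own, and simply moves a small step in whatever direction the leader's push or pull on the shared object indicates.

```python
# Illustrative leader-follower carrying: the follower infers the leader's
# intent purely from the force sensed through the shared object, a simple
# compliance rule. Names and numbers are assumptions for illustration.

def follower_step(position, sensed_force, gain=0.5):
    """Move the follower a small step along the force the leader exerts."""
    return tuple(p + gain * f for p, f in zip(position, sensed_force))

pos = (0.0, 0.0)
leader_pushes = [(1.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # east, east, north
for force in leader_pushes:
    pos = follower_step(pos, force)
# The follower drifts east, then north, mirroring the leader's intent
# without ever being told the destination.
```

The design point is that no symbols or messages pass between the robots at this level: the shared physical object is itself the communication channel, exactly as when two people carry a table.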

Panagiotis Artemiadis, director of the Human-Oriented Robotics and Control Lab and an assistant professor of mechanical and aerospace engineering in the School for Engineering of Matter, Transport and Energy in the Ira A. Fulton Schools of Engineering, reported in July 2016 that a human operator can control multiple drones through a wireless interface, by thinking of various tasks. The controller wears a skull cap outfitted with 128 electrodes wired to a computer, which records electrical brain activity. Up to four small robots, some of which fly, can be controlled with brain interfaces. If the controller moves a hand or thinks of something, certain areas light up. "I can see that activity from outside," says Artemiadis. "Our goal is to decode that activity to control variables for the robots." For instance, if the user thinks about spreading out the drones – "We know what part of the brain controls that thought," Artemiadis said. "You can't do something collectively" with a joystick, he says. "If you want to swarm around an area and guard that area, you cannot do that." Artemiadis says he had the idea to go to a lot of machines a few years ago. "If you lose half of them, it doesn't really matter," Artemiadis said, adding that he was surprised that "the brain cares about swarms and collective behaviors". The next step in Artemiadis's research is to have multiple people controlling multiple robots. He sees drone swarms performing complex operations, such as search-and-rescue missions. Video at https://vimeo.com/173548439.
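The final stage of such a pipeline, mapping a decoded brain signal to a collective command, can be sketched very crudely (our own illustration, not the lab's system: a real interface filters and classifies 128-channel EEG, whereas here a single decoded scalar and the thresholds stand in for all of that).

```python
# Highly simplified sketch of turning decoded brain activity into a swarm
# command. The scalar input, thresholds and command names are assumptions;
# a real BCI would classify multi-channel EEG features instead.

def swarm_command(decoded_activity, spread_threshold=0.6, gather_threshold=0.4):
    """Map a decoded activation level to a collective drone command."""
    if decoded_activity > spread_threshold:
        return "spread"      # user imagines the drones dispersing
    if decoded_activity < gather_threshold:
        return "gather"      # user imagines them converging
    return "hold"            # ambiguous signal: keep formation

commands = [swarm_command(a) for a in (0.8, 0.5, 0.2)]
```

Note what the command is: not a position for any one drone, but a property of the group as a whole, which is precisely what Artemiadis says a joystick cannot express.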

Well, a short study of animal groups is all that is needed to remove the element of surprise. There is an intricate mechanism in the brain to deal with the behaviour of clouds of conspecifics, and not of course only in humans. So it is only a matter of time before groups of artificial intelligences are equipped with mechanisms to allow them to function effectively without human control in time-limited, physically challenging or crisis situations, displaying what might look to an outside observer like conscious behaviour. Read more in Chapter Eleven of Agent Human, Migrating Consciousness Into External Environments – Multiple Consciousnesses In Computers Or Cyberspace.

 

 

 

The material contained on this site is the intellectual property of M G Bell and may not be reproduced, transmitted or copied by any means including photocopying or electronic transmission, without his express written permission.