Consciousness Blog 28th July 2016


Reflections On Groups

In Agent Human, the development of the more advanced stages of human consciousness is attributed to the demands of group living; but many other types of animal live in groups without, as far as we know, arriving at anything resembling human consciousness. Well, we do not know, and the past few years have produced countless demonstrations of intelligent behaviour in animals which were seen as surprising. And now come robots, or more generally constructs having artificial intelligence, and it will be a good question whether to equip them with the sorts of interactive (we may call them 'social') behaviours which might require something resembling consciousness to be fully effective. If Nature, or evolution, found it necessary to equip us with consciousness in order to optimize our 'groupish' performance, then why should it not be equally beneficial to do the same for robots?

Images immediately come to mind of I, Robot, or of Skynet, and a host of other fictional dystopias, and it can be expected that there will be legislative, moral, religious and simply prudential rules intended to prevent the emergence of conscious groups of robots. Not that anyone has yet succeeded in describing or locating consciousness, so it might prove quite difficult to formulate rules for robots which would not amount to throwing out the baby with the bathwater.

Consider, for a moment, a battlefield group of robotic actors thrown into the defence of a military facility against an armed attack. Let there be no civilians, to simplify the problem. It still bristles with difficulties: recognition of enemy actors, judgment about necessary levels of force, and trade-offs between surrendering territory or munitions and maximizing the destruction of enemy forces. At present, drones are the nearest approach to this, and they have human minders, a job, by the way, that destroys people, and I don't mean their targets. But it is easy to see that this is a temporary situation, and that it will be optimal to build in a considerable amount of cooperation between the robotic actors, independent of any external, remote direction, for which there will not be time in any case.
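
Purely to fix ideas, here is a toy sketch in Python of the kind of trade-off calculation such a defending group might run. The options, outcome estimates and utility weights are all invented for illustration; nothing here reflects any real system.

    # Toy sketch of the defensive trade-off described above: each option is
    # scored with invented weights balancing enemy attrition against the
    # territory and munitions surrendered. All numbers are illustrative.

    OPTIONS = {
        # option: (expected enemy losses, territory ceded, munitions spent)
        "hold_position":  (0.3, 0.0, 0.4),
        "fall_back":      (0.1, 0.6, 0.1),
        "counter_attack": (0.8, 0.1, 0.9),
    }

    W_ATTRITION, W_TERRITORY, W_MUNITIONS = 1.0, 0.7, 0.5  # invented weights

    def utility(losses, ceded, spent):
        return W_ATTRITION * losses - W_TERRITORY * ceded - W_MUNITIONS * spent

    best = max(OPTIONS, key=lambda name: utility(*OPTIONS[name]))
    print(best)  # the option with the highest net utility

A real battlefield group would of course have to estimate these quantities continuously and under uncertainty; the point is only that the trade-off described here is, in principle, computable.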

Now let us dive into some recent research along the lines of demonstrating 'groupish' behaviour among animals or robots, beginning, improbably, with slime moulds. Nowadays, they are not considered to be moulds and are often not particularly slimy. There are lots of different types, but broadly one can say that they aggregate multiple individual eukaryotic cells (like ours, having membrane-bound internal structures) into a grouped organism, which can move, eat, excrete and reproduce. Researchers (Romain P. Boisseau, David Vogel and Audrey Dussutour of Toulouse University) have described learning behaviour in slime moulds which, through a process of experimentation, found out that it was safe to cross a bridge coated with bitter substances in order to reach an attractive oat-based meal. 'Habituation in non-neural organisms: evidence from slime moulds' was published in Proceedings of the Royal Society B (27 April 2016, DOI: 10.1098/rspb.2016.0446). It was already known that slime moulds can learn their way around a maze, but this was a new level of 'intelligent' behaviour from an organism that certainly has no brain or neural circuits, and of course it would have been impossible for an individual slime mould cell to reach the food.
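
To make the finding concrete, here is a minimal Python sketch of habituation, the mechanism the paper names: a repeated, harmless stimulus provokes a weaker avoidance response at each exposure, and the response recovers after a rest. The decay and recovery constants are invented, not taken from the paper.

    class HabituatingForager:
        """Toy model of habituation: aversion to a harmless stimulus fades."""

        def __init__(self, decay=0.6, recovery=0.1):
            self.aversion = 1.0      # initial avoidance of the bitter bridge
            self.decay = decay       # fraction of aversion kept per exposure
            self.recovery = recovery # aversion regained per rest step

        def encounter_bridge(self):
            """Cross (or refuse) the bridge; habituate since nothing bad happens."""
            crosses = self.aversion < 0.5
            self.aversion *= self.decay      # the stimulus proved harmless
            return crosses

        def rest(self):
            """Spontaneous recovery: aversion creeps back without exposure."""
            self.aversion = min(1.0, self.aversion + self.recovery)

    forager = HabituatingForager()
    for day in range(1, 7):
        crossed = forager.encounter_bridge()
        print(f"day {day}: crosses={crossed}, aversion={forager.aversion:.2f}")

Run it and the forager refuses the bridge at first, then crosses from the third encounter onward, which is the shape of the behaviour the researchers report.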

Dimos Dimarogonas, an associate professor at KTH Royal Institute of Technology in Sweden, has reported work aimed at enabling off-the-shelf robots to cooperate with one another on complex jobs by using body language. "Robots can stop what they're doing and go over to assist another robot which has asked for help," Dimarogonas says. "This will mean flexible and dynamic robots that act much more like humans – robots capable of constantly facing new choices and that are competent enough to make decisions." The project was completed in May 2016, with project partners at Aalto University in Finland, the National Technical University of Athens in Greece, and the École Centrale Paris in France. In a video, a robot points out an object to another robot, conveying the message that it needs that robot to lift the item. Says Dimarogonas: "The visual feedback that the robots receive is translated into the same symbol for the same object. With updated vision technology they can understand that one object is the same from different angles. That is translated to the same symbol one layer up to the decision-making – that it is a thing of interest that we need to transport or not. In other words, they have perceptual agreement." In another demonstration two robots carry an object together. One leads the other, which senses what the lead robot wants by the force it exerts on the object. "It's just like if you and I were carrying a table and I knew where it had to go," says Dimarogonas. "You would sense which direction I wanted to go by the way I turn and push, or pull."
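
The leader-follower demonstration can be pictured as a simple admittance law: the follower has no plan of its own and merely converts the force it senses through the shared object into its own motion. The Python sketch below is my illustration of that idea, not KTH's code; the gain, timestep and force values are invented.

    GAIN = 0.8   # metres/second per newton: how compliantly the follower yields
    DT = 0.05    # control period in seconds

    def follower_step(pos, sensed_force):
        """Admittance control: velocity proportional to the sensed force."""
        vx, vy = GAIN * sensed_force[0], GAIN * sensed_force[1]
        return (pos[0] + vx * DT, pos[1] + vy * DT)

    position = (0.0, 0.0)        # follower's (x, y) position in metres
    leader_pull = (2.0, 0.5)     # newtons, as if the leader were turning

    for _ in range(100):         # five seconds of carrying the object
        position = follower_step(position, leader_pull)

    print(position)  # the follower has drifted the way the leader pulled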

Panagiotis Artemiadis, director of the Human-Oriented Robotics and Control Lab at Arizona State University and an assistant professor of mechanical and aerospace engineering in the School for Engineering of Matter, Transport and Energy in the Ira A. Fulton Schools of Engineering, reported in July 2016 that a human operator can control multiple drones through a wireless interface by thinking of various tasks. The controller wears a skull cap outfitted with 128 electrodes wired to a computer, which records electrical brain activity. Up to four small robots, some of which fly, can be controlled with brain interfaces. If the controller moves a hand or thinks of something, certain areas light up. "I can see that activity from outside," says Artemiadis. "Our goal is to decode that activity to control variables for the robots." For instance, if the user thinks about spreading out the drones, "we know what part of the brain controls that thought," Artemiadis said. "You can't do something collectively" with a joystick, he says. "If you want to swarm around an area and guard that area, you cannot do that." Artemiadis says he had the idea of moving to a lot of machines a few years ago. "If you lose half of them, it doesn't really matter," Artemiadis said, adding that he was surprised that "the brain cares about swarms and collective behaviors". The next step in Artemiadis's research is to have multiple people controlling multiple robots. He sees drone swarms performing complex operations, such as search-and-rescue missions. Video at https://vimeo.com/173548439.
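
As a rough picture of what decoding brain activity "to control variables for the robots" might mean, the sketch below maps a single imaginary decoded scalar, the urge to spread out, onto the radius of a circular drone formation. The placeholder decoder, the electrode handling and the formation geometry are all my assumptions, not the lab's method.

    import math

    def decode_spread_intent(eeg_sample):
        """Placeholder decoder: average the 128 electrode readings, clip to 0..1."""
        level = sum(eeg_sample) / len(eeg_sample)
        return max(0.0, min(1.0, level))

    def formation_targets(n_drones, centre, spread):
        """Place n drones evenly on a circle whose radius tracks the intent."""
        radius = 1.0 + 9.0 * spread          # from a 1 m huddle to 10 m dispersal
        cx, cy = centre
        return [(cx + radius * math.cos(2 * math.pi * k / n_drones),
                 cy + radius * math.sin(2 * math.pi * k / n_drones))
                for k in range(n_drones)]

    sample = [0.7] * 128                     # a fake reading from the skull cap
    targets = formation_targets(4, (0.0, 0.0), decode_spread_intent(sample))
    print(targets)   # four waypoints on a 7.3 m circle: "spread out"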

Well, a short study of animal groups is all that is needed to remove the element of surprise. There is an intricate mechanism in the brain to deal with the behaviour of clouds of conspecifics, and not of course only in humans. So it is only a matter of time before groups of artificial intelligences are equipped with mechanisms that allow them to function effectively without human control in time-limited, physically challenging or crisis situations, displaying what might look to an outside observer like conscious behaviour. Read more in Chapter Eleven of Agent Human, Migrating Consciousness Into External Environments – Multiple Consciousnesses In Computers Or Cyberspace.


The material contained on this site is the intellectual property of M G Bell and may not be reproduced, transmitted or copied by any means including photocopying or electronic transmission, without his express written permission.