HOME@

Artificial Empathy and Imitation

Paul Dumouchel@July 2011

last update: 20110721


More and more artefacts, robots, or electronic media interfaces are being designed whose function is to interact in social contexts. These artificial creatures, either physically present in three-dimensional space or existing only on a screen, are called upon to play different roles in social interactions, for example to provide counsel, information, or entertainment (including, of course, sexual services, which are envisaged for future generations of robots, as well as military uses, though access to information concerning these is limited and difficult). The goal of these technological developments is to replace human beings in certain social contexts with artificial creatures with which (with whom?) we can credibly exchange, as if they were human. We can refer to this broad technological endeavour as “artificial empathy”, and it has major social, ethical, and epistemological implications. Most of the proposed uses of these artificial creatures are in the domain of health care and in the service industry in general, as counsellors, experts, or providers of information in public places like train stations, museums, businesses, or administrative offices. The latter uses can be seen simply as the design of more user-friendly computerised services, but in the medical and educational domains robots are destined to become “friends” and “companions”, whether in schools, in hospitals, or in old folks’ homes, and sometimes even substitutes for more “intimate friends.” Does artificial empathy constitute the future of “care”?

Apart from the many difficult and interesting technical issues such an enterprise entails, it also raises a number of important philosophical, ethical, and socio-political questions. It also presents interesting challenges and questions in relation to the issue of imitation, which has been central in much of my past research. There are at least three reasons for this. The first is that roboticists realise that, in order to make an artificial agent that can appropriately interact with humans in an open context, you cannot program into the machine beforehand all the relevant social information. You need to make a machine that can learn, and the central mechanism of social learning, according to them, is imitation. In other words, if you want to create a human-like artificial agent, create an agent that can imitate. Furthermore, because we are dealing here with a community of researchers who come from many different disciplines (cognitive science, psychology, computer science, primatology, medicine, neuroscience, philosophy, electrical engineering, etc.), there is no commonly shared preconceived idea about what ‘imitation’ is. These researchers are essentially interested in “what works”.

This is related to the second reason. People who design and experiment with such robots do not consider that they are doing applied science. They view their artefacts and artificial agents as scientific instruments, as ways of discovering the nature of learning, of imitation, or of social attachment. They do not see themselves so much as creating new and better technology; rather, they construe their enterprise as testing theories and discovering the nature of social interactions. They are engaged in a process of discovery and think that we will know what imitation is when we can make a robot that can imitate; they are not trying to make a robot that applies this or that theory of imitation.

There is a third, closely related reason why artificial empathy research should interest those who reflect upon the issue of imitation: imitation in artificial agents is essentially non-mentalist. There is no mental state inside a robot’s ‘head’ that corresponds to imitation. This is thus imitation without representation, in a radical sense. As Lola Canamero of the Feelix Growing Project (http://www.feelix-growing.org/) argues, the important point is that capacities like imitation or emotions can emerge directly from social interaction, without there being inside the robot any module or particular subsystem that is responsible for this behaviour (conference at the Ritsumeikan School of Core Ethics and Frontier Sciences, July 2008).

