John Paul Bichard's just-completed LIREC documentary deftly rises to the challenge of outlining the project's sometimes bewildering complexity and diversity while at the same time tracing the underlying connections between its many facets. Wrapped in beautiful graphics and pleasing production values, it will surely appeal to those with no prior knowledge of the LIREC project or its background, even while it conveys some of its technical breadth and depth.
Exploring and Designing our Future Robot Companions focuses on the people behind the robots in several thematically clustered interviews with representatives of each of the ten organisations involved – interspersed with tantalising glimpses of the robotic entities and devices in action. Through the words of the researchers themselves, we discover that the project is aimed squarely at what is surely set to become one of the defining concerns for robotics, computing, and AI in the 21st century: how to design artificial companions in a socially appropriate and useful way.
Introducing the project’s scope, Prof. Peter McOwan of Queen Mary University emphasises that LIREC is unique in this emerging field of enquiry in that it incorporates long-term studies situated in real-world environments, such as the home or office. It also uniquely focuses on ways to implement the migration of companion entities across platforms – allowing them to move seamlessly from a robot embodiment onto a handheld device, for example, as a virtual character. It is evident from the outset of the documentary that this is an ambitious project, incorporating a vast array of interconnected technological, social and philosophical questions: physical robot design, software architecture, modelling the behaviour and emotions of companions as well as their capacity to detect the emotional expressions of humans, not to mention questions of ethics, memory, and longitudinal studies of human-companion relationships.
Games provide an ideal context for exploring some of these questions, as Prof. Ana Paiva of INESC-ID explains. The chess-playing iCat, the dinosaur Pleo, and the EMYS head social robot are artificial companions that embody the vital software architecture – the “agent mind” – that generates intelligent behaviour: a symbolic architecture that incorporates the capacity for emotional recognition and response, the development of which has been INESC-ID’s primary contribution to LIREC.
What kinds of behaviour should an artificial companion ideally be programmed to generate? Dr. Adam Miklosi of Eötvös Loránd University believes that dogs provide an excellent starting point for modelling the behaviour of robots. From studies of dog ethology and the kinds of relationship that develop between humans and their canine companions, his research team has derived models that transfer usefully to the domain of robots and artificial entities. Meanwhile, Dr. Carsten Zoll and his team at the Otto-Friedrich-Universität have been working with several LIREC partners to examine the behaviour and psychology of human-human and human-robot interaction, while Mattias Jacobson at SICS has focused on long-term studies of human-robot companionship in the real world. By triangulating and integrating research on human-dog-robot interaction in its many dimensions, novel models and metaphors for responsive artificial intelligence will no doubt emerge.
Evolving new models and metaphors of software architecture that incorporate an emotional dimension to enhance user experience has been the main objective of Secundino Correia and his team at Cnotinfor. They are responsible for Little Mozart, software designed to help children learn music through a virtual character. Mr. Correia explains that the artificial character possesses a repertoire of emotional responses and the capacity to remember in a human-like, emotionally-coloured way, as well as the ability to migrate across devices.
For an artificial companion to become embodied, it needs a physical body to reside in; at the same time, its capacity for human interaction will be vastly enhanced if it can see the world around it in a human-like way, and especially if it can recognise human faces and even read the emotions they express. Prof. Krzysztof Tchon and Dr. Krzysztof Arent’s team at WRUTS worked on the physical design and prototyping of robots. They discuss the EMYS head, a robot resembling the eponymous pond turtle and capable of expressing emotions through its physiognomy. Regarding the sensory apparatus of robots, Prof. McOwan describes the emotional tracker system developed at Queen Mary University: software that can detect human faces and be trained to recognise the facial expressions associated with various emotions. This information can then be fed back into the robot’s agent mind architecture, allowing it to acquire an emotional reading of its human companion and respond appropriately through the physical repertoire of its own emotional expressions.
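The feedback loop McOwan describes – detect a face, classify its expression, and let the companion answer through its own expressive repertoire – can be sketched in miniature. Everything below (the class names, the emotion labels, the confidence threshold) is a hypothetical illustration for the reader, not the actual Queen Mary tracker or the LIREC agent mind software:

```python
# Illustrative sketch only: names, labels and threshold are hypothetical,
# not the actual QMUL emotional tracker or LIREC agent mind API.
from dataclasses import dataclass


@dataclass
class DetectedExpression:
    """Hypothetical output of a vision front-end: a recognised facial
    expression plus the classifier's confidence in it."""
    label: str         # e.g. "happy", "sad", "neutral"
    confidence: float  # 0.0 .. 1.0


class AgentMind:
    """Toy agent mind: maps the companion's reading of the human's
    emotion to a response from its own expressive repertoire."""

    RESPONSES = {
        "happy": "smile",
        "sad": "tilt head sympathetically",
        "neutral": "idle gaze",
    }

    def respond(self, detection: DetectedExpression) -> str:
        # Ignore low-confidence readings rather than react wrongly.
        if detection.confidence < 0.5:
            return "idle gaze"
        return self.RESPONSES.get(detection.label, "idle gaze")


mind = AgentMind()
print(mind.respond(DetectedExpression("sad", 0.9)))    # tilt head sympathetically
print(mind.respond(DetectedExpression("happy", 0.3)))  # low confidence -> idle gaze
```

The design point the sketch tries to capture is the closed loop itself: perception feeds the agent mind, and the agent mind drives the robot's own emotional display, which in turn shapes the human's next expression.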
Prof. Ruth Aylett of Heriot-Watt University discusses companion migration across various physical and virtual embodiments, and some of the ethical issues this can raise. Prof. Aylett says that artificial companions should be able to share your experience wherever you go, and outlines a scenario where they can travel with people on handheld devices, migrating onto other platforms as necessary, such as the wireless network of a building. With this kind of high-level connectivity, and given that these entities may possess long-term memory, concerns over ethics and privacy naturally arise, since agents could leak a potentially enormous amount of personal data in compromising ways. The familiar privacy concerns surrounding social networking sites and search engines pale in comparison to such scenarios, and so implementing mechanisms to deal with such issues remains an important consideration.
Integrating the artificial companion with its surrounding environment is another challenge, which Prof. Kerstin Dautenhahn and her team at the University of Hertfordshire have explored in their Robot House project. By developing an environment that as far as possible looks, feels and behaves like a home environment (rather than a research laboratory), Prof. Dautenhahn says that people begin to feel more comfortable and can more easily visualise the possibilities of having robot companions in their own homes. Dr. Kheng Lee Koay, also of the University of Hertfordshire, designed the Sunflower robot especially for this project, and the team additionally made use of Fraunhofer’s Care-O-bot 3 and Sony’s AIBO for different tasks and activities.
While the LIREC project officially wraps up this July, the infrastructure of research and development it leaves as a legacy could foreseeably inform and inspire very exciting future developments in the fields of robotics and artificial intelligence as long as it remains open and publicly accessible. One such spinoff is already in progress thanks to Dave Griffiths of FoAM, whose online multiplayer game Germination X challenges Farmville with a radically different premise and a new set of algorithms, derived from LIREC research.
With the enthusiastic collaboration of all partners, Exploring and Designing our Future Robot Companions manages to meticulously summarise the complexity of the LIREC project, without losing sight of the big picture. Furthermore, it reminds us that beyond their expressed aims and objectives, projects such as this acquire a whole added dimension of value: that of connecting highly diverse and talented individuals doing cutting edge research and development – surely a key ingredient for radical innovation in any field.