28 Aug 2013

New Paper and Slides of: "Capturing the Implicit – an iterative approach to enculturing artificial agents"

A paper presented at the workshop on "Computers as Social Agents" at IVA@2013

Available from: http://cfpm.org/cpmrep221.html

Capturing the Implicit – an iterative approach to enculturing artificial agents
Peter Wallis and Bruce Edmonds
Abstract. Artificial agents of many kinds increasingly intrude into the human sphere. SatNavs, help systems, automatic telephone answering systems, and even robotic vacuum cleaners are positioned to do more than exist on the side-lines as potential tools. These devices, intentionally or not, often act in a way that intrudes into our social life. Virtual assistants pop up offering help when an error is encountered, the robot vacuum cleaner starts to clean while one is having tea with the vicar, and automated call handling systems refuse to let you do what you want until you have answered a list of questions. This paper addresses the problem of how to produce artificial agents that are less socially inept. A distinction is drawn between things which are operationally available to us as human conversationalists and the things that are available to a third party (e.g. a scientist or engineer) in terms of an explicit explanation or representation. The former implies a detailed skill at recognising and negotiating the subtle and context-dependent rules of human social interaction, but this skill is largely unconscious – we do not know how we do it, in the sense of the latter kind of understanding. The paper proposes a process that bootstraps an incomplete formal functional understanding of human social interaction via an iterative approach using interaction with a native. Each cycle of this iteration involves entering and correcting a narrative summary of what is happening in recordings of interactions with the automatic agent. This interaction is managed and guided through an “annotators’ work bench” that uses the current functional understanding to highlight when user input is not consistent with the current understanding, suggesting alternatives and accepting new suggestions via a structured dialogue. This relies on the fact that people are much better at noticing when dialogue is “wrong” and at making alternative suggestions than at theorising about social language use. This, we argue, would allow the iterative process to build up understanding and hence CA scripts that fit better within the human social world. Some preliminary work in this direction is described.
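The iterative cycle the abstract describes – flag what the current understanding cannot explain, let a native annotator correct it, and fold the correction back into the model – can be sketched in a few lines. This is a minimal illustrative sketch only; all names (`annotate_dialogue`, the dictionary-based "model") are invented here and are not from the paper or its workbench software.

```python
# Hypothetical sketch of one cycle of the "annotators' work bench" loop.
# The model is reduced to a dictionary mapping utterances the system can
# already explain to their labels; real dialogue understanding is far richer.

def annotate_dialogue(utterances, known_patterns, ask_annotator):
    """One iteration: flag utterances the current understanding cannot
    explain, collect the annotator's corrections via a (here trivial)
    structured dialogue, and fold them back into the model."""
    corrections = []
    for utt in utterances:
        if utt not in known_patterns:      # current understanding fails here
            label = ask_annotator(utt)     # annotator supplies a correction
            known_patterns[utt] = label    # model is updated in place
            corrections.append((utt, label))
    return corrections

# Toy usage: the "annotator" labels anything unknown as a repair move.
model = {"hello": "greeting"}
cycle1 = annotate_dialogue(["hello", "eh?"], model, lambda u: "repair")
cycle2 = annotate_dialogue(["hello", "eh?"], model, lambda u: "repair")
# cycle1 flags "eh?"; cycle2 flags nothing, since the model now covers it.
```

The point of the loop is the one the abstract makes: the annotator is never asked to theorise about social language use, only to notice and correct the places where the system's current understanding breaks down.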
