Automation as Team Player

This page discusses the challenges of making automation a 'team player' in joint human-agent activity.

Joint activity is defined as an extended set of actions carried out by an ensemble of people coordinating with each other. To carry out joint activity, each party effectively enters into a Basic Compact: an agreement to facilitate coordination, work toward shared goals, and prevent breakdowns in team coordination. Effective coordination requires establishing and maintaining common ground, and an intelligent agent must fulfill the requirements of the Basic Compact in order to engage in common-grounding activities. researchgate

Jeff Miller provided us with this summary, which has been circulating within his community.

Woods, Hoffman, et al., "Ten Challenges for Making Automation a 'Team Player' in Joint Human-Agent Activity," IEEE Intelligent Systems, November/December 2004. Uploaded with enhancements requested by David D. Woods, 22 August 2013.

• A BASIC COMPACT. To be a team player, an intelligent agent must fulfill the requirements of a Basic Compact to engage in common-grounding activities (and must let other team members know if it cannot fully contribute).

• ADEQUATE MODELS. To be a team player, an agent must model other participants' intentions and actions: are they having trouble? Are they doing OK? Are they blocked? How are others responding to any disruptions?

• PREDICTABILITY. Human-agent team members must be mutually predictable. Currently, agents that take the initiative to adapt to humans may cause humans to hold back, fearing that the agent will jump to a bad conclusion. Therefore, make adaptations obvious and simple to avoid human confusion and reluctance. Adaptability and predictability are in tension.

• DIRECTABILITY. Agents must be directable. Policies are a way of expressing an agent's current limits and of changing those limits without having to change the agent's code (see the sketch after this list).

• REVEALING STATUS AND INTENTIONS. Agents must make their status and intentions obvious. Examples of trouble in this area come from jet aircraft automation. Design agents to be transparent.

• INTERPRETING SIGNALS. Agents must be able to interpret signals and form models of their teammates. (An example is the subtext carried by tone and pauses in spoken communication.)

• GOAL NEGOTIATION. Agents should be able to represent, reason about, and modify their goals in order to coordinate effectively as team members.

• COLLABORATION. To facilitate give-and-take, agents' understanding, problem solving, and task execution must be incremental and subject to negotiation.

• ATTENTION MANAGEMENT. When does an agent need to be a squeaky wheel, and when should it keep the channel clear for urgent communication? Airplane cockpit automation is used as an example of being too quiet.

• COST CONTROL. The costs of coordination should be kept in mind. Mutual predictability, common understanding, and shared purpose help reduce these costs.

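To make the directability point concrete, here is a minimal sketch, not from the paper, using hypothetical names (Policy, Agent, max_autonomy_level): the agent's limits live in a policy object that a human teammate can change at runtime without touching the agent's code.

```python
# Minimal sketch of policy-based directability (hypothetical names, illustrative only).
from dataclasses import dataclass, field


@dataclass
class Policy:
    """Expresses the agent's current limits as data rather than code."""
    max_autonomy_level: int = 1          # 0 = advise only, 1 = act with confirmation, 2 = act freely
    forbidden_actions: set = field(default_factory=set)

    def permits(self, action: str, autonomy_needed: int) -> bool:
        return (action not in self.forbidden_actions
                and autonomy_needed <= self.max_autonomy_level)


class Agent:
    """Checks its policy before acting and says why it is blocked rather than failing silently."""

    def __init__(self, policy: Policy):
        self.policy = policy

    def attempt(self, action: str, autonomy_needed: int) -> str:
        if self.policy.permits(action, autonomy_needed):
            return f"executing {action}"
        # Revealing the blockage keeps teammates informed (part of the Basic Compact).
        return f"blocked by policy: {action}"


# A human teammate redirects the agent by editing the policy, not the agent's code.
policy = Policy()
agent = Agent(policy)
print(agent.attempt("deploy", autonomy_needed=2))   # blocked by policy: deploy
policy.max_autonomy_level = 2                       # limits loosened at runtime
print(agent.attempt("deploy", autonomy_needed=2))   # executing deploy
```

The design choice the sketch illustrates is simply that limits are data, so they can be tightened or relaxed by the people directing the agent.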

Having now posted this page, I discover that this and related work has been circulating here in wiki as well.

Yesterday was the latest round of my lamenting that my dev tools are often awful team players in the sociotechnical system I interact with on a day-to-day basis. post

Collaboration is crucial in software teams – and not only among people. We need our software, our tools, and our automations to collaborate as well. But what does that mean? video

Jessica Kerr explains how modern software teams must collaborate not only with each other but also with their own tools of automation. She introduces the line of representation and presents four prerequisites (the pillars of joint activity) and four precautions (ironies of automation). article