Multiagent Planning Architecture

A research project funded by the ARPA/Rome Laboratory Planning Initiative (ARPI).

Principal Investigator: Dr. David E. Wilkins
AI Center, SRI International

Subcontractors: Carnegie-Mellon University

Team Members:
Dr. David E. Wilkins, SRI
Dr. Marie desJardins, SRI
Dr. Karen L. Myers, SRI
Dr. Pauline Berry, SRI
Dr. Stephen F. Smith, CMU
Dr. John D. Lowrance, SRI
Thomas J. Lee, SRI

This page describes the Multiagent Planning Architecture as a research program within ARPI. More detail is available in the following documents. These are working papers and are modified frequently.

Note: these documents were modified extensively in October 1998 for MPA version 1.7, and in July 1998 for MPA version 1.5. Major additions include new plan queries, a Generate-plan message, a Meta Planning-Cell Manager, an executor agent, and the use of a plan model based on tasks, plans, and action networks in both the common plan representation and in the MPA message specifications.

A one-page summary of the Multiagent Planning Architecture is also available.


If ARPI technology is ever to impact the military and industrial user community, there must be a means of marshaling a wide range of geographically dispersed components, coordinating their interaction, and flexibly interacting with human planning specialists. The objective of this effort is to develop a new architecture for large, sophisticated planning problems that require the coordinated efforts of diverse, geographically distributed human and computer experts.

MPA is an open planning architecture that facilitates incorporation of new technologies and allows the planning system to capitalize on the benefits of distributed computing for efficiency and robustness. MPA provides protocols to support the sharing of knowledge and capabilities among agents involved in cooperative problem solving. MPA has been demonstrated in the air campaign planning domain, and was used as the infrastructure for ARPI's MAPViS demonstration (formerly TIE 97-1).

Executive Summary

MPA will define a range of generic planning agents (PAs) that provide specific services in response to requests. These agents will be capable of reporting incremental progress, providing whole or partial plans, and continually responding to new constraints, conditions, and suggestions. The activities of these agents are coordinated by meta-PAs (PAs that control other PAs) with specialized knowledge about strategies for division of labor, conflict resolution, and (in the future) plan merging. Each meta-PA is responsible for coordinating activities among its collection of PAs and other planning clusters.

The MPA framework has been used to develop several large-scale problem-solving systems for the domain of Air Campaign Planning (ACP). One such application integrated a set of technologies that spanned plan generation, scheduling, temporal reasoning, simulation, and visualization. These technologies cooperated in the development and evaluation of a complex plan containing more than 4000 nodes. This integration has validated the utility of MPA for combining sophisticated stand-alone systems into a powerful integrated problem-solving framework.

MPA demonstrations show multiple asynchronous agents cooperatively generating a plan or set of alternative plans in parallel, a meta-PA reconfiguring the planning cell during planning, and agents running on different machines both locally and over the Internet. MPA demonstrations employ technologies developed outside SRI, show a flexible and novel combination of planning and scheduling techniques, and demonstrate dynamic strategy adaptation in response to partial results.


The Multiagent Planning Architecture (MPA) is a framework for integrating diverse technologies into a system capable of solving complex planning problems. MPA has been designed for application to planning problems that cannot be solved by individual systems, but rather require the coordinated efforts of a diverse set of technologies and human experts. MPA agents can be sophisticated problem-solving systems in their own right, and may span a range of programming languages. MPA's open design facilitates rapid incorporation of tools and capabilities, and allows the planning system to capitalize on the benefits of distributed computing architectures for efficiency and robustness.

Agents within MPA share well-defined, uniform interface specifications, making it possible to explore a broad range of cooperative problem-solving strategies. Sophisticated systems for planning and scheduling have been decomposed into modules, each of which has been transformed into an agent, allowing experimentation with different degrees of coupling between the planning and scheduling capabilities. We have also explored the definition of organizational units for agents that permit flexible control policies in generating plans. Within MPA, notions of baselevel planning cells and metalevel planning cells have been defined, where the baselevel cells provide sequential solution generation and the metalevel cells employ baselevel cells to support parallel generation of qualitatively different solutions. Metalevel cells provide the ability to rapidly explore the space of solutions to a given planning problem.

MPA is distinguished from other agent architectures by its emphasis on application to large-scale planning problems. The architecture includes agents designed specifically to handle plans and planning-related activities, and its interagent communication protocols are specialized for the exchange of planning information and tasks. Another distinguishing feature, and one of MPA's primary goals, is facilitating the integration of agents that are themselves sophisticated problem-solving systems; most agent architectures instead develop specialized agents suited to operation within that specific architecture rather than incorporating legacy systems.

MPA provides the infrastructure necessary to support a broad range of distributed planning capabilities. At present, however, it does not include mechanisms for coordinating subplans generated by distributed planning agents. We intend to explore algorithms for distributed planning in the future, and believe that our infrastructure will support them.

MPA groups planning agents (PAs) and meta-PAs into Planning Cells. We are building upon the diverse range of planning capabilities already developed under the DARPA-Rome Laboratory Planning Initiative (ARPI), drawing from the ISO reference architecture while extending its capabilities for planning. Multiple planning cells can simultaneously produce alternative plans.

MPA provides wrappers and agent libraries (in both C and Lisp) to facilitate the construction of agents from legacy systems. PRS, a reactive execution system originally developed for NASA, provides the technology for our most sophisticated agent wrappers. PAs will be defined for several classes of ARPI technology, and will communicate using the common plan representation and the agent communication languages already being developed by other ARPI projects.

MPA provides both a message format and a message-handling protocol to support the sharing of knowledge among agents involved in cooperative plan generation. All PAs of a common class are thus outwardly the same, except for differences in the range of generic services provided; inwardly, they employ application-specific data structures and techniques. The MPA protocol is built on top of KQML, but an alternative implementation now exists based on SRI's Open Agent Architecture, and there may soon be an implementation based on Xerox's Inter-Language Unification system.
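As a rough illustration of a KQML-style message (the performative and parameter names below are assumptions for the sketch, not the published MPA message specification), a request from one agent to another could be rendered as an s-expression string:

```python
# Sketch of a KQML-style performative; the performative and parameter
# names are illustrative, not the exact MPA vocabulary.

def make_kqml(performative, **params):
    """Render a KQML-style message as an s-expression string."""
    fields = " ".join(f":{k} {v}" for k, v in params.items())
    return f"({performative} {fields})"

msg = make_kqml(
    "achieve",
    sender="planning-cell-manager",
    receiver="search-manager",
    language="ACT",
    content="(generate-plan :task air-campaign-1)",
)
```

The flat keyword-parameter form is what lets a wrapper in any language (C, Lisp, or otherwise) parse the envelope without understanding the content.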

We will "program" a community of agents in MPA that apply new technologies to air campaign planning. In particular, we are decomposing an ARPI planning technology and an ARPI scheduling technology, so that each technology becomes a set of PAs. This decomposition allows previously distinct ARPI technologies to be more tightly and flexibly integrated, and allows other technologies to replace some of the PAs in a modular fashion. Meta-PAs define different control strategies among the PAs.

Single-Cell Configuration and Demonstration

We use the term configuration to refer to a particular organization of MPA agents and problem-solving strategies. Here, we describe two MPA configurations: a single-cell configuration for generating individual solutions to a planning task (used for our initial 1996 demonstration), and a multiple-cell configuration for generating alternative solutions in parallel (used in the MAPViS demonstration). The use of these configurations for performing planning/scheduling in an Air Campaign Planning domain is described.

The initial demonstration was given in September 1996, and showed a multiagent planner and scheduler, together with a temporal reasoning agent, accomplishing planning/scheduling in the Air Campaign Planning (ACP) domain used in ARPI's IFD-4 (Fourth Integrated Feasibility Demonstration). To demonstrate the capabilities of MPA, we showed multiple asynchronous agents cooperatively generating a plan, the planning-cell manager reconfiguring the planning cell during planning, and agents running on different machines both locally and over the Internet. The following figure depicts the configuration of the agents in the demonstration.

MPA Single Cell Configuration. Blue arrows represent message flow; because all agents communicate with the plan server, those arrows are omitted. Lines without arrowheads show planning cell composition.

The Planning-Cell Manager (PCM) is a meta-PA that controls the entire process, including initialization of the planning cell. The PCM specifies a Planning Cell Designator (PCD) which gives the name of the agent that fulfills each role in the planning cell.
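The role-to-agent binding in a PCD can be pictured as a simple table. The sketch below is hypothetical: the role names follow the agents described for the demonstration, but the actual PCD format is not shown here.

```python
# Hypothetical Planning Cell Designator: maps each role in the planning
# cell to the name of the agent fulfilling it (format is illustrative).
pcd = {
    "plan-server":       "act-plan-server-1",
    "search-manager":    "sipe-2-search",
    "critic-manager":    "sipe-2-critics",
    "temporal-reasoner": "tachyon-1",
    "scheduler":         "opis-1",
}

def agent_for(pcd, role):
    """Look up which agent fulfills a given role in the cell."""
    return pcd[role]
```

Because the PCM resolves roles through the PCD rather than hard-coding agent names, a different agent can be substituted into a role without changing the rest of the cell.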

The PCM and the Act Plan Server are implemented in PRS, which allows complex real-time control of the processing of messages. Eventually, the Critic Manager may also be a PRS-based meta-PA that controls agents for individual critics. Currently, the critic manager for SIPE-2 is being used as an agent, although it has been modified to send messages to temporal reasoning and scheduling agents. The Tachyon agent is in C and employs a C wrapper, while the other agents have Lisp wrappers.

The Act Plan Server is not an ISO reference architecture plan server; it supports new features, including annotations, triggers, and different views of plans. Annotations are used to record features of the plan, and triggers are used to notify agents when those features are posted. The plan is written to the Act Plan Server in the Act formalism, which can be understood by the scheduler and the planner. The Act Plan Server answers queries about the plan, and handles the annotations and the triggers.
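A minimal sketch of the annotation/trigger idea (the class and method names are assumptions made for the sketch): agents post annotations on the plan, and any agent that has registered a trigger on that kind of annotation is notified immediately.

```python
from collections import defaultdict

class PlanServerSketch:
    """Toy model of the Act Plan Server's annotations and triggers."""

    def __init__(self):
        self.annotations = []              # (kind, data) pairs posted so far
        self.triggers = defaultdict(list)  # kind -> list of callbacks

    def add_trigger(self, kind, callback):
        """Register interest in annotations of a given kind."""
        self.triggers[kind].append(callback)

    def post_annotation(self, kind, data):
        """Record an annotation and notify every triggered agent."""
        self.annotations.append((kind, data))
        for callback in self.triggers[kind]:
            callback(data)

notified = []
server = PlanServerSketch()
server.add_trigger("resource-overutilized", notified.append)
server.post_annotation("resource-overutilized", "fuel-tankers")
```

This is the mechanism that lets the PCM react to a scheduler finding (such as overutilized fuel tankers) without polling the plan.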

This planning cell, geographically distributed, is used to produce an air campaign plan. During planning, the Cell Manager repeatedly calls the Search Manager and the Critic Manager, appropriately reacting to the messages that are returned (e.g., by backtracking), until a final plan is found. The Cell Manager may change its planning strategy dynamically. The Search Manager expands the plan to another level of detail and writes the new plan to the Plan Server. The Critic Manager writes annotations about the plan, sometimes modifies the plan, and calls other agents, such as the Temporal Reasoner and Scheduler.

The Scheduler is then called periodically to check the resource allocations. Depending on the PCM planning style, the period can be once per node, once per level, or once per plan. The Scheduler can recommend new resource assignments, which causes the Schedule Critic to modify the plan. The Scheduler posts annotations declaring which resources are overutilized or near capacity. If the resource constraints cannot be satisfied, it reports a schedule failure.
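One way to picture the PCM planning styles (the style and event names below are illustrative, not MPA's actual vocabulary): the styles differ only in which plan-expansion events trigger a scheduling check.

```python
# Sketch of how a PCM planning style might determine when the scheduler
# is consulted; the style and event names are assumptions for the sketch.

def should_call_scheduler(style, event):
    """Decide whether this expansion event triggers a scheduling check."""
    return {
        "per-node":  event in ("node-expanded", "level-done", "plan-done"),
        "per-level": event in ("level-done", "plan-done"),
        "per-plan":  event == "plan-done",
    }[style]
```

A per-node period catches resource problems earliest but costs the most scheduler calls; per-plan is cheapest but may discover overutilization only after the full expansion.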

The demonstration develops a plan in which fuel tankers are overutilized, although this is not known at the outset. The Scheduler posts annotations about overutilized resources in the Act Plan Server. The PCM has posted a trigger on such annotations and is immediately notified. It responds with two different tactics to produce a better plan:

  1. The PCM sends an :advice message to the Planner, which causes the Planner to choose options requiring less fuel for the remainder of the plan expansion. This capability employs SRI's new Advisable Planner. The plan still has flaws because resources were already overutilized before the PCM issued the advice.
  2. The PCM invokes a second search for another plan, this time using advice from the start. This produces a fuel-economic plan in which tankers are not overutilized, again using the Advisable Planner.
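The two tactics above can be summarized as a small dispatch. The :advice message name follows the text; every other name in this sketch is illustrative.

```python
# Sketch of the PCM's two responses to a resource-overutilization
# annotation. Messages are (performative, receiver, content) triples;
# all names except :advice are assumptions for the sketch.

def pcm_response(tactic):
    """Return the messages the PCM would send for the chosen tactic."""
    if tactic == "advise-remaining":
        # Tactic 1: steer the remainder of the current plan expansion.
        return [(":advice", "planner", "prefer-low-fuel-options")]
    elif tactic == "replan-with-advice":
        # Tactic 2: start a fresh search with the advice in force from
        # the beginning, so no flawed prefix survives.
        return [(":advice", "planner", "prefer-low-fuel-options"),
                ("generate-plan", "planning-cell", "air-campaign-task")]
    raise ValueError(f"unknown tactic: {tactic}")
```

The contrast between the two tactics is exactly the flaw noted in the text: advice issued mid-expansion cannot repair the overutilization already committed to the plan, while a fresh search avoids it entirely.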
Overall, this demonstration shows the flexibility provided by MPA. Separate software systems (OPIS, Tachyon, and SIPE-2, using KQML and PRS for support) cooperatively generate a plan. They are distributed on different machines, and they are combined in multiple ways because of the flexible architecture. The Act Plan Server allows flexible communication of the plan among agents. The PCM encodes different strategies for monitoring and controlling the planning process, thus demonstrating dynamic strategy adaptation in response to partial results.

Multi-Cell Configuration and Demonstration

A further demonstration was given in June 1997, and showed multiple planning cells producing alternative plans for a task in parallel. A new agent called the Meta Planning-Cell Manager (meta-PCM) was implemented in PRS and controls the initialization of planning cells, distribution of tasks and advice, and reporting of solutions. For the demonstrations, the meta-PCM controlled two planning cells, which shared some agents such as the plan server. The configuration shown in the following figure was used both in the 1997 demonstration and in the MAPViS demonstration.

MPA Configuration for Multiple Planning Cells. The multiple-cell configuration includes multiple instances of the single-cell configuration coordinated by the Meta-PCM. The planning cells share common plan server, temporal reasoner, and scheduler agents, to reduce the number of running jobs, but multiple instances of the shared agents can also be chosen.

The Meta-PCM controls the entire process, including initialization of planning cells, distribution of tasks and advice, and reporting of solutions. The planning cells operate exactly as described for the single planning-cell configuration, except that they are invoked by the Meta-PCM instead of the user, and they refuse requests if they are already busy.

The multiple planning-cell configuration can be used to produce alternative plans for the same task in parallel. Different advice can be provided with each plan generation request, thus resulting in plans that differ in significant characteristics. As such, the multiple-cell configuration provides the means to rapidly explore varying portions of the overall set of candidate plans.
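The meta-PCM's dispatch pattern can be sketched as follows (all class, method, and advice names are assumptions for the sketch): the same task goes to several cells, each with different advice, and a busy cell simply refuses the request.

```python
# Sketch of a meta-PCM handing one task to several planning cells with
# different advice; cells refuse requests while busy. All names here
# are illustrative.

class PlanningCellStub:
    def __init__(self, name):
        self.name = name
        self.busy = False

    def request(self, task, advice):
        """Accept a plan-generation request unless already busy."""
        if self.busy:
            return None            # cell refuses the request
        self.busy = True
        return (self.name, task, advice)

def dispatch(cells, task, advice_list):
    """Pair each piece of advice with the first cell that accepts it."""
    accepted = []
    for advice in advice_list:
        for cell in cells:
            result = cell.request(task, advice)
            if result:
                accepted.append(result)
                break
    return accepted
```

With two cells and three pieces of advice, only two requests are accepted; the refusal mechanism is what lets the meta-PCM discover which cells are free without tracking their state itself.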

Additional extensions include the integration of agents for plan evaluation, user interaction, and plan visualization. The ARPI Plan Authoring Tool (APAT) from ISX, a legacy system written in Java, fills the roles of user interface, advice manager, and plan visualizer (a service also provided by the VISAGE system from MAYA). The Air Campaign Simulator (ACS) from the University of Massachusetts, written in Lisp, provides Monte Carlo simulations of plans. The VISAGE system provides plan visualization for simulation outputs. Both APAT and ACS read Acts from the Act Plan Server and translate them into their internal representations.


OBJECTIVES (for the next year)



This project is participating in the following TIEs. A description of each can be obtained from the ARPI Roadmap (password required).



The demonstrations described show the flexibility provided by MPA. Separate software systems (OPIS, Tachyon, ACS, APAT, the Advisable Planner, and SIPE-2, using KQML and PRS for support) cooperatively generate and evaluate plans, producing multiple, alternative plans in parallel. These systems are implemented in different programming languages, run on different machines, and are combined in multiple ways through the flexible architecture.

The Act Plan Server allows flexible communication of the plan among agents through the use of annotations, triggers, and views. The PCM encodes different strategies for controlling the planning process, demonstrating dynamic strategy adaptation in response to partial results. The planner and scheduler use legacy systems to provide a new integration of planning and scheduling technologies.

Other sites have been able to download the MPA wrapper and get an existing Lisp program communicating as an MPA agent in one day. Our experience indicates that MPA does indeed facilitate the integration of new technologies, which will encourage experimentation with and use of new technologies.

Publications relevant to this project.

MPA Resources and Links

Back to David E. Wilkins Home Page
Back to AI Center Home Page
Back to SRI International Home Page

David E. Wilkins
Last modified: Tue Jul 10 20:33:33 2001