
SEAS Basic Concepts

Elements of Structured Argumentation

Our approach is based on the concept of a structured argument. A structured argument is based on a hierarchically organized set of questions that is used to assess whether an opportunity or threat of a given type is imminent. This hierarchy of questions is called the argument's template (as opposed to the argument, which answers the set of questions posed by the template). The skeletal structure of this hierarchy is called the argument skeleton. Questions higher in the skeleton, called derivative questions, are answered by combining the answers to the questions immediately below them. This hierarchy of questions supporting questions may go a few levels deep before bottoming out in questions that must be directly assessed and answered; these are called primitive questions.

The figure below illustrates the skeleton of a seventeen-question argument template, with five derivative questions (1, 1.1, 1.2, 1.3, 1.4) and twelve primitive questions (1.1.1, 1.1.2, 1.1.3, 1.2.1, 1.2.2, 1.2.3, 1.3.1, 1.3.2, 1.3.3, 1.4.1, 1.4.2, 1.4.3). The links represent support relationships among the questions. A derivative question is supported by all the derivative and primitive questions immediately below it. For example, question 1 is answered based upon the answers to 1.1, 1.2, 1.3, and 1.4, and 1.2 is answered based upon the answers to 1.2.1, 1.2.2, and 1.2.3.


An inference method is used to automatically answer the derivative questions (light blue nodes, below) based upon the answers to primitive questions (darker blue nodes). The user answers the primitive questions in the question hierarchy, and the answers to the derivative questions are automatically calculated. An inference method pairs a fusion method with each derivative question. A fusion method combines the answers to the supporting questions to derive an answer to a derivative question. A typical fusion method might take the maximum answer as the conclusion when combining several answers assessed along a continuous scale. The same argument skeleton and fusion methods are typically used to support multiple argument templates over widely differing topics.
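The derivation described above can be sketched in a few lines. This is an illustrative sketch only, not the SEAS implementation: the dictionary structure, the `infer` function, and the use of the built-in `max` as a Maximum-style fusion method are all assumptions made for the example.

```python
# Hypothetical sketch: derivative questions are answered by fusing the
# answers of their supporting questions; primitive questions are answered
# directly by the user. Answers are light positions 0 (green) .. 4 (red).

def infer(question, fusion, answers):
    """Recursively answer a question in the skeleton."""
    children = question.get("supports", [])
    if not children:                      # primitive question: user-supplied
        return answers[question["id"]]
    fuse = fusion[question["id"]]         # fusion method paired by inference method
    return fuse([infer(c, fusion, answers) for c in children])

# A two-level skeleton: question 1 supported by 1.1 and 1.2
skeleton = {"id": "1", "supports": [{"id": "1.1"}, {"id": "1.2"}]}
fusion = {"1": max}                       # Maximum fusion for the derivative question
answers = {"1.1": 1, "1.2": 3}            # user answers to the primitive questions
print(infer(skeleton, fusion, answers))   # -> 3 (orange drives the conclusion)
```

Answering either primitive question differently would automatically change the derived answer, which is the behavior the inference method provides.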

To complete an argument template given an argument skeleton and inference method, one associates a multiple-choice question with each node in the skeleton. To facilitate the rapid comprehension of arguments, we use a traffic light metaphor, relating choices to colored lights along a linear scale, from green at the low end to red at the high end. The questions in a template are typically yes/no or true/false; the multiple-choice answers for primitive questions partition this range, associating an answer with each colored light. Typically, a five-light scale is used (green, yellow-green, yellow, orange, red). Here green might correspond to false, red to true, and the other three to varying degrees of certainty (see below). No multiple-choice answers are associated with derivative questions; within arguments, their answers are strictly summarized by lights indicating their degree of certainty.

The challenge in authoring an argument template is to break the problem down into a hierarchically structured set of questions (see the example above) that matches the selected argument skeleton and whose answers interrelate in the way the fusion methods dictate. Therefore, it is critical that the author understands the structure of the argument skeleton and the effect of the fusion methods when fashioning the questions and multiple-choice answers that will be posed by the argument template.

1. POLITICAL: Is this country headed for a political crisis?

Arguments are formed by answering the questions posed by a template. Answers are chosen from the multiple choices given by the associated template. The rationale for each answer is recorded in text. Upon answering each question, the template's inference method is applied, deriving the answers to derivative questions. Using the traffic light metaphor, arguments can be displayed as a tree of colored nodes (see below). Nodes represent questions, and colors represent answers. The line of reasoning can be easily comprehended and the user is able to quickly determine which answers are driving the conclusion.

Information used as evidence to support the answers given in an argument is recorded as part of the argument. When information that is potentially relevant to answering a question posed is first found, it is entered as an exhibit. An exhibit assigns a unique identifier to the information, and records information for accessing it and a citation string for referring to it (typically consisting of some combination of title, author, and date). When the relevance of the information to the question at hand is determined, the exhibit is promoted to evidence. The relevance is recorded in two ways: as text explaining the significance and as the answer to the question that would be chosen if the answer were to be based solely upon this evidence. When evidence is present, the rationale typically explains how the collective evidence supports the answer chosen, explaining away that evidence that contradicts the answer and weaving together the supporting evidence to arrive at the stated conclusion.

A key difference between an expert and novice analyst is that the expert knows where to look for relevant information. Discovery tools provide a means for recording where to look for relevant information. They are typically recorded as part of the argument template, but can also be added as part of an argument. In either case, a discovery tool is associated with a question. A typical discovery tool might invoke a query to a search engine (e.g., Google) or reference a periodical on the web. In either case, the resulting information is examined to determine what if anything should be turned into an exhibit or evidence.

All of the arguments and templates thus far discussed are uni-dimensional. That is, each is designed to arrive at the answer to a single overall question, the one uppermost in the hierarchy. Multi-dimensional arguments and templates are made up of multiple uni-dimensional components, each of which addresses a common topic from a different perspective. For example, the assessment of the stability of a nation state might best be addressed by several independent assessments of the leadership, social, political, military, external, and economic situations (see below).

Other elements include collections, signal flags, and memos. Collections are named containers into which objects can be placed, including other collections. They are used as an organizational tool to group objects on common topics, making them easier to find when needed. Signal flags are annotations on exhibits or evidence that mark them for analyst attention. Typically these are used to signal the arrival of new information that has yet to be fully incorporated into the associated argument. Memos are annotations that can be placed on any object. Unlike signal flags, they include a textual subject and body through which a message is conveyed pertaining to the objects to which they are attached. Memos and signal flags are both devices for communication among multiple contributing analysts.

Arguments, templates, and collections all have situation descriptors and publication information. Situation descriptors capture the situation an argument addresses, the type of situation a template is intended to address, or the situation or type of situation to which a collection pertains. They include both textual elements and elements selected from fixed taxonomies of terms that typically capture the who, what, where, and when of situations. Publication information determines who has access to an object and whether or not they can modify that object.

Using Exhibits

There are four basic types of exhibits within SEAS: citations, files, URLs, and arguments. A citation exhibit is used to reference a book, magazine, report, or other document that is not available online. A bibliographic reference, recorded as the exhibit's citation, is the sole means by which a user can find and read such an exhibit. The basis for a file exhibit is a file provided by the user who created the exhibit. That file is archived on the SEAS server for future reference, and is downloaded to any user's machine when the inspect button is pushed for such an exhibit. Citations are recorded for file exhibits, as well as all other types of exhibits, but are only used to textually identify them when they appear in arguments or collections. URL exhibits are typically used to reference web pages, but can be used to reference any object that is web-accessible. When their inspect button is pushed, their URL is followed by the client web browser. Finally, SEAS arguments can be directly used as argument exhibits. Typically, a uni-dimensional argument is used as an exhibit where the question it answers is the same (or nearly the same) as the primitive question in the argument to which it is attached. When such exhibits are promoted to evidence, their symbolic relevance (i.e., the answer induced by this evidence) can be automatically inherited from the supporting uni-dimensional argument. If the ultimate answer to the uni-dimensional argument changes, so will the relevance of this evidence.
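The four exhibit types share a citation and a unique identifier, and differ in how the underlying information is accessed. A minimal data-model sketch follows; the class, field names, and ID scheme are illustrative assumptions, not the actual SEAS schema.

```python
# Illustrative data model for the four exhibit types described above.
# Field names and the ID counter are assumptions for this sketch.
from dataclasses import dataclass, field
from typing import Optional
import itertools

_ids = itertools.count(1)   # stand-in for SEAS's unique identifier assignment

@dataclass
class Exhibit:
    kind: str                        # "citation" | "file" | "url" | "argument"
    citation: str                    # textual identification in arguments/collections
    location: Optional[str] = None   # archived file path or URL, when applicable
    id: int = field(default_factory=lambda: next(_ids))

book = Exhibit("citation", "Smith, Analysis Methods, 1999")
page = Exhibit("url", "Example country profile", "http://example.org/profile")
```

A citation exhibit carries only its bibliographic reference, while file and URL exhibits additionally record where the information can be retrieved.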

Although there are only four types of exhibits, there are more than four ways to create them. Exhibits used elsewhere can be identified for reuse. When reused, it is actually a copy of the exhibit/evidence that is used; thus, if the original is changed, the change will not affect the places where it has been reused. Besides a user specifying a file on their local disk to be used as the basis for a file exhibit, they can also specify a file using a URL. Finally, a URL can be used to specify a location where references to multiple potential exhibits can be found. These potential exhibits can be selectively or collectively turned into exhibits, at the user's discretion (see Using Discovery Tools below).

Using Discovery Tools

Discovery tools are recommended means of acquiring information to answer a question. They are most often associated with a template and are available for use in all arguments built upon that template. If a discovery tool is directly associated with an argument, rather than its underlying template, it will generally not be available in other arguments. Of course, if that argument is copied, then copies of that discovery tool will be in the argument copies. If a discovery tool is of general use to anyone answering a question posed by a template, it is best if it is part of the template. If it is not of general use, but might be of periodic use during argument development, then it is best associated with the argument.

Discovery tools are based upon either a template or a URL. If they are based upon a template, when triggered, they create new arguments based upon that template and add them as exhibits. Discovery tools based upon URLs trigger those URLs. These might constitute parameterized calls to search engines, references to web portals or pages, or calls to other web-accessible tools. In their standard form, it is up to the user to examine the page or file returned, and manually create one or more exhibits based upon the returned information. Of course, copy-paste and drag-and-drop can aid in this process.

Another option is to create discovery tools that produce Exhibits from a URL. These assume that the URL will return references to multiple potential exhibits. If the page/file returned is in RSS, then the Items found in it are potential exhibits; if the page/file returned is in HTML, then the HREFs found in it are potential exhibits. When such discovery tools are triggered, the user (by default) is presented with a dialog where s/he can selectively choose which of the potentials to turn into exhibits; there is also an option for turning these into URL or file exhibits. However, there is another option: when such a discovery tool is defined, it can be made auto-populating; that is, it can be made to skip the dialog and turn every potential found into an exhibit, without any user intervention. When such auto-populating discovery tools are present, buttons are added to the arguments that include them that, when pushed, trigger all such discovery tools in the argument. Since each new exhibit is annotated with a red signal flag, the new exhibits can be easily found. Retriggering these auto-populating discovery tools will not reintroduce any potential that is already an exhibit or evidence. Therefore, it is best to retain those exhibits that have been deemed irrelevant to prevent their reintroduction the next time the auto-populating discovery tools are triggered; instead, signal their irrelevance by lowering their signal flags.
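The auto-populating behavior for an HTML-based discovery tool can be sketched as follows. This is a toy illustration under stated assumptions: the naive regex for HREF extraction, the exhibit dictionary shape, and the `auto_populate` function name are all inventions for the example, not SEAS code.

```python
# Sketch of an auto-populating discovery tool over an HTML page:
# every HREF found is a potential exhibit, new exhibits are flagged red,
# and potentials already turned into exhibits are never reintroduced.
import re

def auto_populate(html, existing_urls):
    potentials = re.findall(r'href="([^"]+)"', html, flags=re.IGNORECASE)
    new_exhibits = []
    for url in potentials:
        if url not in existing_urls:          # skip known exhibits/evidence
            new_exhibits.append({"url": url, "signal_flag": "red"})
            existing_urls.add(url)
    return new_exhibits

html = '<a href="http://a.example/1">one</a> <a href="http://a.example/2">two</a>'
created = auto_populate(html, {"http://a.example/1"})
print(created)   # only the second link becomes a new, red-flagged exhibit
```

Note how retaining the first URL in the existing set is what prevents its reintroduction, mirroring the advice above to keep irrelevant exhibits and lower their flags instead of deleting them.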

Discovery tools can also be associated with some kinds of collections. When present, they serve the same role of finding potentially relevant information. Auto-populating discovery tools can be associated with collections; these automatically fill the collections with potentially relevant exhibits. When present, there is a button associated with the collection for triggering them.

Using Collections

Collections are named containers into which one can place SEAS objects on a common theme. That theme is partially expressed by the name given a collection and the situation descriptor associated with it. The type of the collection can be used to further express this theme. A sequential collection indicates that the items in the collection are linearly ordered and constitute a series. One element in the series does not replace a previous element, but adds to it, by addressing a different aspect of the theme, usually a different time period. For example, a sequential collection is an ideal way to organize monthly arguments on a common topic, where each argument assesses the situation during a different month. Typically, the collection is incremented by copying the first item and adding the copy as the second.

On the other hand, each item in a versioning collection is meant to replace the previous item, typically correcting or enhancing it. Its items too are linearly ordered, but there is typically only one item in active use, the current item, while the items that came before it are retained to ensure the integrity of earlier assessments, and as an historical record. Besides an item being designated as current, other items can be designated as the previous or next item. The next item is the one in line to become the next current item, at which time the present current item will become the previous. A versioning collection is ideal for tracking improvements and enhancements to a template over time. The initial version is established as the current one while the next one is under development. When the next one is ready to replace the current, the role of the current is changed to previous, the role of next changed to current, and a new copy of the next (now current) template is added to the collection and designated the next item. In so doing, arguments developed on earlier versions of the template are still based upon the same versions, yet the versioning collection makes it clear that there are newer versions available and which is the best to build upon at the moment.

An alternatives collection captures the idea that its items are in competition with one another to be designated the best; the order in which the items are listed is of no consequence. This type of collection can be used to organize arguments that represent differing opinions on a common topic. If all such arguments are based upon a common template, then a consensus argument can be automatically produced through a join (see Joining Arguments below). A miscellaneous collection indicates that there is no additional theme and that the order in which the items are listed is of no consequence. Such a collection might be used to collect exhibits on a common topic for later use in support of arguments.
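The current/previous/next rotation described for versioning collections can be sketched in code. The class and method names here are illustrative assumptions, not the SEAS implementation.

```python
# Sketch of role rotation in a versioning collection: the next item
# becomes current, and the old current becomes previous.
import copy

class VersioningCollection:
    def __init__(self, first_item):
        self.items = [first_item]
        self.current, self.previous, self.next = 0, None, None

    def start_next(self):
        """Add a copy of the current item as the next version in line."""
        self.items.append(copy.deepcopy(self.items[self.current]))
        self.next = len(self.items) - 1

    def promote_next(self):
        """Next becomes current; the old current becomes previous."""
        self.previous, self.current, self.next = self.current, self.next, None

c = VersioningCollection({"name": "template v1"})
c.start_next()                           # begin developing the next version
c.items[c.next]["name"] = "template v2"  # ...enhance the copy...
c.promote_next()                         # v2 is now current, v1 is previous
print(c.items[c.current]["name"])        # -> template v2
```

The superseded v1 remains in the collection, preserving the integrity of any arguments built upon it, exactly as the historical-record behavior above describes.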

To encourage the use of sequential and versioning collections, we have added a "one click" versioning button to the viewer/editors for arguments and templates. Pushing the version button in the auxiliary toolbar will copy and save the current version of the argument or template to a sequential or versioning collection. The resulting dialog lists all of the sequential and versioning collections that include the argument or template, with a button to open each collection to see its contents.

In general, collections can be used to organize objects for easy access. Each user has a home collection that is included at the top of the SEAS Object Manager. Opening this home collection immediately reveals all of the items the user has placed in it. If it contains other collections, then those can be opened in turn, revealing their contents. In this way a user's home collection plays a similar role to a user's home directory in a computer file system, with embedded collections acting much like subdirectories. Unlike directories, collections have situation descriptors, types, publication information, and (sometimes) roles, making it even easier to find and share information. Further, if signal flags are raised or visible memos attached to objects within the user's home collection, it is so annotated, as are the objects within it, making it easy to quickly navigate to those objects needing attention.

Using Memos

Memos provide a means for annotating SEAS objects with the equivalent of sticky notes, formatted as memos. These can be used to record personal reminders or as a means of communicating with other users that have access to the objects to which they are attached, or the collections that they are in. Since memos include both Authors and Audience, access to them can be further restricted to specific individuals or groups (see Publishing, Collaboration, and Access Control below). Since they can be placed on published objects, they provide a way to mark up what are otherwise unmodifiable objects. They include text fields, the Subject and Body, and a Type that is selected from a list. The memo Type indicates the purpose of the memo: they can be used to leave Instructions for others on how to use arguments/templates/collections/exhibits/evidence/discovery tools, to Critique any such objects, to record overriding Assumptions, to attach a Summary, to state the Context within which this object was/should be used, to indicate what is left To-Do, to indicate that an object is For-Review by others, or to attach a miscellaneous Comment. When viewing an argument, memos attached to its underlying template are visible, meaning that memos pertaining to instructions, assumptions, and context for the template's use are visible when arguments are created based upon them.

Each user can control which of the memos they have access to are visible. The parameters associated with each of the major viewer/editors include settings for which types of memos are to be visible. Only those included are displayed. Within the Memo Manager, any given memo can be set to not Display?, no matter the memo type settings. If any memo is not visible in a viewer/editor due to any of these settings, then the button that activates the Memo Manager will have blinking lines across it, indicating that some memos are being hidden from view. Within the Memo Manager, memos can be deleted. If it is an author that deletes a memo, then it is deleted for everyone; if it is a member of the audience that deletes a memo, then it is only deleted for them, with no effect on others.

Both signal flags and memos are used as a means of alerting, but they differ in several significant respects. Signal flags are meant to signal things that need to be addressed by someone (anyone); as soon as it is addressed, the flag is lowered; once lowered by one, it is lowered for all. On the other hand, memos can be used to alert a group to things that they all must do. For example, if a For-Review memo is created with a group as its Audience, then deletion of the memo by any member of the group does not delete it for the others; each member of the group must individually address it. Of course, if the group is included as Authors of the memo, deletion by any member of the group will delete it for all. Another difference is that a signal flag is visible to everyone that has access to the object to which it is attached; a memo can be further restricted to any subset of those that have access to the object. Signal flags have no type or other content and cannot be selectively filtered like memos. Memos also resemble email messages in some ways. However, they differ in that they are attached to objects and they can be modified or retracted by their Authors after they are issued.

Publishing, Collaboration, and Access Control

Since SEAS is meant to be used by a community of analysts, it must address issues of privacy. When an analyst is in the early stages of argument development, they might not want their work to be accessible by others. During development, they might want certain individuals or groups to aid the process by reviewing or contributing to the process. Even when an argument is complete, they will want to control who it is that will be allowed to see the results. Further, when an argument is used as evidence in support of another argument, then that argument serving as evidence must be guaranteed to persist in its current state to guarantee the integrity of the argument it supports.

To address these issues of access control and referencing, SEAS incorporates the concept of publishing. The concept is summarized in the following table. There are four key attributes that are related to two states of publishing: unpublished and published. The first, unique ID, is actually common to both, but it is so fundamental that we wanted to list it explicitly. All arguments, templates, and collections, no matter their publishing state, have an ID through which they can be uniquely identified. If there are multiple versions of an argument, template, or collection, each version has its own ID. Published arguments, templates, and collections are guaranteed to persist, that is, they will continue to exist; no such guarantee is made for unpublished arguments, templates, or collections. As a consequence, only published objects can be reliably cited, much as only published works are (typically) included in bibliographies so that the reader has a real opportunity to obtain and read them. Unpublished arguments, templates, and collections are distinguished from published ones in that they are unstable, i.e., likely to change in content. Published arguments, templates, and collections will not change. Finally, unpublished objects are distinguished from published ones in that their authors are given write access, while published ones restrict access by both their authors and audiences to reading.

All arguments, templates, and collections originate as unpublished works with a single author. While they remain unpublished, the author can add additional authors. Only the authors have access and they are free to make modifications as they see fit. Should more than one author attempt to change the same information at the same time in an unpublished argument or template, when the second author attempts to save their changes, they will be presented with a dialog that displays the version saved by the other author and their version, with an option to choose either one or to develop a new version by cutting and pasting between the two. When SEAS detects two authors simultaneously browsing the same unpublished argument or template, it warns the authors by displaying the collaboration warning symbol (see below). Once their draft argument, template, or collection is ready for limited external review, they might add people or organizations to the audience. It is dangerous for this audience to cite this unpublished work since it might go away or be substantially changed in the future. When an author decides that an argument, template, or collection is ready for external release, they publish it, giving read access to a specified audience in addition to the authors. However, an argument can only be published if its underlying template is published. Once published, arguments, templates, and collections can be reliably cited and referenced in other arguments and collections since they are guaranteed to persist unchanged.
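The access-control rules just described can be sketched as a small model: authors may modify unpublished objects, publishing freezes writes, and an argument cannot be published before its template. The class names and methods are assumptions for illustration, not SEAS code.

```python
# Sketch of the publishing rules: unpublished objects are writable by
# authors only; published objects are read-only for everyone; an argument
# can only be published once its underlying template is published.
class Publishable:
    def __init__(self, authors):
        self.authors, self.audience, self.published = set(authors), set(), False

    def modify(self, user):
        if self.published or user not in self.authors:
            raise PermissionError("read-only")

class Argument(Publishable):
    def __init__(self, authors, template):
        super().__init__(authors)
        self.template = template

    def publish(self, audience):
        if not self.template.published:
            raise ValueError("underlying template must be published first")
        self.published, self.audience = True, set(audience)

template = Publishable({"alice"})
arg = Argument({"alice"}, template)
template.published = True      # the template is published first...
arg.publish({"bob"})           # ...then the argument may be published
```

After `publish`, even the author's own `modify` calls fail, matching the rule that published objects restrict both authors and audience to reading.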

The following table summarizes the meaning of those symbols that are used by SEAS to communicate publishing, collaboration, and access control information.

The unpublished symbol indicates that the associated argument or template has not been published.
The unpublished template symbol indicates that the associated argument and its underlying template have not been published.
The read only symbol indicates that the associated argument or template cannot be edited (i.e., it is published or the user is not an author).
The collaboration warning symbol indicates that another author is currently accessing the same unpublished argument or template. Should more than one author attempt to change the same information at the same time, the last one finished will enter a dialog to resolve any conflicting changes. Clicking on this symbol will reveal the identities of the collaborators.

See the Help section on Publication Information for how to view and edit this information.

Automated Fusion Methods and Inference Methods

Automated fusion methods are used to automatically derive an answer to a question given the answers to supporting questions or evidence. They are used to fuse answers from supporting questions to answer a derivative question. They are used to fuse evidence to answer a primitive question. An inference method assigns a fusion method to every derivative question in a template.

Automated fusion methods are defined by thinking of the possible answers as fitting along a linear scale, with green corresponding to the low end and red to the high end. Therefore, when combining a green answer with a red answer using the Maximum fusion method, the result is red; doing the same using the Minimum fusion method results in green. As such, the Maximum fusion method should be used in those cases when any red answer among those being combined should result in red. On the other hand, the Minimum fusion method should be used when all answers among those being combined must be red before red is the result. If the red end of the scale corresponds to problematic conditions and the green end to desirable conditions, then the use of the Maximum fusion method is performing worst-case analysis and Minimum best-case analysis. Of course, if the red end corresponds to favorable conditions and green to problematic conditions, then the situation is reversed, making Maximum best-case and Minimum worst-case. For example, if the question being addressed is the desirability of a given vacation destination, and it is to be based on whether the weather is predominantly warm and dry, assuming that the red end of the scale represents favorable responses, then Minimum should be used if a favorable destination must be both warm and dry and Maximum should be used if a destination is favorable if it is either warm or dry.
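Treating the five lights as positions along the scale, Maximum and Minimum fusion reduce to the familiar max and min operations. The vacation example above can then be worked directly; the encoding of lights as integers is an assumption of this sketch.

```python
# Lights as positions 0 (green) .. 4 (red) on the linear scale.
LIGHTS = ["green", "yellow-green", "yellow", "orange", "red"]

def maximum_fusion(answers):
    """Any red answer yields red (worst-case when red = problematic)."""
    return max(answers)

def minimum_fusion(answers):
    """All answers must be red before red results (best-case when red = problematic)."""
    return min(answers)

# Vacation example: red = favorable; warm is emphatic, dry only middling.
warm, dry = LIGHTS.index("red"), LIGHTS.index("yellow")
print(LIGHTS[minimum_fusion([warm, dry])])  # must be warm AND dry -> yellow
print(LIGHTS[maximum_fusion([warm, dry])])  # warm OR dry suffices  -> red
```

Choosing between the two is thus choosing between conjunctive ("all must hold") and disjunctive ("any suffices") readings of the supporting questions.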

Within SEAS, the answer to any question is not limited to a single choice/light. If the available information does not allow one to definitively select a single choice/light, multiple adjacent ones can be selected. For example, if the available information only allows one to eliminate the red choice, leaving all of the others as possible correct answers, then all of the others should constitute the answer. Similarly, if the available information clearly indicates that the red choice might be correct, but does not completely eliminate the possibility that the orange choice is correct, but clearly does eliminate the yellow through green choices, then the answer should include both the orange and red choices. Following this logic, if all choices are selected as the answer, then all of the choices remain possible, doing nothing more than reaffirming the initial condition that the choices span the range of possible answers. Thus, in this case, the information on which the answer was based has conveyed no new information regarding the answer to the question. Within SEAS, this condition, when all choices remain possible, is sometimes graphically represented with all lights on and at other times with all lights off. However, in no way does this affect how the fusion methods perform. Another fusion method is available in High SEAS that Bounds the answers given, spanning from the lowest to the highest and including all of the lights in between. The following table illustrates the application of Maximum, Minimum, and Bound to a variety of combinations of two answers.
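Representing an answer as a set of adjacent light positions, the Bound fusion method just described can be sketched as follows. The set representation and function name are assumptions made for this illustration.

```python
# Sketch of Bound fusion over multi-light answers: the result spans from
# the lowest light selected in any answer to the highest, inclusive.
def bound_fusion(answers):
    """answers: list of sets of light positions (0=green .. 4=red)."""
    lo = min(min(a) for a in answers)
    hi = max(max(a) for a in answers)
    return set(range(lo, hi + 1))

# Combining {orange, red} with {green} spans the whole scale:
print(sorted(bound_fusion([{3, 4}, {0}])))   # -> [0, 1, 2, 3, 4]
```

Note that an all-lights result is exactly the "no new information" condition described above: every choice remains possible.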

[Table: Maximum, Minimum, and Bound applied to combinations of two answers]

Two other automated fusion methods available in SEAS are Average and Consensus. Average is the arithmetic average that you might expect. Using it to combine a green answer with a red answer results in yellow; combining yellow with red results in orange; combining yellow-green with red results in both yellow and orange since there is no single light that is halfway between them. Consensus is similar but gives more emphasis to emphatic answers. Like average, combining green with red results in yellow, as does combining yellow-green with orange. But unlike average, combining yellow with red results in red and combining yellow-green with red results in red, since red is the more emphatic answer. Consider asking two people a question: one says the answer is definitely yes while the other says they are not certain if the answer is yes or no. Under some circumstances, it might be better to go with the emphatic yes. That is the kind of reasoning that Consensus attempts to mimic. Another way for an answer to be less emphatic is to include more choices. Here too, consensus will favor the more emphatic over the less emphatic. One interesting consequence of this is that the consensus of any answer with another where all choices remain possible results in the former answer, i.e., combining an answer with the equivalent of a non-answer results in the answer. The following table highlights some of the similarities and differences in applying Average and Consensus. Returning to our question concerning the desirability of a vacation destination, Average or Consensus should be used if being warm partially compensates for being wet and being dry partially compensates for being cold. Consensus should be used rather than Average if an emphatic answer should predominate.
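The Average behavior described above, including the two-light result when the mean falls exactly halfway between lights, can be sketched as follows. This reproduces only the worked examples given in the text; Consensus is not sketched here, since its emphasis-weighting rule is not fully specified by this description, and the halfway rule below is an assumption consistent with the yellow-green + red example.

```python
# Sketch of Average fusion over light positions (0=green .. 4=red).
# A mean exactly halfway between two lights yields both adjacent lights.
def average_fusion(answers):
    mean = sum(answers) / len(answers)
    if mean == int(mean):
        return {int(mean)}                    # lands on a single light
    if mean * 2 == int(mean * 2):             # exactly halfway between lights
        return {int(mean), int(mean) + 1}
    return {round(mean)}                      # otherwise, nearest light

print(average_fusion([0, 4]))  # green + red        -> {2} (yellow)
print(average_fusion([2, 4]))  # yellow + red       -> {3} (orange)
print(average_fusion([1, 4]))  # yellow-green + red -> {2, 3} (yellow, orange)
```

All three printed results match the worked examples in the paragraph above.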

[Table: Average and Consensus applied to combinations of two answers]

In some cases, one wants to weight the answers to be fused differently. Typically this is because the credibility of the sources of the answers differs; one wants to lean more heavily toward the answer given by the more reliable source. The Average Weighted fusion method performs an arithmetic average after having discounted the answers according to their associated weights; those not discounted are given full weight while those that are discounted are given proportionally less weight. This will tend to make the result drift toward the answer that is more heavily weighted. The Consensus Weighted fusion method treats those answers given less weight as less emphatic. As the discounting associated with a given answer increases, it becomes closer and closer to being equivalent to no answer at all, and has less and less impact on the result. Within SEAS, weights/discounts are graphically depicted as circular symbols with varying degrees of blue filling them. The weight is proportional to the blue area of the circle while the discount is proportional to the white area. A filled circle represents full impact while an empty circle represents no impact. The following table combines the same answers as in the previous table, but using weights. While this table is illuminating in itself, it is even more illuminating when compared with the previous table. These weighted fusion methods should be used for the vacation destination question, rather than their unweighted counterparts, if either being warm or dry is more important than the other. A greater amount of weight should be given to the more important aspect.
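A weighted arithmetic mean captures the drift toward the more heavily weighted answer that Average Weighted produces. The weight scale (1.0 = filled circle, full impact) and rounding to the nearest light are assumptions of this sketch, not the exact SEAS computation.

```python
# Sketch of Average Weighted fusion: answers are light positions
# (0=green .. 4=red); weights in (0, 1], where 1.0 is a filled circle.
def average_weighted(answers, weights):
    mean = sum(a * w for a, w in zip(answers, weights)) / sum(weights)
    return round(mean)                    # nearest single light

print(average_weighted([0, 4], [1.0, 1.0]))   # equal weight -> 2 (yellow)
print(average_weighted([0, 4], [0.25, 1.0]))  # red weighted more -> 3 (orange)
```

Discounting the green answer pulls the result toward red, which is the drift described above.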

[Table: the same sample answer pairs combined with Average Weighted and Consensus Weighted, showing how weights shift the results]
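A weighted average of this kind can be sketched as follows. Again this is a toy model under the same assumed 1-to-5 scale, not SEAS's actual algorithm; a weight of 1.0 stands for a fully blue circle (no discount).

```python
import math

# Toy weighted-average fusion (illustrative assumptions, not the
# actual SEAS algorithm). A weight of 1.0 means no discount.
SCALE = ["green", "yellow-green", "yellow", "orange", "red"]

def average_weighted(answers):
    """answers: list of (color, weight) pairs, weight in (0, 1]."""
    total_w = sum(w for _, w in answers)
    v = sum((SCALE.index(c) + 1) * w for c, w in answers) / total_w
    lo, hi = math.floor(v), math.ceil(v)
    if v - lo < hi - v:
        return {SCALE[lo - 1]}
    if hi - v < v - lo:
        return {SCALE[hi - 1]}
    return {SCALE[lo - 1], SCALE[hi - 1]}

# Equal weights reproduce plain Average; discounting the red answer
# drifts the result toward the fully weighted green answer.
print(average_weighted([("green", 1.0), ("red", 1.0)]))  # {'yellow'}
print(average_weighted([("green", 1.0), ("red", 0.2)]))  # {'yellow-green'}
```

The second call shows the drift described above: as red's discount grows, the combined answer moves toward green.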

Joining Arguments

If multiple analysts have each developed an independent assessment of a given situation, each captured in a distinct argument based upon a common template, then placing these arguments in a common (alternatives) collection and viewing them side by side in the graphical collection viewer makes differences and similarities in these assessments easy to spot. At times, however, one wants to develop an argument that merges these disparate assessments into a common overall assessment. This is accomplished by joining the arguments to produce a new argument in which the answer to each question is supported by one body of evidence from each disparate opinion. Each such body of evidence captures how one analyst answered the question, with the rationale they gave as the relevance. The supporting evidence for each question is combined to arrive at an overall answer, using the fusion method given when the join was initiated. When weighted fusion methods are used, the weights might be assigned to each argument based upon the credibility attributed to each source. Thus, examining any question in the joint argument reveals how each analyst answered it, what weight was attributed to each analyst's opinion (if weights are being used), and the overall answer arrived at by combining the independent opinions.
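The join operation described above can be sketched in outline. The structure below is hypothetical (question identifiers and the simple numeric fusion function are invented for illustration); the point is only that joining applies one fusion method, question by question, across all the analysts' arguments.

```python
# Hypothetical sketch of joining arguments: each argument maps
# question ids to answers; joining fuses the answers per question.
def join(arguments, fuse):
    """arguments: list of dicts mapping question id -> answer;
    fuse: function combining a list of answers into one."""
    questions = arguments[0].keys()
    return {q: fuse([arg[q] for arg in arguments]) for q in questions}

# Two analysts' answers on an assumed 1 (green) .. 5 (red) scale.
analyst_a = {"1.1.1": 2, "1.1.2": 5}
analyst_b = {"1.1.1": 4, "1.1.2": 5}

# Fuse with a plain arithmetic average, standing in for whichever
# fusion method was chosen when the join was initiated.
joint = join([analyst_a, analyst_b], fuse=lambda xs: sum(xs) / len(xs))
print(joint)  # {'1.1.1': 3.0, '1.1.2': 5.0}
```

Examining any question in `joint` shows the per-analyst answers (the inputs) alongside the fused overall answer, mirroring what the joint argument viewer reveals.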

Exporting and Importing

SEAS includes an exporting/importing facility. The exporting facility can be invoked from the Hierarchical Viewer/Editor or the Collection Viewer/Editor. There are two options for the form of the exported material: HTML or AML (the Argument Markup Language).

Exporting to HTML creates an HTML page that resembles the page from which the export was invoked. However, this page excludes many of the features that make the original page dynamic (e.g., the buttons). Once the page is produced, the user can download it to their client machine and import it into MS Word or other HTML-savvy applications. This is meant to provide an easy means for SEAS screens to be incorporated into documents being produced by the user. There is no means within SEAS for importing HTML exports.

Exporting to AML results in an XML file being downloaded to the user's machine. Such exports serve multiple purposes. One purpose is to provide a means by which SEAS objects can be saved independently of the SEAS server. Once saved, they can be imported to establish those objects on different SEAS servers, or to reestablish them on the originating server as they were before later modifications. When one exports an object in AML using SEAS, the export includes all of the objects on which that object depends. For example, if a multi-dimensional argument is exported, the resulting AML file will include representations for its uni-dimensional arguments, their exhibits, evidence, and discovery tools, the underlying templates for all of the exported arguments with their discovery tools, and any memos on any of those objects, among other related objects. The idea is to include everything required to reestablish the exported object on import.
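Gathering "everything that object depends on" is a transitive-closure traversal over the dependency graph. The sketch below illustrates the idea with invented object names; SEAS's internal representation is of course not shown in this help text.

```python
# Sketch of dependency-closure collection for an AML-style export.
# Object ids and the deps graph are hypothetical examples.
def export_closure(root, deps):
    """deps maps an object id to the ids it depends on; return the
    set of everything needed to reestablish `root` on import."""
    seen, stack = set(), [root]
    while stack:
        obj = stack.pop()
        if obj not in seen:
            seen.add(obj)
            stack.extend(deps.get(obj, []))
    return seen

deps = {
    "multi-arg": ["uni-arg-1", "uni-arg-2"],
    "uni-arg-1": ["template", "evidence-1"],
    "uni-arg-2": ["template"],
    "template": [],
}
print(sorted(export_closure("multi-arg", deps)))
# ['evidence-1', 'multi-arg', 'template', 'uni-arg-1', 'uni-arg-2']
```

Exporting the multi-dimensional argument pulls in its uni-dimensional arguments, their shared template, and the attached evidence, matching the behavior described above.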

Another purpose for an AML export is to provide a self-contained description of a SEAS object that can be viewed and understood without SEAS. With an appropriate style file, the exported object can be viewed in a browser on a machine that does not have access to SEAS.

Another intended use for an AML export is to support the interchange of objects among other structured argumentation tools that support AML. Although the detailed information used by any two structured argumentation tools will likely differ, AML is intended to capture the high-level commonalities among those tools, allowing the exchange of rudimentary structures. It also allows arguments produced by different tools to be viewed in a common way through AML style sheets.

SEAS and High SEAS 7.1 - Patent Pending and Unpublished Copyright © 1998-2007, SRI International