Collection Mémoires et thèses électroniques

Chapter 4 A Taxonomy of the Proposed Approaches


In this chapter, we present our taxonomy of the proposed approaches in the domain of dialogue modeling and agent communication. We distinguish three main approaches: the mental approach, the social approach and the argumentative approach. The mental approach is based on the agents’ private mental states like beliefs, desires, and intentions. The social approach highlights the importance of the public and social aspect of agent conversations. The argumentative approach uses the dialectical models discussed by the philosophers of argumentation.

Communication between autonomous agents is widely recognized as a challenging research area in artificial intelligence and more particularly in the multi-agent systems community. Agent communication is at the intersection of several disciplines: philosophy of language, social psychology, artificial intelligence, logics, mathematics, etc. In a multi-agent system, agents may communicate in order to negotiate, to solve conflicts of interest, to cooperate, or simply to exchange information. All these communication requirements cannot be fulfilled by simply exchanging messages. Agents must be able to take part in coherent conversations which result from the performance of coordinated speech acts (Searle, 1969).

Over the years, important contributions have been made in modeling communication between software agents. Three main approaches have been proposed and applied to agent interactions and to agent communication languages (ACLs): the mental approach, the social approach, and the argumentative approach. Besides these approaches, some researchers proposed combined methods, called intentional-conventional approaches (Maudet, 2001). All these approaches originate from the research on the formalization of rational agents initiated by the pioneering work of Moore (1980) and Morgenstern (1986, 1987) in which knowledge and actions are considered.

In this chapter, we present and discuss these approaches on which our pragmatic approach presented in Chapter 5 is based. In Section 4.2, we present the mental approach. We summarize the model proposed by Cohen, Allen and Perrault, the rational interaction theory and other work. In Section 4.3, we present the social commitment approach. We discuss Singh et al.’s work, Colombetti et al.’s work and Flores and Kremer’s work. In Section 4.4, we discuss the argumentative approach. We present the dialectical models and the use of argumentation for dialogue modeling. In Section 4.5, we briefly present some intentional-conventional approaches. In Section 4.6, we conclude the chapter by comparing the different approaches.

In the mental approach, so-called agent’s mental structures (e.g. beliefs, desires and intentions: BDI) are used to model conversations and to define a formal semantics of speech acts. The objective of the BDI approach is to describe agents’ rational behavior.

Beliefs are simply an agent's information at a given moment of time, i.e. what this agent believes to be true regarding the state of the world or other agents' knowledge. Desires represent the states of the world that an agent wishes for, without further constraint: it is entirely possible to have unrealizable or contradictory desires. The process by which an agent selects, among these desires, those to pursue is called deliberation. In order to select these desires, an agent can evaluate the feasibility of each desire. Other criteria, like preferences between desires, can also be considered (Hulstijn, 2000b). Much philosophical work has been devoted to defining the concept of intention. For example, Bratman (1987) distinguishes between doing something intentionally and intending to do something. Searle (1983) speaks of intentions directed towards the future and intentions in action. These two concepts are dependent, since intentions directed towards the future are generally related to the performance of intentional actions. The link between the concepts of goal and intention has been discussed by many researchers. Some authors, like Grosz and Kraus (1996), distinguish the notion of intending that (a proposition hold), close to the notion of goal, from the notion of intending to (perform an action). The difference between these two concepts is that the first does not necessarily involve an action performed by the agent itself.

In this section we summarize two main proposals in this approach: the plan-based models of Cohen, Perrault and Allen and the rational interaction theory of Cohen and Levesque.

Plan-based models of dialogue can be traced back to three classic papers: Cohen and Perrault (1979), Perrault and Allen (1980), and Allen and Perrault (1980). These models adopt the hypothesis that agents participating in a conversation behave rationally, leading them to build and execute plans in order to achieve certain goals. The production of an utterance by a speaker is related to the achievement of a communicative sub-goal. Communicative actions appear in the plans formulated by the conversational agents on the same level as physical actions.

The notion of plan

Planning is the construction of a plan, from a model of the world, while respecting certain criteria. A plan is an organized set of actions whose performance enables agents to achieve a goal. A plan allows agents to anticipate a succession of actions in order to achieve this goal, i.e. a certain final state of the world. To introduce this notion, we consider the following example in which an agent A asks another agent B a question, to which the latter then responds. The presentation is taken from (Allen and Perrault, 1980):

A has a goal to acquire certain information. This causes him to create a plan that involves asking B a question. B will hopefully possess the sought information and answer the question. A then executes the plan, and thereby asks B the question. B receives the question and attempts to infer A's plan. In the plan, there might be goals that A cannot achieve without assistance. B can accept some of these goals as his own goals and create a plan to achieve them. B then executes his plan and thereby responds to A's question.

Plan inference is the process through which an agent A attempts to infer another agent B’s plan, based on observed actions performed by B . Usually, this process starts with an incomplete plan, containing only a single observed action or an expected goal.

These two activities are modeled using the agents' cognitive components. To construct or recognize a plan, knowledge about the state of the world is needed in order to modify this world and reach the final state corresponding to the fixed goal. Agents also need knowledge about the means of achieving this goal. The participants also have beliefs about the world, as well as knowledge and beliefs about the other participants. Finally, they have intentions to do an action and intentions to be in a certain situation.

Mental attitudes are omnipresent in plan-based models. The formalization of such attitudes is inspired by Hintikka's work (Hintikka, 1963). Allen and Perrault developed a modal logic in which the concepts of belief and knowledge are represented by the modal operators BEL and KNOW. This epistemic logic allows an agent to reason about what it knows and to deal with information that may contradict its knowledge. There is no logical relation between what an agent A believes about another agent B's beliefs and agent A's own beliefs. For example, it is possible that agent A believes that a proposition p is true and believes that agent B does not believe that p is true.

This epistemic logic is formalized as follows:

The formula BEL(A, p) is read: "agent A believes that the proposition p is true". In modal logic, according to the possible-worlds semantics, this means that if there is a world M in which the proposition BEL(A, p) is true, then p is true in all worlds accessible from M for agent A via a belief accessibility relation. Worlds can be considered as a discrete sequence of events stretching infinitely into the future (Cohen and Levesque, 1990). They can also be viewed as Kripke structures for a CTL-like logic (Rao and Georgeff, 1995) (Wooldridge, 2000). Intuitively, the worlds accessible via a belief accessibility relation are the worlds that the agent believes possible. The formula KNOW(A, p) is true if BEL(A, p) is true and if p is indeed true. The authors assumed that the BEL operator satisfies the following axioms:
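This possible-worlds reading of BEL and KNOW can be sketched as a small program. The worlds, accessibility relation, and valuation below are illustrative assumptions, not part of Allen and Perrault's formalism:

```python
# A minimal Kripke-style sketch of the BEL and KNOW operators.

def bel(agent, p, world, access, val):
    """BEL(agent, p): p holds in every world accessible to `agent` from `world`."""
    return all(p in val[w] for w in access[agent][world])

def know(agent, p, world, access, val):
    """KNOW(agent, p): BEL(agent, p) holds and p is actually true in `world`."""
    return bel(agent, p, world, access, val) and p in val[world]

# Agent A believes p in w0 (p holds in both accessible worlds w1 and w2),
# but p is false in w0 itself: A believes p without knowing it.
access = {"A": {"w0": ["w1", "w2"]}}
val = {"w0": set(), "w1": {"p"}, "w2": {"p"}}
print(bel("A", "p", "w0", access, val))   # True
print(know("A", "p", "w0", access, val))  # False
```

The example also illustrates the remark above: an agent's beliefs need not agree with what is actually true in its own world.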

To formalize speech acts as actions, the authors use the concept of action schema. An action schema is a rule described by a name, a set of parameters, and some formulae which are its pre-conditions, effects, and body. Preconditions are conditions that must be true if the action's execution is to succeed. Effects are conditions that become true after the action is executed. The body is a set of partially ordered goal states that must be achieved after performing the action. An action is intentional when its author wants to perform it. A speech act is an intentional action. The pre-conditions of such an action contain the formula WANT(A, Action). Figure 4.1 illustrates these notions for the INFORM speech act. The definition of INFORM is based on Grice's idea (Grice, 1957) that the speaker informs the hearer of something merely by causing the hearer to believe that the speaker wants him to know something. This is analogous to an operator in planning.
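As an illustration, the INFORM schema can be rendered in a STRIPS-like style. The predicate spellings (know, want, bel) and the dictionary layout are assumptions of this sketch, not Allen and Perrault's syntax:

```python
# An illustrative action schema for INFORM with preconditions, effects and body.

def make_inform(speaker, hearer, p):
    return {
        "name": f"INFORM({speaker},{hearer},{p})",
        "preconditions": {f"know({speaker},{p})", f"want({speaker},inform)"},
        # Gricean effect: the hearer believes the speaker wants him to know p.
        "effects": {f"bel({hearer},want({speaker},know({hearer},{p})))"},
        "body": [f"know({hearer},{p})"],  # goal states to achieve afterwards
    }

def applicable(action, state):
    """The action's execution can succeed only if its preconditions hold."""
    return action["preconditions"] <= state

def apply_action(action, state):
    return state | action["effects"]

state = {"know(A,p)", "want(A,inform)"}
act = make_inform("A", "B", "p")
print(applicable(act, state))  # True
state = apply_action(act, state)
print("bel(B,want(A,know(B,p)))" in state)  # True
```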

Allen and Perrault identified three types of inference rules: the ones concerning actions, the ones concerning knowledge, and the ones concerning planning by others. Rules concerning actions are rules that support plan recognition. Four inference rules concerning actions are defined as follows:

Precondition-Action Rule: If P is a precondition of an action ACT, and an agent S believes that another agent A wants to achieve P, then we can probably infer that S believes A wants ACT to be performed.

Body-Action Rule: If B is part of the body of ACT, and if S believes that A wants B to be performed, it is likely that S believes that A may want to perform ACT.

Action-Effect Rule: If E is an effect of an action ACT, and S believes A wants to perform ACT, then it is plausible that S believes that A wants the effect of that action.

Want-Action Rule: If S believes that A wants another agent N to want some action ACT to be performed, then S may believe that A wants ACT to be performed.
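Three of these rules (leaving aside Want-Action, which involves a third agent) can be sketched over a toy action library. The BOARD/BUY schemas are invented for illustration:

```python
# Toy plan-inference steps over invented action schemas.

ACTIONS = {
    "BOARD(train)": {"pre": {"AT(station)"},
                     "body": {"BUY(ticket)"},
                     "effects": {"ON(train)"}},
}

def precondition_action(wanted_props):
    # Precondition-Action: wanting precondition P of ACT suggests wanting ACT.
    return {act for act, s in ACTIONS.items() if wanted_props & s["pre"]}

def body_action(wanted_acts):
    # Body-Action: wanting B, part of ACT's body, suggests wanting ACT.
    return {act for act, s in ACTIONS.items() if wanted_acts & s["body"]}

def action_effect(wanted_acts):
    # Action-Effect: wanting ACT suggests wanting ACT's effects.
    out = set()
    for a in wanted_acts:
        out |= ACTIONS[a]["effects"]
    return out

# S observes that A wants AT(station): plausibly A wants to BOARD(train),
# and hence (by Action-Effect) to be ON(train).
acts = precondition_action({"AT(station)"})
print(acts)                          # {'BOARD(train)'}
print(action_effect(acts))          # {'ON(train)'}
print(body_action({"BUY(ticket)"}))  # {'BOARD(train)'}
```

Note that, as in the original rules, each step is only plausible inference, not deduction; a real recognizer would weigh competing hypotheses.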

Rules concerning knowledge define relations between goals of acquiring knowledge and the goals and actions that use that knowledge. Rules concerning planning by others are construction rules that can be seen as the inverse of the plan inference rules. The plan construction rules are: Action-Precondition Rule, Action-Body Rule, Effect-Action Rule, Know Rule, Nested-Planning Rule, and Recognizing Nested-Planning Rule. These rules are analogous to the inference rules presented above.

Several researchers explored the idea of using plans to model agent interactions and suggested different types of plans: domain plans and discourse plans (Litman and Allen, 1990), individual plans (Pollack, 1990), and shared plans (Grosz and Sidner, 1990). However, the fact that interaction is a dynamic activity and is dependent on the action context makes it difficult to model it using a planning approach. In particular, the plan recognition that is necessary to deduce other agents’ intentions is extremely complex.

Cohen and Levesque (1990) proposed an action theory upon which a rational interaction theory has been built. This theory is based on a modal logic whose semantics is given in terms of possible worlds. Action representation is based on dynamic logic. The corresponding language contains the usual connectives of a first-order language, operators for the propositional attitudes, as well as action expressions. These elements are:

(BEL A p), (GOAL A p): p follows from A's beliefs or goals.

(BMB A B p): A believes that p is a mutual belief with B.

(AGT A a): A is the only agent of action a.

a ≤ b: action a is an initial subsequence of b. Action variables range over sequences of primitive actions.

(HAPPENS a), (DONE a): action a will happen next; action a has just happened.

a;b: action sequence.

a|b: nondeterministic choice.

p?: test action.

a*: repetition.

p?;a: action a occurring when p holds.

a;p?: action a occurs, after which p holds.

From these elements, the following abbreviations can be adopted:

To define the notion of intention, the authors use the notion of persistent goal (P-GOAL), which is an internal and individual commitment of the agent. Formally:

This definition indicates that the agent A believes that p is currently false, chooses that it will be true later, and knows that before abandoning this choice, it must either believe it is true, believe it never will be true, or believe that q , an escape clause (used to model sub-goals, reasons, etc.) is false.
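This reading corresponds to Cohen and Levesque's standard definition, which can be reconstructed approximately as follows, writing BEFORE(p, q) for "p comes true before q":

```latex
\mathrm{P\mbox{-}GOAL}(A,\, p,\, q) \;\equiv\;
   \mathrm{BEL}(A, \neg p)
   \;\wedge\; \mathrm{GOAL}(A, \mathrm{LATER}\; p)
   \;\wedge\; \mathrm{KNOW}\big(A,\; \mathrm{BEFORE}\big(
        [\,\mathrm{BEL}(A, p) \vee \mathrm{BEL}(A, \Box\neg p) \vee \mathrm{BEL}(A, \neg q)\,],\;
        \neg\,\mathrm{GOAL}(A, \mathrm{LATER}\; p)\big)\big)
```

Each conjunct matches one clause of the description: current disbelief in p, the choice that p hold later, and the condition that the goal may only be dropped once p is believed achieved, believed impossible, or the escape clause q is believed false.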

In this theory, the intention to do an action a is a kind of persistent goal by which an agent commits to doing an action in a particular mental state. Formally:

A fundamental notion in Cohen and Levesque's theory is that of an ATTEMPT. This notion, discussed by Searle (1969), is used to define the illocutionary acts. An attempt to achieve ψ via Φ by performing an action a is defined as follows:

This definition indicates that, before performing a , the agent A chooses that ψ should eventually become true, and intends that a should produce Φ relative to that choice. So, ψ represents some ultimate goal that may or may not be achieved by the attempt, while Φ represents what it takes to make an honest effort. Using this notion, the authors defined the semantics of some illocutionary acts. Figure 4.2 illustrates the case of the INFORM act.

The illocutionary act of informing is defined as an attempt by which the speaker (agent A ) is committed (in the sense of persistent goal) to the addressee’s knowing that A knows p . In other words, agent A is committed to the addressee’s knowing in which mental state A is. Although A is committed to getting the addressee to believe something about its goals, what A hopes to achieve is for the addressee to come to know p . To achieve this goal, it is necessary that the addressee B shares with A the mutual belief that B knows that A knows that p is true.

The fundamental idea of this approach is that illocutionary acts can only be derived from the analysis of the agents' mental states. In addition, in Cohen and Levesque's framework an agent intends to do an action if it has the persistent goal to have done the action. This reduction of intentions to do actions to goals is criticized by Meyer et al. (1999): although intentions to do actions should be related to goals, this relation should express that doing the action helps bring about some goal, not that doing the action is in itself a goal.

According to the rational interaction theory, cooperation and sincerity are the two characteristics on which the agents’ rational behavior rests. Cooperation can take the form of very strict constraints, like the adoption of goals. An agent is cooperative when it adopts the goal of its addressee. Thus, recognizing the speaker’s underlying goals, as precisely as possible, is necessary to offer cooperative answers to it. In addition, the semantics of speech acts is conditioned by the fact that the speaker is sincere and that the addressee believes that the speaker is sincere. For example, in the INFORM act, the speaker is assumed to be sincere when it is committed to the addressee’s knowing its mental state.

Shapiro, Lespérance and Levesque (1998) proposed a language for specifying and verifying communicating multi-agent systems called the Cognitive Agent Specification Language (CASL). Extended by Shapiro and Lespérance (2001) and Shapiro et al. (2002), CASL models agents as entities with mental states (knowledge and goals). It is based on a declarative action theory defined in the situation calculus (McCarthy and Hayes, 1969) combined with the programming language ConGolog (De Giacomo et al., 2000). CASL models knowledge using a possible-worlds account adapted to the situation calculus. A situation represents a snapshot of the domain. K(a, s′, s) denotes that in situation s, agent a thinks that it could be in situation s′. φ[s] means that φ is true in situation s. Using K, the knowledge of an agent is defined as follows:

Know(a, φ, s) =def ∀s′ (K(a, s′, s) ⊃ φ[s′])

An agent a knows a formula φ if φ is true in every situation that is K-accessible for agent a.

In CASL, three variants of the inform communicative action are supported (Lespérance, 2002):

inform(a, b, φ): agent a informs agent b that φ currently holds.

informWhether(a, b, φ): agent a informs agent b about the current truth value of φ.

informRef(a, b, θ): agent a informs agent b of who/what θ is.

The preconditions of these three actions are expressed using the Know predicate. For example, an agent a can inform an agent b that φ iff a knows that φ currently holds and does not believe that b already knows the truth value of φ.
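This precondition can be sketched over K-accessible situations. The helper names (kwhether, can_inform) and the relational encoding are assumptions of this example, not CASL syntax:

```python
# A sketch of a CASL-style inform precondition over K-accessible situations.

def know(agent, phi, s, K, holds):
    # Know(a, phi, s): phi holds in every situation K-accessible for a from s.
    return all(holds(phi, s2) for s2 in K[agent][s])

def kwhether(agent, phi, s, K, holds):
    # Agent knows whether phi: it knows phi or it knows not-phi.
    return know(agent, phi, s, K, holds) or \
           all(not holds(phi, s2) for s2 in K[agent][s])

def can_inform(a, b, phi, s, K, holds):
    # a may inform b that phi iff a knows phi and, in some situation a
    # considers possible, b does not already know whether phi.
    return know(a, phi, s, K, holds) and \
           not all(kwhether(b, phi, s2, K, holds) for s2 in K[a][s])

holds = lambda phi, s: (phi, s) in {("p", "s0"), ("p", "s1")}
K = {"a": {"s0": ["s0", "s1"]},
     "b": {"s0": ["s0"], "s1": ["s0", "s2"]}}
print(can_inform("a", "b", "p", "s0", K, holds))  # True
```

Here a knows p (p holds in both situations a considers possible), while in situation s1 agent b cannot settle p's truth value, so the inform is permitted.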

In CASL, goals are modeled using an accessibility relation W over possible situations. The goal-accessible situations for an agent are the ones where it thinks that all its goals are satisfied. W-accessible situations may include situations that the agent thinks are impossible. Intentions are defined using the W and K relations, so that the intention-accessible situations are the W-accessible situations that are also compatible with what the agent knows, in the sense that there is a K-accessible situation in the history of the W-accessible situations. Thus, unlike goals, agents can only intend things that they believe are possible.

Using the CASL framework, Khan and Lespérance (2004) defined a model of cooperative ability, and showed how agents use their intentions to determine their next actions. In a single-agent domain, an agent's ability to achieve a goal can be defined as its knowledge of a plan that is physically and epistemically executable and whose execution achieves the goal. As the authors argue, modeling multi-agent ability is more complex because it requires taking into account the agents' knowledge about others' knowledge and intentions, as well as how they select actions, behave rationally, etc. At the communication level, the authors extended CASL by providing two intention-transfer communication actions, request and requestAct, and two cancellation actions, cancelRequest and cancelRequestAct. Finally, they defined rational plans and specified a planning framework for cooperating and communicating agents. The main idea in this framework is the role of intention and rationality in adopting a rational plan and in determining an agent's actions.

On the basis of the rational interaction theory, a broad range of ACL performatives have been defined (Huber et al., 2001) (Huber et al., 2004) (Kumar et al., 2000). However, the complexity of the definitions sometimes causes confusion when selecting the correct performative in multi-message exchanges. In addition, these definitions have evolved since the first version of the performatives was defined, but not all previously defined performatives have been updated with each change to the underlying definitions (Huber et al., 2004).

Several approaches have been defined for implementing cognitive concepts (Huhns and Singh, 1998). According to one of these approaches, the agent represents its beliefs, intentions, and desires in modular data structures and performs explicit manipulations on those structures to carry out means-ends reasoning or plan recognition. When the cognitive concepts are defined formally, the explicit manipulations can be accomplished through the application of a suitable theorem prover. Among the best-known systems using this approach is ARTIMIS (Sadek et al., 1997). ARTIMIS is an intentional system designed for human interaction and applied in a spoken-dialogue interface for information access. This system is based on a logic of beliefs and intentions built on the Cohen and Levesque framework. In ARTIMIS, agents' communicative acts are modeled as rational actions. The rational unit of the system enables agents to reason about knowledge and plans pertaining to their communicative acts.

One of the other best-known formalizations in the mental approach is Rao and Georgeff's BDI logic (Rao and Georgeff, 1991). Treating desires and intentions as primitives, the authors focus on the process of intention revision. The BDI architecture is particularly interesting because it combines three distinct components: a philosophical foundation, a software architecture and a logical formalization (van der Hoek and Wooldridge, 2003). Syntactically, BDI logic is essentially a branching-time logic enhanced with additional modal operators Bel, Des and Intend, which capture agents' beliefs, desires and intentions respectively. The semantics that Rao and Georgeff give to the BDI modalities is based on Kripke structures and possible worlds. However, rather than assuming that worlds are instantaneous states of the real world, it is assumed that worlds are themselves branching temporal structures. While this enables the authors to define some interesting properties, it complicates the semantic machinery of the logic.

Although Rao and Georgeff's BDI logic shares much in common with Cohen and Levesque's intention logic, there are two main differences between the two. The first and most obvious distinction is that Rao and Georgeff's BDI logic explicitly uses a CTL-like branching-time logic. The second is that worlds are a discrete sequence of events in Cohen and Levesque's formalism, whereas they are branching temporal structures in Rao and Georgeff's. In terms of expressivity, Rao and Georgeff's approach explores the possible interrelationships between beliefs, desires, and intentions from the perspective of semantic characterization. The most obvious relationships that can exist between an agent's belief, desire, and intention accessibility relations concern whether one relation is a subset of another. For example, if the desire accessibility relation is a subset of the intention accessibility relation for a given agent, then we obtain the interaction axiom that whenever this agent intends a proposition to be true, it also desires it to be true.
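This subset-based interaction can be checked on a toy model. The worlds and valuation below are invented for illustration:

```python
# If the desire-accessible worlds D are a subset of the intention-accessible
# worlds I, then INTEND p (p in all I-worlds) entails DES p (p in all D-worlds).

def holds_in_all(p, worlds, val):
    return all(p in val[w] for w in worlds)

val = {"w1": {"p"}, "w2": {"p"}, "w3": set()}
D = {"w1"}          # desire-accessible worlds
I = {"w1", "w2"}    # intention-accessible worlds; note D <= I

intend_p = holds_in_all("p", I, val)  # INTEND p
des_p = holds_in_all("p", D, val)     # DES p
print(intend_p, des_p)  # True True: the interaction axiom holds on this model
```

Since every D-world is an I-world, anything true throughout I is automatically true throughout D, which is exactly the interaction axiom described above.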

Another important formalization is the KARO framework (Knowledge, Actions, Results and Opportunities) proposed by van Linder et al. (1998). KARO is a formal system that may be used to specify, analyze, and reason about the behavior of rational agents. The core of KARO is a combination of epistemic and dynamic logic, and the framework comes with a sound and complete axiomatization. For instance, it is possible to model in this framework that an agent knows that some action is able to bring about some state of affairs, or that an action is feasible in the sense that the agent knows of its ability to perform it.

The main difference between the KARO framework and Cohen and Levesque's approach is that KARO explicitly employs dynamic logic, a programming logic with explicit reference to actions (programs) within the language. In addition, in Cohen and Levesque's approach an agent intends to do an action if it has the persistent goal to have done the action, whereas in the KARO framework intentions are represented by commitments consisting of actions. Because commitments have a very computational flavor, the KARO framework is more computational in nature. On the other hand, the difference between Rao and Georgeff's logic and the KARO formalism is that the former focuses on the process of intention revision rather than on the commitment acquisition which is essential to the KARO framework. Another difference is that BDI logic rests on temporal logic rather than on dynamic logic as in the KARO formalism. Consequently, desires and intentions in BDI logic suffer from the problems associated with logical omniscience. A detailed description of these problems is given in (Meyer et al., 1999).

Several researchers used the approaches of Cohen and Levesque, Rao and Georgeff or KARO to define a formal semantics of ACLs (Hindriks et al., 2000), (Labrou, 1997), (Labrou and Finin, 1998), (Sadek, 1991), (van Eijk, 2000). For example, according to the semantics proposed by Labrou and Finin (1998), the fact that an agent Ag1 informs another agent Ag2 that a proposition p is true is interpreted as " Ag1 believes that p is true and believes that Ag2 intends to find whether p is true or not". However, these semantics have been criticized for not being verifiable because it is not possible to verify whether the agents’ behaviors match their private mental states (Dignum and Greaves, 2000), (Singh, 2000).

The mental approach has the advantage of being formally defined on the basis of modal logic and a logic of action, which explains its success in the field of human-machine interfaces. It also has the advantage of offering a complete theory covering the three basic elements of communication: syntax, semantics and pragmatics, the last of which is captured by the concept of planning. However, the approach based on planning has several limitations. The concept of plan can be useful when we consider simple conversations that agents can plan in advance. But as soon as conversations become more complicated, this approach becomes inadequate. This is because dialogue is a very dynamic activity, whereas plans, although they can be revised when circumstances change, are static in nature because all communicative acts are planned in advance. In addition, plan revision is a computationally complex task, and the computational complexity of plan recognition algorithms is another limitation. The plan recognition problem is even undecidable in certain cases (Bylander, 1991).

The semantics defined in this approach rests on a multimodal logic combined with an action theory. To use a language based on this semantics, agents must be specified according to a BDI approach. This semantics is simple, declarative and unambiguous. However, it remains difficult to verify because agents' mental states are private. Moreover, this semantics assumes that agents are sincere and cooperative. Although useful in certain cases, this assumption is not valid for all dialogue types, for example negotiation and persuasion. In addition, this semantics gives only the meaning of individual performatives; no semantics is defined for conversations. Defining pre- and post-conditions of speech acts does not specify how BDI agents can take part in coherent conversations.

An alternative to the mental approach was proposed by Singh (1998) and Colombetti (2000) under the name of the social approach. In opposition to the mental approach, this approach stresses the importance of conventions and the public and social aspects of dialogue. It is based on social commitments, which are thought of as social and deontic notions. As argued by Dignum and her colleagues (Dignum et al., 2003), deontic concepts are important and fundamental elements for specifying interactions in agent societies. Social commitments are commitments towards the other members of a community (Castelfranchi, 1995). They differ from the agent's internal psychological commitments, which capture the persistence of intentions as specified in the rational interaction theory (Cohen and Levesque, 1990). A speaker is committed to a statement when he makes this statement or when he agrees upon this statement made by another participant. In fact, we do not speak here about the expression of a belief, but rather about a particular relationship between a participant and a statement. What is important here is not that an agent agrees or disagrees upon a statement, but rather the fact that the agent expresses agreement or disagreement, and acts accordingly. A social commitment is therefore a public attitude of a participant relative to a proposition.

This notion of social commitment was proposed in order to define a formal semantics that is verifiable (Singh, 2000). Thus, based on Habermas's work (Habermas, 1984), Singh proposed a three-level semantics such that each act is associated with three validity claims: the objective claim (that the communication is true), the subjective claim (that the communication is sincere) and the practical claim (that the speaker is justified in making the communication). For instance, by informing agent B that proposition p is true, agent A (called the debtor) commits towards B (called the creditor) that p holds (objective claim), that it believes that p is true (subjective claim), and, towards the whole agent group, that it has a reason to believe that p is true (practical claim). Singh's approach draws on the mental approach for the subjective claim, which is then embedded within a social attitude through the practical claim. The practical claim actually leads to a social commitment made by the speaker towards the whole agent group. The commitment-based semantics was therefore introduced in order to capture these three levels.

Technically, Singh defined the semantics of social commitments as a modal operator in Computation Tree Logic (CTL) (Emerson, 1990). This semantics is given relative to a model comprising: a set of states S; a partial order < over S indicating branching time; a relation that relates states to similar states; an interpretation which tells us which atomic propositions are true in a given state; a set of agents; and accessibility relations B, I, and C for beliefs, intentions, and commitments respectively. The set of paths derived from < is denoted P, and a designated function gives the real path originating from a state. B assigns to each agent, at each moment, the set of moments that the agent believes possible at that moment. I assigns to each agent a set of paths that the agent is interpreted as having selected or preferred. C assigns to each agent a set of paths on which the agent commits towards another agent. A commitment is denoted C(x, y, p), where x and y are two agents and p is a propositional formula. The meaning of a commitment is given by the following formula:

M, s ⊨ C(x, y, p) iff M, s, P ⊨ p for every path P that is C-accessible for x towards y at state s

where M, s ⊨ p expresses "M satisfies p at state s" and M, s, P ⊨ p expresses "M satisfies p at state s along path P".

Although it is verifiable at the objective level, this semantics remains unverifiable at the subjective level because this level is expressed in terms of mental states. In addition, the semantics given for the notion of social commitments does not reflect the deontic or the public aspect but only the fact that the content is true in the accessible states along some paths. The algebraic properties of this relation are also not specified.

Using Singh’s approach, Mallya et al. (2004) defined some constraints in order to capture some operations on commitments. These operations are: Create (that establishes the commitment), Cancel (that cancels the commitment), Release (that releases the debtor from a commitment), Assign (that replaces a commitment’s creditor by another), Delegate (that replaces the commitment’s debtor by another), and Discharge (that fulfills the commitment). An example of the defined constraints is: a commitment cannot be created more than once with a given identifier. The authors developed a representation for the temporal content capable of capturing realistic contracts. Then, they dealt with the problem of solving temporal commitments by showing how the satisfaction or breach of a commitment can be detected.
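These operations can be sketched as a small commitment store that enforces the creation constraint mentioned above. The class layout and state names are assumptions of this sketch, not Mallya et al.'s formalism:

```python
# A toy lifecycle for the six commitment operations, with the constraint that
# a commitment id may be created only once.

class CommitmentStore:
    def __init__(self):
        self.commitments = {}    # id -> dict(debtor, creditor, content, state)
        self.created_ids = set()

    def create(self, cid, debtor, creditor, content):
        if cid in self.created_ids:
            raise ValueError("a commitment id may be created only once")
        self.created_ids.add(cid)
        self.commitments[cid] = {"debtor": debtor, "creditor": creditor,
                                 "content": content, "state": "active"}

    def discharge(self, cid):              # the commitment is fulfilled
        self.commitments[cid]["state"] = "fulfilled"

    def cancel(self, cid):                 # the commitment is cancelled
        self.commitments[cid]["state"] = "cancelled"

    def release(self, cid):                # the debtor is released
        self.commitments[cid]["state"] = "released"

    def assign(self, cid, new_creditor):   # replace the creditor
        self.commitments[cid]["creditor"] = new_creditor

    def delegate(self, cid, new_debtor):   # replace the debtor
        self.commitments[cid]["debtor"] = new_debtor

store = CommitmentStore()
store.create("c1", "A", "B", "deliver goods")
store.delegate("c1", "C")                  # C becomes the debtor
store.discharge("c1")
print(store.commitments["c1"]["state"])    # fulfilled
try:
    store.create("c1", "A", "B", "deliver goods")
except ValueError:
    print("duplicate create rejected")
```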

On the basis of the social commitment approach, Yolum and Singh (2002) proposed an approach for specifying protocols in which the content of the actions is captured through agents’ commitments. In this approach, commitments are formalized using a variant of the event calculus (Kowalski, 1986). The authors used the same operations specified in (Mallya et al., 2004). Then, they defined reasoning rules to capture the evolution of commitments through the agents’ actions. Using these rules in addition to the event calculus axioms and an event calculus planner (Shanahan, 2000), agents can reason about their actions. The event calculus planner is used to demonstrate how possible transitions can be generated between an initial state and a goal state given a protocol specification. As a related work, Chopra and Singh (2004) proposed a commitment-based formalism called non-monotonic commitment machines for representing multi-agent interaction protocols. This formalism uses commitments for representing states and actions. The meaning of a state is given by the commitments that hold in this state. The meaning of an action is defined by the way it manipulates commitments. This formalism does not directly specify sequences of states and transitions. Instead, it specifies rules in nonmonotonic causal logic (Giunchiglia et al., 2003). These rules model the changes in the state of a protocol as a result of the execution of actions. The inference mechanism in this logic computes new states at runtime. The nonmonotonic causal logic is used only to reason about actions, in the sense that an action can be the cause for a formula to become true.
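The event-calculus idea behind this formalization can be illustrated with a minimal sketch: events initiate and terminate commitment fluents, and a simplified holds-at axiom reads the event history. The rule shapes below are schematic, not Yolum and Singh’s exact axioms.

```python
def holds_at(fluent, t, history, initiates, terminates):
    """A fluent holds at time t if some earlier event initiated it and no
    later event (still before t) terminated it (simplified event-calculus axiom).

    history:    list of (time, event) pairs
    initiates:  dict event -> tuple of fluents it initiates
    terminates: dict event -> tuple of fluents it terminates
    """
    holds = False
    for time, event in sorted(history):
        if time >= t:
            break
        if fluent in initiates.get(event, ()):
            holds = True
        if fluent in terminates.get(event, ()):
            holds = False
    return holds
```

For instance, a promise event may initiate the commitment fluent "C(merchant, customer, goods)" and a delivery event terminate it, so the commitment holds between the two events and not afterwards.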

Colombetti (2000) proposed a commitment-based semantics for an ACL called Albatross (Agent Language Based on a TReatment Of a Social Semantics). The definition of this ACL is based on an extended first-order modal language L. This language contains terms of different sorts, including agents, action tokens, action types, force indicators, and message bodies. Colombetti used this language to define the meaning of speech acts according to Searle and Vanderveken’s classification (1985). To express the meaning of directive speech acts, he introduced the notion of precommitment: when an agent x requests another agent y to do something, x is trying to induce y to make a commitment, and in this situation we speak about a precommitment of y. The language provides predicates expressing that an action token commits (respectively precommits) an agent to a content relative to another agent, that an action token is a token of a given action type, and that an agent has just completed the execution of an action token. This last predicate can be overloaded so that it also applies to action types.

In Albatross, a message is an expression with sub-expressions specifying a sender, a list of receivers, a force indicator (in the sense of speech act theory), and a body (i.e., a statement of a content language conveying the content of the message). If x and y are agents, f is a force indicator, and b is a message body, a term can be formed denoting the following action type: a message is sent with sender x, y as one of the receivers, force indicator f, and body b. For every message body b there is a logical statement p whose intuitive meaning is that b expresses that p holds; this assumption is considered as meta-theoretic. A second kind of term denotes the following action type: a speech act is performed with x as the speaker, y as one of the addressees, force f, and content p. The speaker of a speech act coincides with the agent that performs it. The relationship between messages and speech acts is expressed through an inference rule.

Using the language L, Colombetti defined a number of speech act types: declarations, assertives, commissives, and directives. For example, the point of an assertive act is to commit its actor to the truth of what is asserted, relative to every addressee, while the point of a directive act is to have the addressee perform some action. Assertive and directive acts are accordingly defined in terms of commitments and precommitments respectively.

Fornara and Colombetti (2002) defined an operational specification of Albatross using social commitments. The essential components of this specification are: a commitment class that can be instantiated to a set of commitment objects, a fixed set of actions that agents may perform, and a fixed set of roles that agents play during an interaction. Some basic operations on commitments are defined: Make commitment, Make precommitment, Cancel commitment, Cancel precommitment, Accept precommitment, and Reject precommitment. These operations are used to define the meaning of the basic types of communicative acts as identified by speech act theory. The authors used this specification to define some interaction protocols (Fornara and Colombetti, 2003, 2004).
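A minimal sketch of such a commitment object follows, with an illustrative life cycle in which a precommitment (state "unset") is accepted into a pending commitment and then fulfilled. The state names approximate, rather than reproduce, Fornara and Colombetti’s specification.

```python
class Commitment:
    """Commitment object with a simple, illustrative life cycle."""
    def __init__(self, debtor, creditor, content):
        self.debtor, self.creditor, self.content = debtor, creditor, content
        self.state = "unset"  # a precommitment, awaiting the debtor's answer

    def accept_precommitment(self):
        # Accept precommitment: the debtor takes the commitment on.
        assert self.state == "unset"
        self.state = "pending"

    def reject_precommitment(self):
        # Reject precommitment: the proposed commitment is dropped.
        assert self.state == "unset"
        self.state = "cancelled"

    def fulfil(self):
        # A pending commitment whose content is brought about is fulfilled.
        assert self.state == "pending"
        self.state = "fulfilled"

    def cancel(self):
        # Cancel commitment / cancel precommitment.
        self.state = "cancelled"
```

A request communicative act would then instantiate a commitment object in state "unset", and the addressee’s answer would move it to "pending" or "cancelled".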

Verdicchio and Colombetti (2003) proposed a logical model of social commitments based on CTL+- (CTL* augmented with past operators). The purpose of their framework is to define an ACL semantics based upon the concept of social commitments. This framework relies on the assumption that agent communication should be analyzed in terms of communicative acts, by means of which agents create and manipulate commitments. They extended the temporal language of CTL+- in order to represent events and actions. Events are treated as a sort of individuals called event tokens. Every event token belongs to at least one event type, and takes place (happens) at exactly one time instant. Taking the happening of events and their typing as primitives, with e an event token, α an event type, and x an agent, they defined a formula expressing the fact that an event e of type α is brought about by agent x.

Commitments and precommitments are only defined syntactically, by two predicates without any semantics: the first (respectively the second) means that an event e has brought about a commitment (respectively a precommitment) for agent x, relative to agent y, to the truth of a content φ. The action types for commitment and precommitment manipulation are defined by axioms describing their constitutive effects, that is, by describing the state of affairs that necessarily holds if a token of a given action type is successfully performed. For example, one axiom says that if an agent successfully performs an action of making a commitment with x as the debtor, y as the creditor, and φ as the content, then on all paths agent x is committed, relative to y, to content φ, until agent x possibly cancels such a commitment, after which the commitment no longer exists. The authors also studied fulfillment and violation of commitments.

Using commitment-based semantics proposed by Colombetti (2000) and by Verdicchio and Colombetti (2003), Fornara, Vigano, and Colombetti (2004) proposed to regard an ACL as a set of conventions to act on a fragment of institutional reality. Communicative acts are regarded as a sort of institutional actions, that is, as actions performed within an institution to modify a fragment of social reality (Searle, 1995). According to the authors, defining the semantics of an ACL has two sides: one side is the definition of the institutional effects brought about by the performance of communicative acts; the other side is the definition of the social context in which agents can carry out institutional actions. Institutional actions are particular types of actions that agents cannot perform by exploiting causal links. Rather, institutional actions are performed on the basis of a shared set of conventions and norms. Norms prescribe which institutional actions should or should not be executed among those that are authorized. They are important in the sense that they make an agent’s behavior at least partially predictable and allow agents to coordinate their actions according to the expected behavior of the others.

The approach proposed by Colombetti, Fornara, Verdicchio, and Vigano offers an operational specification and a logical definition of agent communication. However, this approach is based solely on the notion of social commitments and neglects the agents’ mental states and their reasoning process. Without this process, it is not clear how agents manipulate their commitments when conversing.

Flores and Kremer (2002) proposed a social model for agent conversations for action based on social commitments and their negotiation. They used observable behavior and the concept of shared social commitments to ensure the coherence of agent conversations. During a conversation, each agent maintains a private record to which shared commitments are added and from which they are removed. The authors formally specified their model using the Z language.

In addition, they defined a basic protocol for the negotiation of social commitments called PFP (Protocol For Proposals). The protocol starts with a proposal from a sender to a receiver to concurrently adopt or discharge a social commitment. Either the receiver replies with an acceptance, rejection, or counteroffer or the sender issues a withdrawal or counteroffer. All utterances except a counteroffer terminate an instance of the protocol. Finally, it is expected that when an acceptance is issued, both speaker and addressee will simultaneously apply the proposed commitments to their record of shared commitments.
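The turn structure of the PFP can be sketched as a transition table. For simplicity, the sketch ignores which party (sender or receiver) makes each move, and the state names are invented.

```python
# Transition table for the Protocol For Proposals (PFP), simplified:
# a proposal opens the protocol; only a counteroffer keeps it running,
# and every other reply terminates the protocol instance.
PFP = {
    "open":     {"propose": "proposed"},
    "proposed": {"accept": "closed", "reject": "closed",
                 "withdraw": "closed", "counteroffer": "proposed"},
}

def run(utterances):
    """Replay a sequence of utterances; a KeyError means an illegal move."""
    state = "open"
    for u in utterances:
        state = PFP[state][u]
    return state
```

On acceptance, both participants would then simultaneously apply the proposed commitment changes to their records of shared commitments, which the table does not model.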

Flores et al. (2004) presented a conversational model where the meaning of messages is based on their use as coordinating devices. They distinguished two types of meaning: speaker's meaning , which is based on the use of messages for the communication of intent, and signal meaning , which is based on the use of messages as coordinating devices incrementing the common ground of interacting agents. Following this view, the meaning of messages is incrementally defined based on the following levels: a compositional level , where the meaning of messages is given according to their constituents; a conversational level , where the meaning of messages is given based on their occurrence as part of a conversation in which agents concur to advance the state of commitments; a commitment state level , where the meaning of messages is given according to the state of the commitments these messages manipulate; and a joint activity level, where the meaning of messages is given according to their use in joint activities.

The social approach is regarded as a change in agent design: from individual (private) representation to social interaction (public representation). An ACL must be conceived taking certain standards into consideration, in such a way that agents belonging to different environments can interact. These standards are supposed to provide the possibility of testing the compliance of agents with respect to the ACL specification. Commitment-based semantics has the advantage of being verifiable because, unlike mental states, commitments are objective and public. They do not need to be reconstituted using inference processes. Compliance testing in this approach is based on the following idea: an observer of a MAS can maintain a record of the commitments being created and modified. From these, the observer can determine the compliance of other agents with respect to the given protocol. However, this technique does not allow us to check whether or not the protocol satisfies the properties it should satisfy, or whether or not the participating agents respect the semantics of the communicative acts. Indeed, when agents communicate using a semantics, we need to verify that they use the same semantics. In Chapter 8, we address this problem in a formal way using a model checking technique.

This approach has also been criticized in (Khan and Lespérance, 2004) because communication cannot be reduced to the public level of social commitments. The reason agents communicate is that doing so serves their private goals. Therefore, they must reason about these goals and the associated beliefs when communicating. Thus, a mentalistic semantics is also essential. For this reason, we think that a combined mental-social-argumentative semantics provides a good understanding of the agents’ communicative behavior.

Furthermore, specifying protocols using a commitment-based approach does not provide a solution to the flexibility problem if agents cannot reason about their commitments. Although the event calculus planner and causal logic offer agents a reasoning mechanism, this reasoning remains elementary, because agents cannot decide about the next act to be performed. The decision-making process is not taken into account in the protocols suggested in this approach. In Chapters 5 and 6, we show that using an argumentative theory in this approach provides such a process. In Chapter 9, we show that integrating dialogue games in a hybrid approach based on commitments and arguments provides more flexibility for these protocols.

The approach proposed by Colombetti and his colleagues is based entirely on social commitments and neglects the agents’ mental aspect. Therefore, this approach captures only the observable part of communication and does not explain how agents can participate in conversations. Finally, although the approach proposed by Singh mentions agents’ mental states, it does not specify how agents establish the link between their mental states and the different commitments. For example, how agents handle their commitments on the basis of their mental states is not specified. In our pragmatic approach (Chapters 5 and 6), we show how this link is established using the agents’ reasoning mechanism.

Another approach, called the argumentative approach, was proposed by Amgoud and her colleagues (Amgoud, 1999), (Amgoud et al., 2000a, 2000b, 2002) as an extension of Dung’s work (Dung, 1995), and by McBurney and his colleagues (McBurney and Parsons, 2000), (McBurney, 2002), (McBurney et al., 2002). This approach is based upon an argumentation system that can include a preference relationship between arguments (Amgoud, 1999). According to this approach, the agents’ reasoning capabilities are often linked to their ability to argue: to establish links between different facts, to determine whether a fact is acceptable, to decide which arguments support which facts, etc. Before studying this approach, we introduce some preliminary concepts.

Argumentation theory has been applied in the design of intelligent systems in several ways over the last decade. Arguments can be considered as tentative proofs for propositions (Fox et al., 1992), (Krause et al., 1995). One may imagine that knowledge in some domain is expressed in a logical language, with the axioms of the language corresponding to premises in the domain. Theorems in the language correspond to claims in the domain which can be derived from the premises by successive applications of some set of inference rules. For many real-life domains, the premises will be inconsistent in the sense that contrary propositions may be derived from them. In this formulation, arguments for propositions, or claims, are the same as proofs in a deductive logic, except that the premises on which these proofs rest are not all known to be true. Arguments are thus treated as tentative proofs for claims.

Many formalisms of argumentation such as (Pollock, 1991, 1992), (Prakken and Sartor, 1996), and (Vreeswijk, 1997) regard an argument as a structured chain of rules. An argument begins with one or more premises. After this follows the repeated application of various rules, which generate new conclusions and therefore enable the application of additional rules.

The understanding of an argument as a tentative proof and a chain of rules attends to its internal structure, as analogous to a chain of inference steps connecting a set of premises to a claim. A second strand of research in artificial intelligence has emphasized the relationship between arguments when considered as abstract entities, ignoring their internal structures. This approach has enabled argumentation systems to be defined as defeasible reasoning systems (Pollock, 1991, 1992), (Simari and Loui, 1992). Arguments are thus defeasible, meaning that the argument by itself is not a conclusive reason for the conclusions it brings about. In defeasible logic (also called nonmonotonic logic), inferences are defeasible, that is, the inferences can be defeated when additional information is available.

In this logic, the conclusions are not deductively valid: it is possible that the premises are true while the conclusion is not. Whether or not an argument should be accepted depends on its possible counterarguments. To decide about the acceptability of arguments, Dung (1995) proposed a formal argumentation framework: a set of arguments (considered as abstract entities) together with a binary relationship over this set, called attack. A set of arguments Args is conflict-free if there are no arguments Arg1 and Arg2 in Args such that Arg1 attacks Arg2. A given argument is said to be acceptable with respect to a designated subset S of the set of arguments if every argument which attacks the given argument is itself attacked by an argument in S. Such a subset S is said to be admissible if it is conflict-free and every argument it contains is acceptable with respect to S. Intuitively, acceptable arguments with respect to some set S are those which are defended by the elements of S against all attacks. Similarly, an admissible set of arguments is one which defends its own members against all attacks.
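Dung’s definitions translate almost directly into code. In the sketch below, an argumentation framework is a set of argument names plus a set of attack pairs:

```python
def conflict_free(S, attacks):
    """S is conflict-free if no member of S attacks another member of S."""
    return not any((a, b) in attacks for a in S for b in S)

def acceptable(arg, S, args, attacks):
    """arg is acceptable w.r.t. S if every attacker of arg is attacked by S."""
    return all(any((defender, attacker) in attacks for defender in S)
               for attacker in args if (attacker, arg) in attacks)

def admissible(S, args, attacks):
    """S is admissible if it is conflict-free and defends all its members."""
    return conflict_free(S, attacks) and all(
        acceptable(a, S, args, attacks) for a in S)
```

For example, with arguments {a, b, c} where b attacks a and c attacks b, the set {a, c} is admissible (c defends a against b), while {a} alone is not, since nothing in it counters b’s attack.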

The monological models of argumentation, like Toulmin’s model (Toulmin, 1958), focus on structural relationships between arguments. On the contrary, formal dialectics proposes dialogical structures to model the connectedness of utterances. Dialectical models focus on the issue of fallacious arguments, i.e., invalid arguments that appear to be valid. They are rule-governed structures of organized conversations in which two parties (in the simplest case) speak in turn in an orderly way. These rules are the principles that govern the participants’ acts, and consequently the use of dialectical moves.

Hamblin (1970) and MacKenzie (1979) proposed a mathematical model of dialogues. They defined some connectors necessary for the formalization of the propositional contents of utterances, and a set of locutions for capturing the speech acts performed by participants when conversing. The dialectical system proposed by MacKenzie, called system DC, is an extension of the one proposed by Hamblin. MacKenzie’s DC, proposed in the course of analyzing the fallacy of question-begging, provides a set of rules for arguing about the truth of a proposition. Each participant, called a player, has the goal of convincing the other participant, and can assert or retract facts, challenge the other player’s assertions, ask whether something is true or not, and demand that inconsistencies be resolved. When a player asserts a proposition or an argument for a proposition, this proposition or argument is inserted into a public store accessible to both participants. These stores are called commitment stores (CS). There are rules which define how the commitment stores are updated and whether particular illocutions can be uttered at a particular time.

MacKenzie’s dialectical system mainly consists of:

1. A set of moves: linguistic acts such as assertions, questions, etc.

2. A commitment store: it contains the different propositions and arguments asserted by the players. This store, accessible to all the players, makes it possible to keep track of the various phases of the dialogue.

3. A set of dialogue rules: they define the permitted and the prohibited moves. These rules have the form: if a given condition holds, certain moves are prohibited. A dialogue is said to be successful when the participants conform to its rules.

The language used in DC contains propositional formulae: "p", "¬p" and "p → q". Locutions are constructed from communicative functions that are applied to these propositions. For example, the moves "question(fine)" and "assertion(fine, fine → hot)" indicate respectively the question "is it fine?" and the assertion "the weather is fine, and when the weather is fine, the weather is hot".

Table 4.1 illustrates the evolution of the CSs of two players A and B during the following dialogue:

A1: The doctors cannot perform this surgery.

B2: Why?

A3: Because the patient is too old and he refuses.

B4: Why does he refuse?

A5: Because there is little chance of success.

The dialogue starts with A’s assertion (d): "the doctors cannot perform this surgery". Thus, A commits itself and commits its adversary B to this fact. Thereafter, B challenges this assertion (in this case one speaks of a disengagement from the fact and an engagement on the challenge). After that, A provides a justification, which commits the two players to this assertion and to the fact that this assertion logically implies the challenged fact. The dialogue continues in a similar way with B’s challenge of part of A’s justification, which calls for a new justification from A.
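The commitment-store updates of this exchange can be sketched as follows, using simplified update conventions read off the example: an assertion commits both players to the statement, while a challenge retracts the challenger’s commitment to the challenged statement and records the challenge itself.

```python
# Toy commitment-store updates in the spirit of system DC.
# The update conventions are a simplification, not MacKenzie's full rules.
def assert_stmt(cs, speaker, hearer, stmt):
    """An assertion commits both the speaker and the hearer to stmt."""
    cs[speaker].add(stmt)
    cs[hearer].add(stmt)

def challenge(cs, challenger, stmt):
    """A challenge disengages the challenger from stmt and engages it
    on the challenge 'Why(stmt)?'."""
    cs[challenger].discard(stmt)
    cs[challenger].add(f"Why({stmt})?")
```

Replaying the first three moves of the dialogue (with d the challenged fact and a schematic justification) reproduces the pattern described above: A stays committed to d, while B is committed to the challenge rather than to d.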

Several researchers have attempted to use argumentation techniques for modeling and analyzing negotiation dialogues (Sycara, 1990), (Parsons and Jennings, 1996), (Tohmé, 1997), (Rahwan et al., 2004). Amgoud and her colleagues (2000a, 2000b) extended these proposals by investigating the use of argumentation for a wider range of dialogue types. In this section we summarize this work.

The approach proposed by Amgoud et al. relies upon MacKenzie’s formal dialectics. The dialogue rules of this system are formulated in terms of the arguments that each player can construct. Dialogues are assumed to take place between two agents, P and C, where P argues in favor of some proposition and C argues against it. The players have knowledge bases ΣP and ΣC respectively, containing their beliefs. As in DC, each player has another knowledge base, accessible to both players, containing the commitments made during the dialogue. These commitment stores are denoted CS(P) and CS(C) respectively. The union of the commitment stores can be viewed as the state of the dialogue at turn t. All the bases described above contain propositional formulae and are not closed under deduction.

Both players are equipped with an argumentation system. Each has access to his own private knowledge base and to both commitment stores. The two argumentation systems are then used to help players to maintain the coherence of their beliefs, and thus to avoid asserting things which are defeated by other knowledge from CS ( P ) ∪ CS ( C ). In this sense the argumentation systems help to ensure that players are rational .

To model dialogue types proposed by Walton and Krabbe (1995) (see Chapter 2, Section 2.7.2), the authors used seven dialogue moves: assert , accept , question , challenge , request , promise and refuse . For each move, they defined rationality rules , dialogue rules , and update rules . The rationality rules specify the preconditions for playing the move. The update rules specify how commitment stores are modified by the move. The dialogue rules specify the moves the other player can make next, and so specify the protocol under which the dialogue takes place. Figure 4.3 presents these rules for the assert and challenge moves.
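The three rule layers attached to a move can be sketched for assert. In the sketch below, the argumentation-system check is stubbed out as a predicate passed in by the caller, and the returned reply list stands in for the dialogue rule; none of the names are Amgoud et al.’s own notation.

```python
def play_assert(player, other, cs, p, can_build_argument):
    """Play the assert move for `player`, or return False if irrational.

    cs: dict mapping each player's name to its commitment store (a set).
    can_build_argument(player, p): stands in for the player's
    argumentation system (the rationality check).
    """
    # Rationality rule (precondition): the player can argue for p.
    if not can_build_argument(player, p):
        return False
    # Update rule: p enters the speaker's commitment store.
    cs[player].add(p)
    # Dialogue rule: the moves the other player may make next.
    return ["accept", "challenge"]
```

A challenge move would be written the same way, with its own precondition (p was asserted and is not yet justified), its own store update, and "assert an argument for p" as the permitted reply.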

The authors showed that this framework can be used to implement the language for persuasive negotiation interactions proposed by Sierra et al. (1998). In (Parsons et al., 2002), this approach is used to analyze formal agent dialogues using the dialogue typology proposed by Walton and Krabbe. The authors defined a set of locutions by which agents can trade arguments and a set of protocols by which dialogues can be carried out. In (Parsons et al., 2003), this approach is used to examine the outcomes of the dialogues that an argumentation system permits. As an outcome, the authors used the set of accepted propositions (i.e., what agents come to accept during the course of the dialogue). This argumentation approach has the advantage of linking communication and reasoning, as well as of being verifiable. However, the approach by itself does not capture notions such as obligations, conventions, roles, etc.

On the basis of Amgoud et al.’s work, Sadri et al. (2001) proposed a protocol with fewer locutions, called dialogue moves. The legal dialogue moves are request, promise, accept, refuse, challenge, and justify. The contents of the dialogue moves request and promise are resources, while the contents of the other four dialogue moves are themselves dialogue moves. For example, accept(Move) is used to accept a previous dialogue move Move, and challenge(Move) is used to ask for a justification of a previous dialogue move Move. Because the intended application is a dialogue over scarce resources, the authors proposed a semantics linking utterances to a first-order logic describing resources. In this framework, an agent’s knowledge is described as an abductive logic program consisting of if-then rules and of the resources owned by the agent. The abducibles of this logic program are the possible locutions which the agent may utter in response to a message it receives.

The research work on argumentation that we have described concentrates on formal dialectics. Another field of argumentation in artificial intelligence focuses on discourses which are rhetorically argumentative. This field, called rhetorical argumentation, deals with arguments which are based on the audience’s perception of the world and with evaluative judgments, rather than with establishing the truth of a proposition (Grasso, 2002). In Aristotle’s rhetorical argumentation, the emphasis is put on the audience rather than on the argument itself. In a persuasive dialogue, the rhetorician appeals to the audience’s set of beliefs in order to try to persuade this audience, rather than to achieve general acceptability (Aristotle, 1926). Building on Aristotle’s definition, the philosophers Perelman and Olbrechts-Tyteca (1969) proposed a new rhetoric theory aiming at identifying discursive techniques. Based on an approach that goes from examples to generalization, this theory proposes a collection of argument schemas which are successful in practice. This collection is classified in terms of the objects of the argumentation and the types of audience beliefs that each schema exploits. Each schema is described by associations of concepts, either known or new to the audience, intended to win the audience’s acceptance. A rhetorical schema is meant to express when it is admissible to use a given relationship between concepts. Grasso used this theory to propose a framework for rhetorical argumentation (Grasso, 2002) and a mental model for a rhetorical arguer (Grasso, 2003). The purpose is to build artificial agents able to engage in rhetorical argumentation. In this framework, argumentation aims at reaching an evaluation of an object or of a state of affairs. This evaluation is a way to pass value from one topic to another, in the same way as a deductive argument passes truth from one proposition to another.
Formally, we say that there exists an evaluation of a concept c, in the set of concepts C, from a certain perspective p of a set P from which the evaluation is made, if there exists a mapping E of the pair (c, p) into a set V of values. Assuming that V is a set consisting of two elements, good and bad, we write:

E : C × P → V = {good, bad}

Grasso defines a rhetorical argument as the act of putting forward the evaluation of a concept, on the basis of a relationship existing between this concept and another concept, and by means of a rhetorical schema. If we have a concept c and an evaluation of such a concept, we can put forward a rhetorical argument in favor or against a second concept c’ iff 1) a relationship exists between the two concepts c and c’ and 2) a schema can be identified that exploits such a relation.
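The evaluation mapping E : C × P → V can be illustrated as a finite table. The concepts and perspectives below are invented for illustration: the same concept receives different values from different perspectives, which is exactly what allows a rhetorical argument to pass value from one concept to a related one.

```python
# Illustrative evaluation table E : C x P -> V = {good, bad}.
# Concepts and perspectives are invented examples, not Grasso's own.
E = {
    ("rich_dessert", "gourmet"):      "good",
    ("rich_dessert", "nutritionist"): "bad",
    ("exercise",     "nutritionist"): "good",
}

def evaluate(concept, perspective):
    """Look up the value of a concept from a given perspective."""
    return E[(concept, perspective)]
```

A rhetorical schema would then license transferring, say, the positive evaluation of one concept to a second concept related to it, provided the relation between the two concepts matches the schema.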

As related work, Reed, Walton, and Prakken (Prakken et al., 2003), (Reed and Walton, 2003), (Walton and Reed, 2003) proposed a classification and a formalization of argumentation schemes. Argumentation schemes are forms of argument (structures of inference) representing common types of argumentation. They represent structures of arguments used in everyday discourse, as well as in specific contexts such as legal or scientific argumentation. They can represent the deductive and inductive forms of argument that are classical in logic, but also forms of argument that are neither deductive nor inductive and fall into a third category, sometimes called abductive or presumptive. The authors illustrated how argumentation schemes can be fitted into the technique of argument diagramming using an XML-based system, Araucaria (Reed and Rowe, 2001). This system provides an interface through which the user can mark up a text of discourse to produce an argument diagram. The authors also studied how to model legal reasoning about evidence within general theories of defeasible reasoning and argumentation.

The advantage of the argumentative approach lies in the link that it establishes between communication and reasoning. Like humans, agents must reason in order to take part in intelligent dialogues. In addition, the distinction made between the reasoning level (rationality rules) and the commitment level (update rules) is important for the use of an ACL, because it implicitly exhibits the relation between an agent’s reasoning (in particular on the basis of its argumentation system) and its participation in conversations. However, the commitment level remains elementary, since it only captures the propositions asserted in a dialogue. Other commitment types, such as commitments to perform actions and conditional commitments, are not taken into account. Moreover, the handling of these commitments in a dialogue is only reflected by the addition and removal of propositions to and from commitment stores; attack, defense, justification, and withdrawal operations that could be applied to these commitments are not supported. In addition, to accept or refuse arguments, agents must use not only their argumentation systems but also social considerations such as the other agents’ trustworthiness.

The dialectical systems on which this approach is based have the advantage of being governed by dialectical rules. These systems are normative frameworks of argumentation, conceived as dialectical games that each agent must win. This winning-based vision is useful for modeling certain argumentative dialogues such as persuasion and negotiation. However, it is not adapted to cooperative dialogues such as information-seeking or problem-solving dialogues. In fact, although formal dialectics provides a dialogical structure, it does not offer a complete dialogue model: the evolution and dynamics of dialogues are only captured by their histories, represented by the commitment stores. These histories do not represent the dialogue state and do not distinguish the argumentation phases from the other phases.

In addition to these approaches, certain researchers added to the mental approach some social aspects. These combined approaches are called intentional-conventional approaches.

As outlined by Clark (1974), agent communication is both a cognitive and a social activity. The mere individual dispositions of the participants cannot explain this phenomenon in a satisfactory manner. This is why an increasing number of researchers use the terms mixed or reactive/deliberative approaches (Pulman, 1996), (Traum, 1996), (Hulstijn, 2000a). During a conversation, deliberative processes related to the participants’ intentions and desires can take place, as well as more reactive processes related to the conventional aspects of the interactions. The idea is to integrate social attitudes (obligations, interpersonal relationships, roles, powers, etc.) into mental approaches.

In this respect, Pulman (1996) introduces a BDIO (Belief-Desire-Intention-Obligation) approach. In the same direction, Broersen and his colleagues proposed the BOID approach (Broersen et al., 2001). This approach is an abstract agent representation consisting of four components: beliefs, obligations, intentions, and desires. The simple-minded BOID is a lightweight stimulus-response agent that exhibits only reactive behavior. This simple-minded BOID is extended (as time and resources allow) with deliberation capabilities, which may result in more complex (e.g., proactive) behavior. The BOID architecture contains mechanisms to solve conflicts between the outputs of the four components. The approach proceeds in two phases: the first phase results in an intermediate epistemic state, and the second phase results in new intended actions. Moreover, Rousseau, Moulin and Lapalme (1996) presented a multi-agent system for simulating conversations involving software agents, based on a conversation model and communication protocols designed to take into account phenomena observed in human conversations. The conversation is thought of as a language game (Wittgenstein, 1958) in which agents negotiate about the mental states they transfer to their interlocutors. An agent proposes certain mental states (beliefs, intentions, emotions, etc.) and other participants react to these proposals, accepting or rejecting the proposed mental objects, asking for further information or justifications, etc. Agents thus position themselves with respect to the transferred mental states. In the same direction, Moulin and Bouzouba (Moulin, 1998), (Bouzouba and Moulin, 1999) suggest adding mechanisms enabling agents involved in a conversation to manipulate social knowledge, such as the agents' social power within the interaction context. They show that agents' social relationships should be taken into account in the interaction framework. Thus, they propose an architecture (a conversation manager) that stresses the importance of social relationships and allows agents to handle explicit and implicit information conveyed by speech acts.
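The BOID conflict-resolution mechanism can be illustrated with a toy sketch: candidate attitudes from the four components are accepted in a priority order, and any attitude whose negation is already accepted is skipped. The function, the string-based negation encoding, and the example contents are assumptions made for illustration; the actual BOID architecture is considerably richer.

```python
def resolve(beliefs, obligations, intentions, desires, priority):
    """Toy BOID-style conflict resolution: accept attitudes component by
    component in the given priority order, skipping any attitude whose
    negation ('not X') has already been accepted."""
    state = set()
    components = {"B": beliefs, "O": obligations, "I": intentions, "D": desires}
    for key in priority:
        for prop in components[key]:
            # Compute the syntactic negation of the proposition.
            negation = prop[4:] if prop.startswith("not ") else "not " + prop
            if negation not in state:
                state.add(prop)
    return state

# A "social" agent type lets obligations override desires (order B > O > I > D),
# so the obligation to attend wins over the conflicting desire not to.
state = resolve(
    beliefs={"raining"},
    obligations={"attend meeting"},
    intentions=set(),
    desires={"not attend meeting"},
    priority="BOID",
)
print(state)  # {'raining', 'attend meeting'}
```

Different priority orders over the four components yield different agent types (e.g., a "selfish" order would let desires override obligations), which is how the architecture characterizes reactive versus deliberative behavior.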

In this chapter, we reviewed a number of proposals relevant to the general problem of communication between software agents in a MAS. These various proposals share the theoretical base provided by speech act theory. Beyond isolated exchanges, agents can communicate using traditional protocols such as those of FIPA or protocols based on dialogue games. Table 4.2 illustrates a comparison between these proposals on the basis of three criteria: formalisms, semantics and pragmatics.

The semantics of the mental approach is unverifiable since it is impossible to check, without access to an agent's program, the compliance of this agent with the given semantics. For example, if an agent A informs another agent B that p is true, one cannot check whether A actually believes that p is true. Because it is based on public commitments, the semantics of the social approach is verifiable. The semantics of the argumentative approach is also verifiable because it uses arguments that are public. For example, if an agent A informs another agent B that p is true, one can check whether agent A has an argument supporting p by challenging the assertion. These three semantics are declarative because they are based on attitudes that are described declaratively rather than procedurally. They describe the meaning of the communicative acts rather than how these acts can be used.
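The verifiability contrast can be sketched as follows: an external observer cannot inspect agent A's private beliefs, but under an argumentative semantics it can challenge A's assertion and publicly check whether a supporting argument is produced. All class and function names, and the toy argument format, are illustrative assumptions.

```python
class Agent:
    """Toy agent whose arguments (proposition -> supporting premises)
    are available on request, while its beliefs remain private."""

    def __init__(self, name, arguments):
        self.name = name
        self.arguments = arguments

    def inform(self, proposition):
        # The public speech act: Inform(A, B, p).
        return ("inform", self.name, proposition)

    def on_challenge(self, proposition):
        # Reply to a challenge with a supporting argument, or None.
        return self.arguments.get(proposition)

def verify(agent, proposition):
    """Public compliance check: the assertion respects the argumentative
    semantics iff the agent can answer a challenge with an argument."""
    return agent.on_challenge(proposition) is not None

a = Agent("A", {"p": ["q", "q -> p"]})
a.inform("p")
print(verify(a, "p"))  # True: A can justify p when challenged
print(verify(a, "r"))  # False: no argument supports r
```

No analogous public check exists for the mental semantics, since `verify` would have to inspect A's private belief base rather than an observable reply.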

At the pragmatic level, the mental approach is based on the concept of planning, whereas the argumentative approach uses formal dialectics and dialogue games. The social approach, on the other hand, uses operational descriptions of protocols specified in terms of commitments.

It is clear that the pragmatic level must be improved, because planning, formal dialectics and commitment-based protocols do not allow agents to take part in conversations in a flexible way while respecting their autonomy. To participate flexibly in complex conversations such as negotiations, persuasions and deliberations, agents must be able to make decisions, not merely execute pre-defined plans and protocols. In addition, research on agent communication still lacks a conversational model that specifies the dynamics and evolution of conversations and provides an efficient decision-making process enabling agents to decide how to act next. Moreover, the approaches discussed in this chapter do not take into account the social relationships that can exist between agents, for example how an agent's trustworthiness can serve as an acceptability criterion for arguments. Finally, these approaches do not address the correctness and verification of communication mechanisms. Verifying that a given agent communication protocol satisfies properties that are important in a given application context, and verifying that agents respect the semantics when communicating, are interesting aspects yet to be addressed. In the second part of this dissertation, we propose our unified framework for pragmatics and semantics in which we address these different issues.

© Jamal Bentahar, 2005