Collection Mémoires et thèses électroniques

Chapter 6 Commitment and Argument Network


In this chapter, we propose a formal framework called Commitment and Argument Network (CAN) which offers an external representation of conversations between agents. This framework is based on our pragmatic approach proposed in the previous chapter. Using this formalism allows us: (1) to represent the dynamics of conversations between agents; (2) to analyze agent conversations; (3) to help autonomous agents take part in conversations.

As outlined in Chapters 3 and 4, several proposals on agent communication have focused on modeling pragmatic and semantic issues. However, few researchers have addressed the issue of representing the dynamics of conversations. The purpose of this chapter is to propose a formal framework called Commitment and Argument Network (CAN) for representing these dynamics. This framework represents agent actions likely to take place in a conversation. As outlined in Chapter 5, these actions are interpreted in terms of the creation of and positioning on social commitments and arguments. The proposed formalism allows us to model the dynamics of conversations and offers an external representation of the conversational activity. An external representation of a conversation is a representation of the different communicative acts that can be observed by an external observer. This notion of external representation (Clark, 1996) is extremely useful because it provides conversational agents with a common understanding of the current state of the conversation and its evolution (Rousseau et al., 1996). Based on our formalism, a model is made available to the agents, which they can access simultaneously. This formalism clearly illustrates the creation steps of new commitments and the positioning steps on these commitments, as well as the argumentation steps.

In the previous chapter, we presented our formulation of commitments and of the relations between these commitments and arguments. Indeed, our goal is to develop a pragmatic approach based on commitments and arguments. This approach aims at providing software agents with a flexible means to interact. Thus, agents can participate in conversations by manipulating commitments and by producing arguments. It is the agents’ responsibility (and not the designers’) to choose, in an autonomous way, the actions to be performed by using their argumentation systems. In this chapter, we show how a conversation can be modeled using the CAN formalism on the basis of this approach. In a conversational activity, agents manage commitments and arguments. Our purpose is to represent the dynamics of conversations using this formalism. This representation allows us to ensure conversational consistency and coherence in terms of the actions performed by agents on commitments and arguments. Indeed, this framework has two objectives: analyzing conversations, and providing agents with a means to take part in them.

The rest of this chapter is structured as follows. In Section 6.2, we present the foundations of the CAN formalism. In Section 6.3, we give an example illustrating how an agent conversation can be represented and analyzed using this framework. In Section 6.4, we demonstrate how our formalism can be used as a means permitting agents to take part in conversations. Two additional examples using additional commitment types are then presented in Section 6.5. We show, in Section 6.6, that the CAN framework can represent any argumentative conversation. Finally in Section 6.7, we compare our pragmatic approach and our framework to related work.

In this chapter, we simplify the notation of a social commitment by omitting the argument related to content time. A social commitment will be denoted: SC ( Ag1 , Ag2 , t , φ ) instead of SC ( Ag1 , Ag2 , tsc , φ , tφ ).

A commitment and argument network is a mathematical structure which we define formally as follows (the different components are explained below):

Definition 6.1 A commitment and argument network is a 12-tuple:

< A , E , SC ( Ag1 , Ag2 , t0 , φ0 ), T , Ω , Σ , F , FEΣΣ , FΩ , FAΣΩ , FAΩΩ , FEΩΣ >

where :

  • A : a finite and nonempty set of participants.

In this chapter , we suppose that : A = { Ag1 , Ag2 } .

  • E : a finite and nonempty set of social commitments.

These commitments can be absolute commitments ( ABC ) , conditional commitments ( CC ) or commitment attempts ( CT ) .

E = { SC ( Ag1 , Ag2 , t0 , φ0 ), ... , SC ( Agi , Agj , tn , φn ) } such that: i , j ∈ { 1 , 2 } .

  • SC ( Ag1 , Ag2 , t0 , φ0 ): a distinguished element of E indicating the initial commitment.

This element allows us to define the subject of a conversation.

  • T : the set of time points.

T = { t0 , ... , tn } .

  • Ω : the set of creation and positioning actions.

Ω = { Create , Withdraw , Reactivate , Satisfy , Violate , Accept-content , Refuse-content , Challenge-content , Suspend-content , Change-content } .

  • Σ : the set of argumentation relations.

Σ = { Defend-content , Attack-content , Justify-content , Contradict-content } .

  • F : a partial function relating one commitment to a second commitment using one argumentation relation and a time point. We call this function the commitment-argument-commitment function.

F : E × E → Σ × T

  • FEΣΣ : a partial function relating one commitment to a pair made up of an argumentation relation and a time point using one argumentation relation and another time point. We call this function commitment-argument-argument function.

FEΣΣ : E × Σ × T → Σ × T

  • FΩ : a partial function relating an agent ( a participant ) to a commitment using a set of pairs made up of a creation or a positioning action and a time point. We call this function agent-commitment function.

FΩ : A × E → 2^(Ω × T)

  • FAΣΩ : a partial function relating an agent to an argumentation relation characterized by a time point using a set of pairs made up of a creation or positioning action and a time point. We call this function agent-action-argument function.

FAΣΩ : A × Σ × T → 2^(( Ω − { Change-content }) × T)

  • FAΩΩ : a partial function relating an agent to a creation or a positioning action characterized by a time point using a set of pairs made up of a positioning action and a time point. We call this function the agent-action-action function.

FAΩΩ : A × Ω × T → 2^(( Ω − { Create , Satisfy , Violate , Change-content }) × T)

  • FEΩΣ : a partial function relating a commitment to a creation or a positioning action characterized by a time point using one argumentation relation and a time point. We call this function the commitment-argument-action function.

FEΩΣ : E × Ω × T → Σ × T
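To make the definition concrete, the 12-tuple can be sketched as a data structure in which each partial function is stored as a finite map. This is only an illustrative sketch: all names below (CAN, OMEGA, SIGMA, the field names, the sample time identifiers) are our own choices, not part of the formalism.

```python
from dataclasses import dataclass, field

# Illustrative sketch of Definition 6.1 (names are ours, not the formalism's).
# Commitments are identified by their time point t_i; actions and
# argumentation relations are plain strings.
OMEGA = {"Create", "Withdraw", "Reactivate", "Satisfy", "Violate",
         "Accept-content", "Refuse-content", "Challenge-content",
         "Suspend-content", "Change-content"}
SIGMA = {"Defend-content", "Attack-content",
         "Justify-content", "Contradict-content"}

@dataclass
class CAN:
    A: set                                     # participants
    E: set                                     # social commitments
    initial: str                               # distinguished initial commitment
    T: set                                     # time points
    F: dict = field(default_factory=dict)      # E x E -> Sigma x T
    F_ESS: dict = field(default_factory=dict)  # E x Sigma x T -> Sigma x T
    F_O: dict = field(default_factory=dict)    # A x E -> 2^(Omega x T)
    F_ASO: dict = field(default_factory=dict)  # A x Sigma x T -> sets of (omega, t)
    F_AOO: dict = field(default_factory=dict)  # A x Omega x T -> sets of (omega, t)
    F_EOS: dict = field(default_factory=dict)  # E x Omega x T -> Sigma x T

# Sample entry for the function F: the content of the commitment identified
# by t_i defends the content of the one identified by t_j, at time t_k.
net = CAN(A={"Ag1", "Ag2"}, E={"ti", "tj"}, initial="ti", T={"ti", "tj", "tk"})
net.F[("ti", "tj")] = ("Defend-content", "tk")
```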

Let us now comment upon these sets and functions.

The function F allows us to define the argumentation relation which can exist between two commitment contents, i.e. a defense, an attack, a justification or a contradiction relation. For example:

F ( SC ( Ag1 , Ag2 , ti , φi ), SC ( Ag1 , Ag2 , tj , φj )) = ( Defend-content , tk )

This means that the content of the commitment identified by ti (called the source of the defense relation) defends the content of the commitment identified by tj (called the target of the defense relation). The time point tk , associated with the defense relation, is the time at which this defense occurred.

Schematically, the function F is presented in the following way (Figure 6.1):

In all the figures of this chapter, a social commitment identified by ti will be denoted SCi .

The function FEΣΣ allows us to define an argumentation relation on another argumentation relation. For example:

FEΣΣ ( SC ( Ag1 , Ag2 , ti , φi ), Defend-content , tk ) = ( Attack-content , tl )

This relation points out that the content of the commitment identified by ti attacks at time tl the content of a defense relation that occurred at time tk. This defense relation is defined using the function F . The content of an argumentation relation is the content of the argument used in this relation.

Schematically, we present the function FEΣΣ in the following way (Figure 6.2):

The function FΩ allows us to define a set of creation and positioning actions (acceptance, refusal, etc.) performed by an agent on a commitment content. For example:

FΩ ( Ag1 , SC ( Ag2 , Ag1 , ti , φi )) = {( Accept-content , tk )}

This reflects the acceptance at moment tk of the content related to the commitment identified by ti .

Schematically, we present the function FΩ as follows (Figure 6.3):

The function FAΣΩ allows an agent to take position by accepting or refusing an argumentation relation. For instance:

FAΣΩ ( Ag1 , Defend-content , tk ) = {( Refuse-content , tl )}

This means that the agent Ag1 refuses at time tl the defense relation which is defined by the function F . The defense relation has occurred at time tk .

The function FAΣΩ is presented as follows (Figure 6.4):

The function FAΩΩ allows an agent to position itself relative to a positioning action by accepting it, refusing it, challenging it, withdrawing it or reactivating it. The positioning action on which an agent can take position can be defined by the function FΩ or the function FAΣΩ . For instance:

FAΩΩ ( Ag1 , Refuse-content , tk ) = {( Challenge-content , tl )}

This example shows the case in which the agent Ag1 challenges at time tl a refusal action that occurred at time tk . This refusal action is defined by the function FΩ .

Schematically, the function FAΩΩ is illustrated as follows (Figure 6.5):

The function FEΩΣ allows us to define an argumentation relation binding a commitment to a creation or a positioning action. This action is defined by the function FΩ . For example:

FEΩΣ ( SC ( Ag1 , Ag2 , ti , φi ), Refuse-content , tk ) = ( Defend-content , tl )

This example highlights the case in which the content of the commitment identified by ti defends at time tl the refusal action that occurred at time tk . The refusal action is defined by the function FΩ .

The graphical representation of this function is shown as follows (Figure 6.6):

In this section, we show how to represent a dialogue between two agents using the CAN framework. We use the conceptual graph notation (CG) proposed by Sowa (1984) in order to describe the propositional contents of commitments. Conceptual graphs are a system of logic and a knowledge representation language consisting of concepts and relations between these concepts. They are labeled graphs in which concept nodes are connected by relation nodes. With their direct mapping to natural language, CGs serve as an intermediate language for translating computer-oriented formalisms to and from natural languages. A concept is represented by a type (e.g. PERSON) and a referent (e.g. john) and denoted [TYPE: Referent] (e.g. [PERSON: John]). A conceptual relation links two concepts and is written in parentheses. When representing natural language sentences, case relations are normally used; examples are AGNT (agent), PTNT (patient), OBJ (object), CHRC (characteristic) and PTIM (point in time). The advantage of CGs over predicate calculus is that they can be used to represent the literal meaning of utterances, without ambiguities, and in a logically precise form.
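As a purely illustrative sketch of this linear notation (the sentence and its graph are our own example, not taken from the dialogue studied in this section), "John drives a car" could be written with concepts in square brackets and case relations in parentheses:

```python
# Illustrative CG linear form (our own example): concepts in square
# brackets, case relations in parentheses, arrows linking them.
cg = "[DRIVE]-(AGNT)->[PERSON: John]  [DRIVE]-(OBJ)->[CAR]"

# Crude extraction of the concept nodes from the linear form.
import re
concepts = re.findall(r"\[([^\]]+)\]", cg)
```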

Before considering the example, we introduce the following notation: S(H, h) denotes the set of the different states of an argument ( H , h ). S(H, h) is a finite and ordered set. The ordering relation ≺ between the elements of this set is defined as follows:

Definition 6.2 s ≺ s′ iff the argument ( H , h ) was in state s before being in state s′.

The current state of an argument ( H , h ) is the greatest element of the set S(H, h) according to the ordering relation ≺.

Let us consider the following dialogue D1 :

SA ( I0 , Ag1 , Ag2 , tu0 , U0 ): The disease M is not genetic.

SA ( I1 , Ag2 , Ag1 , tu1 , U1 ): Why?

SA ( I2 , Ag1 , Ag2 , tu2 , U2 ): Because it does not appear at birth.

SA ( I3 , Ag2 , Ag1 , tu3 , U3 ): A disease which does not appear at birth can be genetic as well.

SA ( I4 , Ag1 , Ag2 , tu4 , U4 ): How?

SA ( I5 , Ag2 , Ag1 , tu5 , U5 ): It can be due to a genetic anomaly in the DNA appearing at a certain age.

SA ( I6 , Ag1 , Ag2 , tu6 , U6 ): It is true, you are right.

With its speech act identified by I0 , agent Ag1 creates, as explained in Chapter 5, a propositional commitment PC ( Ag1 , Ag2 , t0 , p0 ), which is the initial commitment of the dialogue; p0 is the propositional content, which can be described by the following CG:

In the CAN formalism, this speech act results in the function:

FΩ ( Ag1 , PC ( Ag1 , Ag2 , t0 , p0 )) = {( Create , tu0 )}

Thereafter, agent Ag2 performs the speech act identified by I1 and takes position on the content of PC ( Ag1 , Ag2 , t0 , p0 ) by challenging it. Thus, "challenged" becomes the current state of the commitment. Hence, we have:

In the CAN formalism, this speech act results in the function:

FΩ ( Ag2 , PC ( Ag1 , Ag2 , t0 , p0 )) = {( Challenge-content , tu1 )}

Then, agent Ag1 justifies the propositional content p0 of its commitment by performing the speech act identified by I2 . Hence, it creates another commitment PC ( Ag1 , Ag2 , t1 , p1 ). Thus, "justified" becomes the current state of the commitment identified by t0 . We have:

where p1 is described by the following CG:

Agent Ag1 ’s knowledge base contains the arguments ( p1 , p0 ) and ( p1 , p1 ). Thus, in argumentation terms, agent Ag1 presents its argument ( p1 , p0 ). We have:

Arg ( Ag1 , p1 , Justify-content ( Ag1 , tu2 , PC ( Ag1 , Ag2 , t0 , p0 )))

This is represented in the CAN formalism by the functions:

FΩ ( Ag1 , PC ( Ag1 , Ag2 , t1 , p1 )) = {( Create , tu2 )},

F ( PC ( Ag1 , Ag2 , t1 , p1 ), PC ( Ag1 , Ag2 , t0 , p0 )) = ( Justify-content , tu2 )

By the speech act identified by I3 , agent Ag2 refuses Ag1 ’s argument. Then, it creates a new commitment PC ( Ag2 , Ag1 , t2 , p2 ). We have:

where the content p2 is described by the following CG:

This is represented in the CAN formalism by the functions:

FAΣΩ ( Ag2 , Justify-content , tu2 ) = {( Refuse-content , tu3 )},

FΩ ( Ag2 , PC ( Ag2 , Ag1 , t2 , p2 )) = {( Create , tu3 )}

Agent Ag1 ’s speech act, identified by I4 , challenges the content of the commitment identified by t2 . The commitment content thus moves to the "challenged" state:

In the CAN formalism, this results in the function:

FΩ ( Ag1 , PC ( Ag2 , Ag1 , t2 , p2 )) = {( Challenge-content , tu4 )}

Then, agent Ag2 justifies the content of its commitment PC ( Ag2 , Ag1 , t2 , p2 ) by performing the speech act identified by I5 . It then creates another commitment PC ( Ag2 , Ag1 , t3 , p3 ). Thus, "justified" becomes the current state of PC ( Ag2 , Ag1 , t2 , p2 ). We have:

where the content p3 is described by the following CG:

In argumentation terms, agent Ag2 presents its argument ( p3 , p2 ). Thus, we have:

Arg ( Ag2 , p3 , Justify-content ( Ag2 , tu5 , PC ( Ag2 , Ag1 , t2 , p2 )))

In the CAN formalism, this results in the following functions:

FΩ ( Ag2 , PC ( Ag2 , Ag1 , t3 , p3 )) = {( Create , tu5 )},

F ( PC ( Ag2 , Ag1 , t3 , p3 ), PC ( Ag2 , Ag1 , t2 , p2 )) = ( Justify-content , tu5 )

Agent Ag1 ’s speech act, identified by I6 , reflects Ag1 ’s acceptance of both the content of the commitment identified by t3 and the argument defending it. Thus, "accepted" is the final state of this commitment. We have:

In the CAN formalism, this is represented by the functions:

FAΣΩ ( Ag1 , Justify-content , tu5 ) = {( Accept-content , tu6 )},

FΩ ( Ag1 , PC ( Ag2 , Ag1 , t3 , p3 )) = {( Accept-content , tu6 )}

To summarize, the dialogue D1 can be represented by the following CAN:

< A , E , PC ( Ag1 , Ag2 , t0 , p0 ), T , Ω , Σ , F , FEΣΣ , FΩ , FAΣΩ , FAΩΩ , FEΩΣ > such that:

Figure 6.7 shows the graphical representation of the network.

So far, we have shown how the CAN formalism enables us to illustrate the connectedness of speech acts performed by agents in a conversation. In the previous section’s example, we started from an existing dialogue, which we examined and modeled using a CAN. This highlights a process that enables us to analyze a conversation using the CAN formalism. However, our formalism also provides a means for agents to take part in conversations.

Agents can jointly build the network that represents their conversation as it progresses. This allows agents:

1- To make sure at any time that the conversation is consistent;

2- To determine which speech act to perform on the basis of the current state of the conversation, using an argumentation system and other cognitive elements.

Consistency is ensured by the relationships existing between various commitments, diverse argumentation relations and different actions (creation, acceptance, fulfillment, etc.). A speech act is consistent with the rest of the conversation if it leads to the creation of a new commitment related to another commitment through an argumentation relation, or if it makes it possible to take position on a commitment, on an argumentation relation or on an action (i.e. creation, refusal, etc.). Moreover, the agent must know everything about the current state of the conversation in order to determine its next speech act. For example, when an agent creates a commitment and/or an argumentation relation, the other agent may decide to act on what has been created by accepting it, by refusing it, or by challenging it, depending on its argumentation system. Similarly, when an agent finds that its commitment, argument or action is being challenged, it must create a commitment in order to justify it. The network is built as the conversation progresses. This process differs from the one used to analyze a conversation. Therefore, agents use a dynamic process in order to build the network while taking part in the conversation.
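The consistency criterion just described (every positioning action targets an element that already exists in the network) can be sketched as a simple check. The event encoding below is our own illustration, not part of the formalism.

```python
# Sketch of the consistency criterion: a speech act is consistent if it
# creates a new element or takes position on an element created earlier.
# The (kind, identifier) event encoding is purely illustrative.
def is_consistent(events):
    """events: ordered list of ('create', id) or ('position', target_id)."""
    created = set()
    for kind, ident in events:
        if kind == "create":
            created.add(ident)
        elif kind == "position" and ident not in created:
            return False  # positioning on an element that was never created
    return True

ok = is_consistent([("create", "t0"), ("position", "t0")])
bad = is_consistent([("position", "t0"), ("create", "t0")])
```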

In order to illustrate this way of using the CAN formalism, we revisit the example of Section 6.3 and demonstrate how agents build the network piece by piece while performing their speech acts. By doing that, agents are able to continue the conversation. The rules for building a CAN are the constraints specified in the axioms presented in Chapter 5. These axioms specify how agents can perform communicative acts according to their argumentation systems. Agent Ag1 ’s knowledge base contains the arguments ( p1 , p0 ), ( p1 , p1 ), and ( p3 , p3 ). Agent Ag2 ’s knowledge base contains the arguments ( p3 , p2 ) and ( p3 , p3 ).

Let us simulate the conversation of agents Ag1 and Ag2 using the CAN approach. Agent Ag1 decides to start the conversation about a particular topic p0 that interests it (the underlying mechanism related to this choice belongs to the cognitive layer that is not considered here (see our agent architecture in Section 5.6 of Chapter 5)). Hence, Ag1 creates a propositional commitment whose content is p0 since it has an argument supporting it, i.e.:

FΩ ( Ag1 , PC ( Ag1 , Ag2 , t0 , p0 )) = {( Create , tu0 )}

This corresponds to the speech act identified by I0 :

SA ( I0 , Ag1 , Ag2 , tu0 , U0 ): The disease M is not genetic.

Then, agent Ag2 decides to take position on the content of PC ( Ag1 , Ag2 , t0 , p0 ) by challenging it since it does not have any argument in favor or against it. As a matter of fact, Ag2 wants to know which Ag1 ’s argument supports the content of this commitment. Therefore, Ag2 performs the action corresponding to the speech act identified by I1 :

SA ( I1 , Ag2 , Ag1 , tu1 , U1 ): Why?

FΩ ( Ag2 , PC ( Ag1 , Ag2 , t0 , p0 )) = {( Challenge-content , tu1 )}

We notice here that, as for commitment attempts (Chapter 5, Axiom A 3), we cannot verify whether Ag2 has an argument for or against p0 , because this aspect is related to its private internal state.

Now, Ag1 must defend its proposition: it creates the commitment PC ( Ag1 , Ag2 , t1 , p1 ) whose content justifies the content of PC ( Ag1 , Ag2 , t0 , p0 ). In doing so, this agent performs the action corresponding to the speech act identified by I2 :

SA ( I2 , Ag1 , Ag2 , tu2 , U2 ): Because it does not appear at birth.

FΩ ( Ag1 , PC ( Ag1 , Ag2 , t1 , p1 )) = {( Create , tu2 )}

F ( PC ( Ag1 , Ag2 , t1 , p1 ), PC ( Ag1 , Ag2 , t0 , p0 )) = ( Justify-content , tu2 )

Ag2 has an argument against the justification relation. Consequently, it refuses it by creating the commitment PC ( Ag2 , Ag1 , t2 , p2 ). It performs the action corresponding to the speech act identified by I3 :

SA ( I3 , Ag2 , Ag1 , tu3 , U3 ): A disease which does not appear at birth can be genetic as well.

FAΣΩ ( Ag2 , Justify-content , tu2 ) = {( Refuse-content , tu3 )}

FΩ ( Ag2 , PC ( Ag2 , Ag1 , t2 , p2 )) = {( Create , tu3 )}

Because agent Ag1 does not have any argument for or against p2 , it challenges the content of PC ( Ag2 , Ag1 , t2 , p2 ) using its argumentation system. By doing that, it performs the action corresponding to the speech act identified by I4 :

SA ( I4 , Ag1 , Ag2 , tu4 , U4 ): How?

FΩ ( Ag1 , PC ( Ag2 , Ag1 , t2 , p2 )) = {( Challenge-content , tu4 )}

The content of Ag2 ’s commitment PC ( Ag2 , Ag1 , t2 , p2 ) is now challenged. Therefore, agent Ag2 must try to justify it. Because its knowledge base contains the argument ( p3 , p2 ), it creates the commitment PC ( Ag2 , Ag1 , t3 , p3 ) and performs the actions corresponding to the speech act identified by I5 :

SA ( I5 , Ag2 , Ag1 , tu5 , U5 ): It can be due to a genetic anomaly in the DNA appearing at a certain age.

FΩ ( Ag2 , PC ( Ag2 , Ag1 , t3 , p3 )) = {( Create , tu5 )}

F ( PC ( Ag2 , Ag1 , t3 , p3 ), PC ( Ag2 , Ag1 , t2 , p2 )) = ( Justify-content , tu5 )

Thereafter, because Ag1 ’s knowledge base contains an argument for p3 , it accepts the content of PC ( Ag2 , Ag1 , t3 , p3 ) and the argumentation relation ( Justify-content , tu5 ) using its argumentation system. It performs the actions corresponding to the speech act identified by I6 :

SA ( I6 , Ag1 , Ag2 , tu6 , U6 ): It is true, you are right.

FAΣΩ ( Ag1 , Justify-content , tu5 ) = {( Accept-content , tu6 )}

FΩ ( Ag1 , PC ( Ag2 , Ag1 , t3 , p3 )) = {( Accept-content , tu6 )}

In the following examples, we give the final version of the networks without illustrating the steps that led to their construction. Moreover, for simplicity, we do not describe the content of commitments.

The example presented in Section 6.3 illustrated the case in which an agent takes position on a commitment and on an argumentation relation. The following example of dialogue ( D2 ) illustrates the case in which an agent takes position on a creation action.

SA ( I0 , Ag1 , Ag2 , tu0 , U0) : I will travel to the Himalayas.

SA(I1 , Ag2 , Ag1 , tu1 , U1) : Why do you tell me that?

SA(I2 , Ag1 , Ag2 , tu2 , U2) : It is only to inform you.

SA(I3 , Ag2 , Ag1 , tu3 , U3) : Ok, thank you.

The network associated with this dialogue is:

< A , E , AC ( Ag1 , Ag2 , t0 , ( α , p0 )), T , Ω , Σ , F , FEΣΣ , FΩ , FAΣΩ , FAΩΩ , FEΩΣ > such that:

A = { Ag1 , Ag2 }

E = { AC ( Ag1 , Ag2 , t0 , ( α , p0 )), PC ( Ag1 , Ag2 , t1 , p1 )}

T = { tu0 , ..., tu3 }

FΩ ( Ag1 , AC ( Ag1 , Ag2 , t0 , ( α , p0 ))) = {( Create , tu0 )}

FAΩΩ ( Ag2 , Create , tu0 ) = {( Challenge-content , tu1 )}

FΩ ( Ag1 , PC ( Ag1 , Ag2 , t1 , p1 )) = {( Create , tu2 )}

FEΩΣ ( PC ( Ag1 , Ag2 , t1 , p1 ), Create, tu0) = ( Justify-content , tu2 )

FΩ ( Ag2 , PC ( Ag1 , Ag2 , t1 , p1 )) = {( Accept-content , tu3 )}

The graphical representation of this network is illustrated by Figure 6.8.

Agent Ag1 creates an action commitment AC ( Ag1 , Ag2 , t0 , ( α , p0 )) (it is committed to traveling to the Himalayas) by performing the speech act identified by I0 . Thereafter, agent Ag2 challenges the creation action of this commitment by performing the speech act identified by I1 . In order to justify its creation action of AC ( Ag1 , Ag2 , t0 , ( α , p0 )), Ag1 creates a propositional commitment PC ( Ag1 , Ag2 , t1 , p1 ) by performing the speech act identified by I2 . Finally, Ag2 accepts the content of this commitment by performing the speech act identified by I3 .
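Dialogue D2 exercises the two functions that target actions rather than commitment contents: FAΩΩ (positioning on a creation action) and FEΩΣ (an argumentation relation binding a commitment to an action). The entries listed above can be sketched as follows (an illustrative encoding of our own, commitments keyed by their identifiers):

```python
# Dialogue D2: entries for the two functions that target actions rather
# than commitment contents (illustrative encoding, commitments keyed by t_i).
F_O = {
    ("Ag1", "t0"): {("Create", "tu0")},
    ("Ag1", "t1"): {("Create", "tu2")},
    ("Ag2", "t1"): {("Accept-content", "tu3")},
}
# Ag2 positions on the creation action of t0 (agent-action-action function).
F_AOO = {("Ag2", "Create", "tu0"): {("Challenge-content", "tu1")}}
# The content of t1 justifies that creation action (commitment-argument-action).
F_EOS = {("t1", "Create", "tu0"): ("Justify-content", "tu2")}
```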

The CAN formalism also allows us to manage commitment attempts. Dialogues D3 and D4 illustrate respectively the acceptance and the refusal of a commitment attempt.

Dialogue D3 :

SA ( I0 , Ag1 , Ag2 , tu0 , U0 ): Can you drive me to the airport at 5PM?

SA ( I1 , Ag2 , Ag1 , tu1 , U1 ): Yes, I can.

SA ( I2 , Ag2 , Ag1 , tu2 , U2 ): I will be available at 5PM.

The network associated with this dialogue is:

< A , E , ACT ( Ag1 , Ag2 , t0 , ( α , p0 )), T , Ω , Σ , F , FEΣΣ , FΩ , FAΣΩ , FAΩΩ , FEΩΣ > such that:

A = { Ag1 , Ag2 }

E = { ACT ( Ag1 , Ag2 , t0 , ( α , p0 )), AC ( Ag2 , Ag1 , t1 , ( α , p0 )), PC ( Ag2 , Ag1 , t2 , p1 )}

T = { tu0 , tu1 , tu2 }

FΩ ( Ag1 , ACT ( Ag1 , Ag2 , t0 , ( α , p0 ))) = {( Create , tu0 )}

FΩ ( Ag2 , ACT ( Ag1 , Ag2 , t0 , ( α , p0 ))) = {( Accept-content , tu1 )}

FΩ ( Ag2 , AC ( Ag2 , Ag1 , t1 , ( α , p0 ))) = {( Create , tu1 )}

FΩ ( Ag2 , PC ( Ag2 , Ag1 , t2 , p1 )) = {( Create , tu2 )}

F ( PC ( Ag2 , Ag1 , t2 , p1 ), AC ( Ag2 , Ag1 , t1 , ( α , p0 ))) = ( Justify-content , tu2 )

The graphical representation of this network is illustrated by Figure 6.9.

Agent Ag1 creates a commitment attempt ACT ( Ag1 , Ag2 , t0 , ( α , p0 )) about an action α by performing the speech act identified by I0 . Agent Ag2 accepts this commitment attempt by performing the speech act identified by I1 . Therefore, it creates the action commitment AC ( Ag2 , Ag1 , t1 , ( α , p0 )) (it commits to driving agent Ag1 to the airport at 5PM). Thereafter, Ag2 creates the propositional commitment PC ( Ag2 , Ag1 , t2 , p1 ) that supports the content p0 by performing the speech act identified by I2 .

Dialogue D4 :

SA ( I0 , Ag1 , Ag2 , tu0 , U0 ): Can you drive me to the airport at 5PM?

SA ( I1 , Ag2 , Ag1 , tu1, U1 ): No, I cannot.

SA ( I2 , Ag1 , Ag2 , tu2, U2 ): Why not?

SA ( I3 , Ag2 , Ag1 , tu3, U3 ): Because I have a meeting at 5PM.

SA ( I4 , Ag1 , Ag2 , tu4, U4 ): Ok, thank you.

The network associated with this dialogue is:

< A , E , ACT ( Ag1 , Ag2 , t0 , ( α , p0 )), T , Ω , Σ , F , FEΣΣ , FΩ , FAΣΩ , FAΩΩ , FEΩΣ >

such that:

A = { Ag1 , Ag2 }

E = { ACT ( Ag1 , Ag2 , t0 , ( α , p0 )), PC ( Ag2 , Ag1 , t1 , ¬ p0 ), PC ( Ag2 , Ag1 , t2 , p1 )}

T = { tu0 , ..., tu3 }

FΩ ( Ag1 , ACT ( Ag1 , Ag2 , t0 , ( α , p0 ))) = {( Create , tu0 )}

FΩ (Ag2, ACT ( Ag1 , Ag2 , t0 , ( α , p0 ))) = {( Refuse-content , tu1 )}

FΩ ( Ag2 , PC ( Ag2 , Ag1 , t1 , ¬ p0 )) = {( Create , tu1 )}

FΩ ( Ag1 , PC ( Ag2 , Ag1 , t1 , ¬ p0 )) = {( Challenge-content , tu2 )}

FΩ (Ag2, PC ( Ag2 , Ag1 , t2 , p1 )) = {(Create, tu3 )}

F ( PC ( Ag2 , Ag1 , t2 , p1 ), PC ( Ag2 , Ag1 , t1 , ¬ p0 )) = ( Justify-content , tu3 )

The graphical representation of this network is illustrated by Figure 6.10.

As a result of refusing the commitment attempt ACT ( Ag1 , Ag2 , t0 , ( α , p0 )) by performing the speech act identified by I1 , agent Ag2 creates the propositional commitment PC ( Ag2 , Ag1 , t1 , ¬ p0 ). By performing the speech act identified by I2 , agent Ag1 challenges the content of this commitment. Therefore, Ag2 creates the propositional commitment PC ( Ag2 , Ag1 , t2 , p1 ), by performing the speech act identified by I3 , in order to justify PC ( Ag2 , Ag1 , t1 , ¬ p0 ).

So far, we have shown how the CAN formalism allows us to represent conversations by illustrating the connectedness of speech acts performed by agents. However, we did not show whether it can represent any coherent conversation. To do so, we have to provide a mathematical demonstration. The purpose is to show that the formalism is sufficient to handle any argumentative conversation for communication between software agents. An argumentative conversation is a conversation that contains argumentation relations in order to achieve a goal (for example a persuasion or a negotiation goal). First, we have to define what a conversation is and what a coherent conversation is. For us, a conversation is a sequence of utterances (i.e. a sequence of speech acts). A coherent conversation is a conversation in which there is a positioning relation or an argumentation relation between the utterances. For example, if an agent Ag1 performs a speech act whose content is p , and another agent Ag2 performs another speech act in which it accepts, refuses, challenges, attacks, etc. p , then this part of the conversation is considered coherent. However, if Ag2 performs a speech act whose content is q without any positioning or argumentation relation between p and q , then the conversation is considered incoherent.
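This notion of coherence can be sketched as a check over the sequence of utterances. The encoding (each utterance carries the content it positions on or argues about, or None for the opening move) is our own illustration, not part of the formal treatment that follows.

```python
# Sketch of the coherence criterion: after the opening utterance, every
# utterance must position on, or argue about, some earlier content.
# The (content, relates_to) encoding is purely illustrative.
def is_coherent(utterances):
    """utterances: list of (content, relates_to); the opener has relates_to=None."""
    contents = set()
    for i, (content, relates_to) in enumerate(utterances):
        if i > 0 and relates_to not in contents:
            return False  # e.g. Ag2 utters q with no relation to p
        contents.add(content)
    return True

coherent = is_coherent([("p", None), ("challenge p", "p")])
incoherent = is_coherent([("p", None), ("q", None)])
```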

In this section we show that the CAN formalism covers all the elements describing a conversation. We use for this purpose the following formal presentation due to (Günter, 1984).

Let A be a set of agents ( A = { Ag1 , ..., Agn }), L be a set of well-formed expressions ( L = { φ0 ,..., φm }), P be a set of designatory phrases ( P = { p0 ,..., pk }), and V be a set of performatives ( V = { v0 ,..., vl }). A conversation is a finite sequence of 4-tuples, each of which consists of a name Agi ∈ A , a well-formed expression φi ∈ L , a performative verb vi ∈ V , and a designatory phrase pi ∈ P . The well-formed expressions represent the participants’ statements. The term sequence highlights the temporal order in which these expressions are used. The names represent the participants in the conversation. The performative verb indicates the type of the speech act performed when using the expression. The designatory phrase identifies the speech act. Formally:

C is a conversation iff there are a language L , a set A of participants, a set V of performative verbs, and a set P of designatory phrases such that C is a finite sequence of 4-tuples ( Agi , φi , vi , pi ) with Agi ∈ A , φi ∈ L , vi ∈ V and pi ∈ P .

The CAN formalism allows us to represent these various elements. The language L is used to describe the commitment content (for example predicate calculus or conceptual graphs). The expressions φi are thus represented by the commitment content φ . The set of participants is the set A of the CAN formalism. The performative verbs and the designatory phrases are captured by the actions that agents perform on commitments and arguments. The sequence of 4-tuples is modeled by the utterance times associated with the different actions in the CAN formalism, i.e. by the set T of time points associated with the set of actions Ω and with the set of argumentation relations Σ (see Definition 6.1).
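A conversation in Günter's presentation is thus a finite sequence of 4-tuples. A minimal sketch follows; the concrete performative verbs and contents are our own illustration, not drawn from Günter's text.

```python
# A Günter-style conversation: a finite sequence of 4-tuples
# (agent name, well-formed expression, performative verb, designatory phrase).
# All concrete values below are illustrative.
conversation = [
    ("Ag1", "p0", "assert",    "I0"),
    ("Ag2", "p0", "challenge", "I1"),
    ("Ag1", "p1", "justify",   "I2"),
]

# The CAN elements recoverable from the sequence, as discussed above:
participants = {ag for (ag, _, _, _) in conversation}   # the set A
expressions = [phi for (_, phi, _, _) in conversation]  # contents in L
order = [d for (_, _, _, d) in conversation]            # temporal order
```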

According to (Günter, 1984), a conversation can also highlight the goal of the accomplished actions. In the CAN formalism, this is illustrated by the fact that it is possible to justify not only a commitment content, but also a creation action of a commitment (see Definition 5.5 of Chapter 5).

Notation

We denote by C the set of coherent conversations and by N the set of commitment and argument networks. We denote by Nc the commitment and argument network associated with a coherent conversation c , with c an element of C and Nc an element of N .

Proposition 6.3 ∀ c ∈ C , ∃ Nc ∈ N .

In other words, for any coherent conversation, there is always a CAN which represents it.

Proof

We use a proof by contradiction. A conversation can be described in the simplest form as a sequence of utterances U0 , ..., Un . Each utterance Ui is associated with a participant Agj .

Let us assume that there is a coherent conversation c ∈ C such that no Nc ∈ N exists. In other words, let us assume that there is a coherent conversation such that no network can represent it. This implies the existence of an utterance which one cannot represent in a network. Let Ui be such an utterance. Therefore, the utterance Ui does not allow us to perform any of the creation, positioning or argumentation actions captured by the functions of Definition 6.1.

Only two possibilities remain to interpret Ui :

1- Taking position on a commitment, an action or an argumentation relation which does not belong to the network. In this case the resulting conversation is not coherent, because it highlights a positioning on an element which was never created, for example challenging the content of a commitment which does not exist (see our definition of coherence above).

2- The utterance Ui cannot result in an element which can be supported by the elements of the CAN. This can be due to one of the two following reasons:

Reason 1: The utterance cannot lead to the creation of a commitment, a positioning action and/or an argumentation relation. This is false by definition, since every utterance of a conversation is interpreted in our approach as the creation of, or the positioning on, a commitment or an argument (see Chapter 5).

Reason 2: The positioning action reflected by uj cannot be represented by one of the functions of the CAN (i.e. the functions F, FEΣΣ, FΩ, FAΣΩ, FAΩΩ, FEΩΣ). This is false because it is possible to take position, by nesting n times, on a positioning action or on an argumentation relation. The reason is that a positioning action of any order is always represented by the Cartesian product Ω × T.

Let us illustrate this last point using Figure 6.11.

Let Ω be the following set Ω = { Ω0 , ..., Ωm }. Using the definition of the function FΩ we have:

FΩ ( Ag1 , SC ( Ag1 , Ag2 , t0 , φ0 )) = ( Ω0 , t1 )

Using the definition of the function FAΩΩ we obtain:

FAΩΩ ( Ag2 , FΩ ( Ag1 , SC ( Ag1 , Ag2 , t0 , φ0 ))) = FAΩΩ ( Ag2 , Ω0 , t1 ) = ( Ω1 , t2 )

Therefore, by nesting once more, we obtain:

FAΩΩ ( Ag1 , FAΩΩ ( Ag2 , FΩ ( Ag1 , SC ( Ag1 , Ag2 , t0 , φ0 )))) = FAΩΩ ( Ag1 , Ω1 , t2 ) = ( Ω2 , t3 )

In the same way, one can show that it is always possible to define an argumentation relation on any argumentation relation created previously, considering that an argumentation relation of any order is represented by the Cartesian product: Σ × T .
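The nesting argument of Reason 2 can be sketched in Python. The function names loosely mirror FΩ and FAΩΩ, but their signatures and the clock mechanism are assumptions for illustration: each positioning step yields a pair in Ω × T, so a position taken on a position is again such a pair and can itself be positioned on, to any depth n.

```python
def F_Omega(agent, commitment, clock):
    """Initial positioning of `agent` on a commitment -> a pair in Omega x T."""
    return ((agent, 'position', commitment), next(clock))

def F_A_Omega_Omega(agent, prior_position, clock):
    """Positioning of `agent` on a prior positioning action -> again Omega x T."""
    return ((agent, 'position', prior_position), next(clock))

def nest(n, agents, commitment):
    """Take position on a commitment, then nest positionings n more times."""
    clock = iter(range(1, n + 2))              # the time units t1, t2, ...
    pos = F_Omega(agents[0], commitment, clock)
    for k in range(1, n + 1):
        # Agents alternate, each positioning on the previous position.
        pos = F_A_Omega_Omega(agents[k % len(agents)], pos, clock)
    return pos

sc = ('Ag1', 'Ag2', 0, 'phi0')                 # SC(Ag1, Ag2, t0, phi0)
action, t = nest(3, ['Ag1', 'Ag2'], sc)        # three nested positionings
```

However deep the nesting, the result remains a (action, time) pair, which is exactly why Reason 2 fails.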

Therefore, the starting assumption is false. Thus, we proved that any coherent conversation can be represented by a CAN.

Proposition 6.4. For any coherent conversation, there exists a unique commitment and argument network that represents it.

In other words, for any coherent conversation, there is one and only one CAN which represents it.

Proof

The proof of this proposition is based on Proposition 6.3 and on the fact that any speech act can be interpreted in our approach, in a unique way, as an action performed on a commitment or on an argument. Because any action is represented by one and only one function, the CAN representing a conversation is unique.
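The uniqueness argument can be pictured as a single-valued dispatch table from speech-act interpretations to CAN functions. The function names are those listed in the proof of Proposition 6.3; the move labels paired with them here are illustrative assumptions, not definitions from the thesis.

```python
# Each move kind maps to exactly one CAN function, so translating a
# conversation into a network is deterministic: the map is single-valued
# and total on the interpretable moves.
INTERPRETATION = {
    'create_commitment':        'F',
    'position_on_commitment':   'F_Omega',
    'position_on_positioning':  'F_A_Omega_Omega',
    'position_on_arg_relation': 'F_A_Sigma_Omega',
    'argue_on_content':         'F_E_Sigma_Sigma',
    'argue_on_positioning':     'F_E_Omega_Sigma',
}

def interpret(move_kind):
    """Return the unique CAN function handling this kind of move."""
    return INTERPRETATION[move_kind]
```

Because `interpret` is an ordinary dictionary lookup, no move can ever be handled by two different functions, mirroring the uniqueness claim.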

In Section 6.2 we presented the structure of the CAN formalism, and we illustrated its construction process through the example of Section 6.3. In these two sections, we only highlighted the fact that the CAN formalism can be used to represent conversations. In Proposition 6.3, however, we showed in general that the CAN formalism is able to represent any coherent conversation, in particular by showing the falseness of Reason 2. The proposition is thus not a "petitio principii", since the nesting property (see Reason 2) is not an assumption in our proof. Our proof is rather a proof by construction, because we showed that we can build a CAN for any coherent conversation.

This theoretical result is of great utility because it offers a formal framework to represent different types of conversations, for example, the conversation types proposed by Walton and Krabbe (1995).

KQML was the first standard proposed to specify communication between agents (Finin et al., 1995). More recently, FIPA (1997, 1999, 2001a) proposed a new standard called FIPA-ACL. KQML and FIPA-ACL are both based on the mental approach. These two languages use protocols such as those proposed by Pitt and Mamdani (2000), the Contract Net protocol (Smith, 1980) and NetBill (Cox et al., 1995). These protocols define, in a fixed way, which sequences of moves are conventionally expected in a conversation. Protocols are often modeled as finite state machines that represent sequences of states and transitions, and they are usually too rigid to model conversations between autonomous agents. In this context, the CAN formalism can capture the action sequences described by a protocol, but in a more flexible way. Contrary to protocols, agents using the CAN formalism do not follow a pre-planned sequence, but reason in terms of commitments, arguments and the relations between these two types of elements. In order to select the next communicative act to be performed, an agent reasons on the current state of the conversation using its argumentation system. This state is represented by the CAN framework and by the notions of commitment state and argument state. In addition, protocols are only semi-formally specified, whereas the CAN framework is formally specified using an approach based on action and argumentation theories. These formal foundations allow us to prove interesting properties such as Propositions 6.3 and 6.4. They also enable us to define a formal semantics and a verification method for agent communication using a model checking technique. Chapters 7 and 8 detail these two issues.
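The rigidity of a protocol modeled as a finite state machine can be sketched as follows. The states and moves are hypothetical, loosely in the spirit of a Contract-Net-like exchange; the code does not model any specific FIPA protocol.

```python
# Transition table of a tiny fixed protocol: (state, move) -> next state.
FSM = {
    ('start',    'propose'): 'proposed',
    ('proposed', 'accept'):  'done',
    ('proposed', 'reject'):  'done',
}

def run(moves):
    """Return True iff the whole move sequence is allowed by the protocol."""
    state = 'start'
    for m in moves:
        if (state, m) not in FSM:
            return False      # any unanticipated move is simply illegal
        state = FSM[(state, m)]
    return state == 'done'
```

An agent that wants to challenge a proposal before deciding cannot do so here: `run(['propose', 'challenge', 'accept'])` fails, whereas in the CAN approach the challenge would simply be one more positioning action, selected at run time by the agent's argumentation system.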

Several researchers have proposed dialogue games in order to offer more flexibility (Dastani et al., 2000), (Maudet and Chaib-draa, 2002), (McBurney et al., 2002). The CAN formalism can be used to represent these dialogue games and to illustrate how various games can be combined in order to build a complete conversation. In Chapter 9, we present a persuasion dialogue game protocol specified using our approach. Additionally, the CAN framework can be used not only as a specification tool, but also as a means that agents can use in order to participate effectively in coherent conversations.

Singh and Colombetti propose a commitment-based approach that emphasizes the importance of the social aspect of communication (Colombetti, 2000), (Singh, 1998, 2000). Singh’s and Colombetti’s work focused on the definition of a semantics for speech acts. When considering the conversational aspect, Singh simply proposed enhancing the classical protocols (like those used in FIPA) with commitments in order to ensure the compliance of the agents’ behavior with the protocol. A participating agent can maintain a record of the commitments being created and modified. From these, the agent can determine the compliance of the other agents with the given protocol. However, this approach is still not flexible, and it does not indicate how agents can select the communicative acts. Colombetti proposed general conversational principles from which the structure of well-formed conversations should be derived. However, the way of implementing these principles is not specified, and the management of commitments is only partially addressed in this approach.

On the basis of Singh’s and Colombetti’s proposals, Yolum and Singh (2002) developed a technique for specifying protocols in which the content of actions is captured through agents’ commitments. They provide operations and reasoning rules to capture the evolution of commitments. Using these rules, agents can reason about their actions. Chopra and Singh (2004) proposed a commitment-based formalism called non-monotonic commitment machines for representing multi-agent interaction protocols. This formalism specifies rules using nonmonotonic causal logic. These rules model the changes in the state of a protocol resulting from the performance of actions. The nonmonotonic causal logic in this formalism is used only to reason about actions in terms of whether an action can be the cause of another action. However, how agents can select actions using this reasoning mechanism is not addressed. In addition, the relation between this reasoning and the private mental states of agents is not specified. In a similar way, Fornara and Colombetti (2003) proposed a method to define interaction protocols. This method is based on the specification of an interaction diagram (ID) specifying which actions can be performed under given conditions. The advantage of these approaches is that they are verifiable because they are based on public notions. They also allow us to represent the interaction dynamics through the allowed operations. Like these proposals, our approach and our CAN formalism are based on commitments. However, our approach uses an argumentation theory which is more general than the nonmonotonic causal logic used in (Chopra and Singh, 2004). This is because, in our approach, agents can reason about commitments, commitment contents, and positioning actions in order to decide on the next act to be performed. This argumentation-based reasoning uses both the agents’ mental states and the current state of the conversation.
Our approach explicitly specifies how agents handle their commitments and how they take positions on other agents’ commitments by using arguments. In addition, the operations we use in our pragmatic approach are different from the operations used in (Fornara and Colombetti, 2003), (Chopra and Singh, 2004), (Yolum and Singh, 2002). Finally, unlike the other formalisms, the CAN formalism can be used both to assist agents to communicate in a coherent way by representing the evolution of the conversation and to specify flexible protocols using, for example, the dialogue game approach.

Amgoud and her colleagues (2000a, 2000b, 2001) proposed to model dialogues using an argumentative approach and formal dialectics. Using MacKenzie’s dialectical system (1979), they defined a number of dialogue rules and update rules for the different types of locutions supported by their dialogue model. These locutions are: assert, accept, question, challenge, request, promise and refuse. Dialogue rules define the protocol, while update rules capture the effect of the speech acts on the state of the dialogue. To reflect the dialogue dynamics, they use the concept of a commitment store. Each agent has its own commitment store, accessible by all the other agents. These commitment stores contain only the moves which were performed; therefore, they reflect only the dialogue history. In the same way, Parsons et al. (2003), McBurney (2002) and Sadri et al. (2001) proposed protocols based on an argumentative approach. These protocols are based on Walton and Krabbe’s classification of dialogues and on formal dialectics. In these protocols, agents can argue about the truth of propositions: they can communicate both propositional statements and arguments about these statements. These protocols have the advantage of taking into account the capacity of agents to reason, as well as their attitudes (confident, careful, etc.). Semantically, these protocols are specified by defining pre- and post-conditions for each locution. The main difference between these proposals and our work is that our approach formalizes a social aspect of agent interaction (represented by the notion of social commitments) and its relation to agent reasoning using an argumentation theory. Thus, our approach is a hybrid one, based on both commitments and arguments.
Another important difference is that argumentation-based protocols (McBurney, 2002), (McBurney et al., 2002), (Parsons et al., 2003) use moves from formal dialectics, whereas our approach uses an action theory to specify agents’ speech acts as actions that these agents apply to commitments and to arguments. The semantics of these actions is defined in Chapter 7 using dynamic logic. By using these actions we can capture not only the locutions used in these protocols, but also the argumentation actions represented in our framework by the attack, defense, justify and contradict actions. In addition, in our approach, the dynamics is reflected not only by the connectedness of the commitments resulting from the performed speech acts, but also by the concepts of commitment state, commitment content state and argument state. The CAN formalism illustrates this dynamics more clearly in terms of actions on commitments and arguments. Moreover, unlike the CAN formalism, the notion of commitment store does not make it possible to distinguish the argumentation phases from the other phases, and does not allow us to illustrate the positioning of an agent on another agent’s action.
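The contrast with MacKenzie-style commitment stores can be sketched in Python. The class and method names are assumptions; the point is only that such a store accumulates a flat history of moves, with no explicit positioning structure or argumentation relations.

```python
class CommitmentStore:
    """A flat, publicly readable record of the moves an agent has performed."""

    def __init__(self, owner):
        self.owner = owner
        self.moves = []

    def record(self, locution, content):
        self.moves.append((locution, content))

store = CommitmentStore('Ag1')
store.record('assert', 'phi0')
store.record('challenge', 'phi1')
# The store now reflects the dialogue history, but nothing marks 'challenge'
# as a positioning on another agent's action, nor links phi1 to phi0 by an
# argumentation relation -- precisely the structure a CAN makes explicit.
```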

Reed (1998) introduced the notion of dialogue frame as a model of inter-agent communication. He used this notion to represent the dialogue types defined by Walton and Krabbe (1995): persuasion, negotiation, investigation, deliberation and information seeking. Each type is focused on a set Δ ∈ { B, C, P }, where B is a set of agent’s beliefs, C a set of agent’s contracts, and P a set of agent’s plans.

Formally, a dialogue frame is a 4-tuple:

F = ⟨ t , Δ , τ , ⟨ u0x0y0 , u1x1y1 , ... ⟩ ⟩

where t is the type of the dialogue frame, Δ is the set of beliefs, contracts or plans, τ is the topic of the dialogue frame, x0 and y0 are the interlocutors, and ujxjyj refers to the jth utterance occurring in a dialogue between agents xj and yj such that (xj = yj+1 and yj = xj+1). A dialogue frame is of a particular type (t, Δ) and focused on a particular topic (τ). For instance, a persuasion dialogue will be focused on a particular belief, a deliberation on a plan, and so on. Reed’s approach makes it possible to illustrate the conversation dynamics only in terms of sequences of utterances. As an external representation, the CAN formalism is more complete than the concept of dialogue frame. In the CAN formalism, the dynamics is reflected by the actions that agents perform on commitments and arguments and by the argumentation relations existing between these commitments and arguments. The sequence of utterances is captured in our framework by the set T of time units that we associate with the various actions. In addition to being a means to analyze conversations, the CAN formalism provides agents with a means that enables them to participate in coherent conversations and to select their future moves. Like dialogue frames, our formalism can represent any dialogue type. In Chapter 9, we present the example of the persuasion dialogue.
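A dialogue frame, as described above, can be sketched as a small data structure. The field names are illustrative assumptions; the alternation check encodes the turn-taking constraint xj = yj+1 and yj = xj+1.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class DialogueFrame:
    t: str                                        # persuasion, negotiation, ...
    delta: str                                    # focus set: 'B', 'C' or 'P'
    tau: str                                      # topic of the frame
    utterances: Tuple[Tuple[str, str, str], ...]  # (u_j, x_j, y_j) in order

    def alternates(self):
        """Turn-taking constraint: x_j = y_{j+1} and y_j = x_{j+1}."""
        u = self.utterances
        return all(u[j][1] == u[j + 1][2] and u[j][2] == u[j + 1][1]
                   for j in range(len(u) - 1))

# A persuasion frame over a belief 'phi' with two alternating utterances:
frame = DialogueFrame('persuasion', 'B', 'phi',
                      (('u0', 'Ag1', 'Ag2'), ('u1', 'Ag2', 'Ag1')))
```

Note that the frame records only the utterance sequence: unlike a CAN, it carries no actions on commitments and no argumentation relations.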



[4] To get this graph, we use the rule:

p⇒q ≡ ¬(p∧¬q), with p = ¬("there is a disease that appears at birth") and q = ¬("this disease is genetic").

Note that in the formula, *x is a mark of coreference which appears in the referent part of a concept.
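The propositional equivalence used in footnote [4] can be verified mechanically; the following Python snippet (purely illustrative) checks p ⇒ q ≡ ¬(p ∧ ¬q) over all truth assignments.

```python
from itertools import product

def implies(p, q):
    """Material implication p => q."""
    return (not p) or q

# The rewriting rule of footnote [4]: p => q is equivalent to not(p and not q).
assert all(implies(p, q) == (not (p and (not q)))
           for p, q in product([False, True], repeat=2))
```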

© Jamal Bentahar, 2005