Collection Mémoires et thèses électroniques

Chapter 9 Application: Specifying and Implementing a Persuasion Dialogue Game Protocol


In this chapter, we present an application of our pragmatic approach: a new persuasion dialogue game protocol for agent communication specified using this approach. We show how this protocol is modeled by the CAN framework. Our dialogue game protocol is specified by indicating its entry conditions, its dynamics and its exit conditions. In order to solve the problem of the acceptance of arguments, the protocol integrates the concept of agents’ trustworthiness in its specification. The chapter proposes a set of algorithms for the implementation of the persuasion protocol and discusses their termination, complexity and correctness. The chapter also addresses the implementation of our protocol using logic programming and an agent-oriented platform.

Research in agent communication protocols has received much attention in recent years. Protocols are means of achieving meaningful interactions. In multi-agent systems (MAS), agents use protocols to guide their interactions with each other. Protocols describe the allowed communicative acts that agents can perform when conversing. These protocols specify the rules governing a dialogue between agents in MAS.

Protocols for multi-agent interaction need to be flexible because of the open and dynamic nature of MAS. Traditionally, these protocols are specified as finite state machines or Petri nets without taking into account the agents’ autonomy. Therefore, these protocols are not flexible enough to be used in open MAS (Maudet and Chaib-draa, 2002). To solve this problem, several researchers proposed protocols using dialogue games (Dastani et al., 2000) (Dignum et al., 2001) (Maudet and Chaib-draa, 2002) (McBurney and Parsons, 2002) (see Chapter 3 for more details). Dialogue games are interactions between players, in which each player moves by performing utterances according to a pre-defined set of rules (McBurney and Parsons, 2002). The flexibility is achieved by combining different games to construct complete and more complex protocols.

In this chapter, we propose a persuasion protocol specified using a set of dialogue games. We formalize these dialogue games as a set of conversation policies. Conversation policies are declarative specifications that govern communication between autonomous agents (Greaves et al., 2000). Indeed, protocols specified using, for example, finite state machines are not flexible in the sense that agents must respect the whole protocol from the beginning to the end without reasoning about them. Thus, we propose to specify these protocols by small conversation policies that can be logically put together using a combination of dialogue games.

On the other hand, the protocols described in the literature are often specified by pre/post conditions. These protocols often neglect the decision-making process that allows agents to accept or to refuse an utterance. The protocols based on formal dialectics (Elvang-Goransson et al., 1993), (Prakken, 2001), (Amgoud et al., 2000a, 2000b) use argumentation as a way of expressing decision-making. However, argumentation alone is not sufficient to properly solve a decision-making problem. We think that other social elements such as agents’ trustworthiness must also be taken into account.

The contribution of this chapter is the proposition of a new approach for specifying protocols for agent communication. A new persuasion dialogue game protocol is specified and implemented following this approach. This protocol is modeled using our pragmatic approach based on commitments and arguments. It is flexible in the sense that it is specified by small conversation policies that can be combined and in the sense that agents can reason about this protocol using their argumentation systems and the trustworthiness notion. The algorithms implementing this protocol are specified using the CAN framework. This protocol is characterized by the fact that it integrates the agents’ trustworthiness as a component of the decision-making process. Indeed, this chapter presents three main results:

1- A new formal language for specifying a persuasion dialogue game protocol as a combination of conversation policies.

2- A termination proof of the protocol based on the tableau method described in Chapter 8.

3- An implementation of the specification using an agent-oriented and logic programming framework.

The rest of this chapter is organized as follows. In Section 9.2, we address the specification of our persuasion protocol. We present the protocol form, the specification of each dialogue game and the protocol dynamics. We also present the different algorithms implementing these dialogue games, develop a termination proof, and discuss the correctness and complexity analysis. In Section 9.3, we highlight the importance of agents’ trustworthiness and present our model of this trustworthiness. In Section 9.4, we describe some issues in the implementation of the trustworthiness model and dialogue games. In Sections 9.5, 9.6, and 9.7, we compare our protocol to related work, we discuss the flexibility of this protocol, and we conclude.

According to the classification proposed by Walton and Krabbe (1995), each type of dialogue has an initial situation and the goal of the dialogue is to change this situation in a particular way. Figure 9.1 illustrates the initial situation as well as the goal of the persuasion dialogue.

In the same context, Vanderveken (2001) proposed a logic of discourse in which there are only four possible discursive goals that speakers can attempt to achieve by conversing. These goals are: descriptive, deliberative, declaratory and expressive goals. Persuasion dialogue is a sub-type of the dialogue types having a descriptive goal. In his typology, Vanderveken argued that each dialogue type with a discursive goal has a mode of achievement of the discursive goal and preparatory conditions. The mode of achievement imposes a certain sequence of speech acts. For a persuasion dialogue, a certain sequence of defense utterances, questions and answers is needed for the successful implementation of such a dialogue. Preparatory conditions determine a structured set of presuppositions related to the discursive goal. The persuasion dialogue has the preparatory conditions that there is a conflict between the agents’ points of view and that each agent has the capacity to defend its point of view.

In addition, in the domain of artificial intelligence and law, many computational and logical models of argument and debate, and of reasoning with conflicting information have been proposed (Prakken, 1997), (Prakken and Sartor, 1998), (Bench-Capon et al., 2003). Prakken and Sartor (1998) introduced a dialectical proof theory for an argumentation framework. A proof of a formula takes the form of a dialogue tree, in which each branch of the tree is a dialogue and the root of the tree is an argument for the formula. The idea is that every move in a dialogue consists of an argument based on the input theory, where each stated argument attacks the last move of the opponent in a way that meets the player’s burden of proof.

Our persuasion protocol is defined by specifying its entry conditions, its exit conditions, and its dynamics. Entry conditions correspond to the initial situation of the dialogue and to the preparatory conditions. Exit conditions correspond to the final situation that makes it possible to determine whether the dialogue goal is achieved or not. The dynamics specifies the different types of actions that can be performed by agents so that each agent can achieve its goal. The dynamics corresponds to the mode of achievement of the discursive goal. It also corresponds to the dialectical proof theory where the root is the persuasion subject. The dynamics is specified by a set of initiative / reactive dialogue games. An initiative game involves creating a new commitment. A reactive game consists in taking position on an existing commitment (acceptance, refusal, challenge, defense, etc.).

Our persuasion protocol is specified as a set of initiative / reactive dialogue games, each formalized as a combination of conversation policies. In accordance with our pragmatic approach (see Chapters 5 and 6), the game moves are considered as actions that agents apply to commitments, to their contents and to arguments. A conversation policy is specified as follows:

This specification indicates that if an agent Ag1 performs the action Action_Ag1 and the condition Cond is satisfied, then the interlocutor Ag2 will perform the action Action_Ag2 afterwards. The condition Cond is expressed in terms of the possibility of generating an argument from the agent’s argumentation system and in terms of the interlocutor’s trustworthiness.

Before introducing the formal notation used in our specification, we note that we distinguish between the arguments an agent has (private arguments) and the arguments it uses in the conversation (public arguments). We introduce the following sets:

Support ( Ag , p ) = { p’ | p’ ⊢ p }

Create_Support ( Ag1 , SC ( Ag1 , Ag2 , t , p )) = { SC ( Ag1 , Ag2 , tx , px ) | px ⊢ p }

Support ( Ag , p ) is the set of Ag ’s private arguments supporting p .

Create_Support ( Ag1 , SC ( Ag1 , Ag2 , t , p )) is the set of commitments created by agent Ag1 to support the content of SC ( Ag1 , Ag2 , t , p ). This set is closed under the support relation, i.e., the commitments supporting the content of any commitment of the set also belong to the set.

We use the notation p ∈ Arg_Sys ( Ag1 ) to denote the fact that a propositional formula p can be generated from the argumentation system of Ag1 , denoted Arg_Sys ( Ag1 ). The formula ¬( p ∈ Arg_Sys ( Ag1 )) indicates that p cannot be generated from Ag1 ’s argumentation system. A propositional formula p can be generated from an agent’s argumentation system if this agent can find an argument that supports p . To simplify the formalism, we use the notation Act’ ( Agx , SC ( Agi , Agj , t0 , p )) to indicate the action that agent Agx performs on the commitment SC ( Agi , Agj , t0 , p ) or on its content ( Act’ ∈ { Create , Withdraw , Accept-content , Challenge-content , Refuse-content }). For the actions related to the argumentation relations, we write Act-Arg ( Agx , [ SC ( Agn , Agm , t1 , q )], SC ( Agi , Agj , t0 , p )). This notation indicates that Agx defends (resp. attacks or justifies) the content of SC ( Agi , Agj , t0 , p ) by the content of SC ( Agn , Agm , t1 , q ) ( Act-Arg ∈ { Defend-content , Attack-content , Justify-content }). The commitment written between square brackets [ ] is the support of the argument. In a general way, we use the notation Act’ ( Agx , S ) to indicate the action that Agx performs on the set of commitments S or on their contents, and the notation Act-Arg ( Agx , [ S ], SC ( Agi , Agj , t0 , p )) to indicate the argumentation-related action that Agx performs on the content of SC ( Agi , Agj , t0 , p ) using the contents of S as support. We also introduce the notation Act-Arg ( Agx , [ S ], S’ ) to indicate that Agx performs an argumentation-related action on the contents of a set of commitments S’ using the contents of S as supports.
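The notation above can be sketched as plain data structures. The following Python fragment is an illustration, not part of the thesis; all names are ours, and an agent's knowledge base is represented as a set of arguments (premises, conclusion):

```python
# Illustrative sketch of the chapter's notation (hypothetical names).
from dataclasses import dataclass


@dataclass(frozen=True)
class SC:
    """Social commitment SC(debtor, creditor, time, content)."""
    debtor: str
    creditor: str
    time: int
    content: str


def support(kb, p):
    """Support(Ag, p): the private arguments of Ag whose conclusion is p."""
    return {(premises, concl) for (premises, concl) in kb if concl == p}


def can_generate(kb, p):
    """p ∈ Arg_Sys(Ag): p is generable iff the agent has an argument for p."""
    return bool(support(kb, p))
```

For instance, with a knowledge base containing the single argument ([q, r], p), can_generate yields True for p and False for q.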

We distinguish two types of dialogue games: entry game and chaining games . The entry game allows the two agents to open the persuasion dialogue. It corresponds to the entry conditions. The chaining games make it possible to continue the conversation. The protocol terminates when the exit conditions are satisfied (Figure 9.2).

For this game we distinguish two cases:

Case 1. SC ( Ag1 , Ag2 , tx , p ) ∉ S

In this case, Ag1 justifies the content of its commitment SC ( Ag1 , Ag2 , tx , p ) by creating a set of commitments S . As for the Defend action, Ag2 can accept, challenge and/or attack a subset of S . The specification of this case is given by the following conversation policies ( Specification 4 ):


where the pi are propositional formulae.

a4 = a2 , b4 = b2 , c4 = c2

Case 2. { SC ( Ag1 , Ag2 , tx , p )} = S

In this case, the justification game has the following specification ( Specification 5 ):


a’4 = Ag1 ∈ Trust ( Ag2 , D )

b’4 = Ag1 ∉ Trust ( Ag2 , D )

Trust ( Ag , D ) is the set of agents that Ag considers trustworthy in the domain D . Here we assume that p belongs to the domain D . This aspect will be discussed later.

Ag1 justifies the content of its commitment SC ( Ag1 , Ag2 , tx , p ) by itself (i.e. by p ). This means that p is part of Ag1 ’s knowledge. Only two moves are possible for Ag2 : 1) accept the content of SC ( Ag1 , Ag2 , tx , p ) if Ag1 is a trustworthy agent for Ag2 ( a’4 ), 2) if not, refuse this content ( b’4 ). Ag2 cannot attack this content because it does not have an argument against p . The reason is that Ag1 plays a justification game because Ag2 played a challenge game.

Like the definition of the Defend action, we define the Justify action as follows:

This means that Ag1 creates the set S of commitments to support the commitment SC ( Ag1 , Ag2 , tx , p ).

The persuasion dynamics is described by the chaining of a finite set of dialogue games: acceptance move, refusal move, defense, challenge, attack and justification games. These games can be combined in a sequential and parallel way (Figure 9.3).

After Ag1 ’s defense game at moment t1 , Ag2 can, at moment t2 , accept a part of the arguments presented by Ag1 , challenge another part, and/or attack a third part. These games are played in parallel. At moment t3 , Ag1 answers the challenge game by playing a justification game and answers the attack game by playing an acceptance move, a challenge game, another attack game, and/or a final refusal move. The persuasion dynamics continues until the exit conditions become satisfied (final acceptance or a refusal). From our specifications, it follows that our protocol plays the role of the dialectical proof theory of the argumentation system.

Indeed, our persuasion protocol can be described by a BNF grammar. To do this, we first introduce the following definitions:

where: ε is the empty dialogue game, and "//" is the parallelization symbol. G1 // G2 means that an agent can play the two games in parallel.

The persuasion protocol can be defined as follows:

where ";" is the sequencing symbol.


In this section we present a simple example dialogue that illustrates some notions presented in this chapter.

This example was also studied in (Amgoud and Maudet, 2002) in the context of strategic considerations for argumentative agents. The letters on the left of the utterances are the propositional formulae that represent the propositional contents. Agent Ag1 ’s KB contains: ([ q , r ], p ), ([ s , t ], q ) and ([ u ], u ). Agent Ag2 ’s KB contains: ([¬ t ], ¬ p ), ([ u , v ], ¬ t ), ([ u ], u ) and ([ v ], v ). The combination of the dialogue games that allows us to describe the persuasion dialogue dynamics is as follows:

Ag1 creates SC ( Ag1 , Ag2 , t0 , p ) to achieve the goal of persuading Ag2 that p is true. Ag1 can create this commitment because it has an argument for p . Ag2 refuses SC ( Ag1 , Ag2 , t0 , p ) because it has an argument against p . Thus, the entry game is played and the persuasion dialogue is opened. Ag1 defends SC ( Ag1 , Ag2 , t0 , p ) by creating SC ( Ag1 , Ag2 , t2 , q ) and SC ( Ag1 , Ag2 , t3 , r ). Ag2 accepts SC ( Ag1 , Ag2 , t3 , r ) because it has an argument for r and challenges SC ( Ag1 , Ag2 , t2 , q ) because it has no argument for q or against q . Ag1 plays a justification game to justify SC ( Ag1 , Ag2 , t2 , q ) by creating SC ( Ag1 , Ag2 , t4 , s ) and SC ( Ag1 , Ag2 , t5 , t ). Ag2 accepts the content of SC ( Ag1 , Ag2 , t4 , s ) and attacks the content of SC ( Ag1 , Ag2 , t5 , t ) by creating SC ( Ag2 , Ag1 , t6 , u ) and SC ( Ag2 , Ag1 , t7 , v ). Finally, Ag1 plays acceptance moves, because it has an argument for u and no argument against v , and the dialogue terminates. Indeed, before accepting v , Ag1 challenges it and Ag2 defends it by itself (i.e. ([ SC ( Ag2 , Ag1 , t7 , v ), SC ( Ag2 , Ag1 , t7 , v )])). Then, Ag1 accepts this argument because it considers Ag2 trustworthy (see Figure 9.9 Section 9.4). Ag1 updates its KB by removing the attacked argument and including the new argument. Figure 9.12 (Section 9.4) shows a screen shot of this example generated by our prototype. In this figure, commitments are described only by their contents, and the identifiers of the two agents are the first two arguments of the exchanged communicative actions. The contents are specified using a predicate language that the two agents share (the ontology).
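The reactive move choices made in this example can be replayed with a small sketch (hypothetical code, not the thesis implementation; '~x' stands for ¬x, and the two knowledge bases are transcribed from above):

```python
# Hypothetical replay of the agents' reactive move selection in the example.
def neg(p):
    """'~x' denotes the negation of x."""
    return p[1:] if p.startswith('~') else '~' + p


def has_arg(kb, p):
    """True iff the knowledge base contains an argument concluding p."""
    return any(concl == p for (_, concl) in kb)


def react(kb, p):
    """Choose a reactive move on a commitment whose content is p."""
    if has_arg(kb, neg(p)):
        return 'attack/refuse'   # an argument against p exists
    if has_arg(kb, p):
        return 'accept'          # an argument for p exists
    return 'challenge'           # no argument for or against p


ag1 = {(('q', 'r'), 'p'), (('s', 't'), 'q'), (('u',), 'u')}
ag2 = {(('~t',), '~p'), (('u', 'v'), '~t'), (('u',), 'u'), (('v',), 'v')}
```

Here react(ag2, 'p') yields 'attack/refuse' (opening the entry game), react(ag2, 'q') yields 'challenge', and react(ag1, 'u') and react(ag1, 'v') yield 'accept' and 'challenge' respectively, matching the moves in the dialogue above.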

The general algorithm representing our persuasion dialogue game protocol is given by Algorithm 9.1. Part A of Algorithm 9.1 specifies the entry conditions. Part B indicates the exit conditions. The persuasion dynamics (i.e. the sequence of utterances) is given by the function Dynamics . The specification of this function is given by Algorithms 9.2, 9.3, 9.4, 9.5 and 9.6. To simplify these algorithms, we suppose that the support of an argument is composed of only one commitment. In these algorithms, SAg1 indicates the set of arguments of agent Ag1 (i.e. its knowledge base), and S’Ag1 indicates the set of arguments that Ag1 has used in the current dialogue. The set S’Ag1 allows us to avoid using the same arguments several times. These algorithms specify the different dialogue games of our protocol as if-then rules.

Algorithm 9.2 deals with the acceptance (Termination game) and the refusal (Entry game) cases. The acceptance of SC ( Idx , Ag1 , Ag2, p ) makes it possible to solve the conflict and to stop the algorithm. In the refusal case, if Ag1 finds an argument ( r , q ) not yet used for its commitment SC ( Idy , Ag1 , Ag2 , q ), then this agent creates a new commitment SC ( Idz , Ag1 , Ag2 , r ) to defend SC ( Idy , Ag1 , Ag2 , q ). Ag1 updates the set S’Ag1 by adding the argument ( r , q ). Ag1 informs Ag2 about its action using the Send primitive. The Send primitive has the form Send ( Destination , Action ). If Ag1 does not have arguments to defend its commitment, then the conflict cannot be solved because each agent refuses the arguments of the other and the algorithm stops.
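The refusal branch of Algorithm 9.2 can be sketched as follows (an assumed transcription, not the thesis code; supports are single commitments as in the chapter, and the function name is ours):

```python
# Sketch of the refusal branch of Algorithm 9.2 (hypothetical names).
def on_refusal(s_ag1, used, q):
    """Find an unused argument (r, q) defending content q, or give up."""
    for (r, concl) in sorted(s_ag1):
        if concl == q and (r, concl) not in used:
            used.add((r, concl))      # record the argument in S'Ag1
            return ('defend', r)      # i.e. create SC(Idz, Ag1, Ag2, r) and Send it
    return ('stop', None)             # conflict cannot be solved
```

With s_ag1 = {('r', 'q'), ('s', 'q')} and an empty used set, successive calls return ('defend', 'r'), ('defend', 's') and then ('stop', None), mirroring the exhaustion of defenses described above.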

Algorithm 9.3 deals with the Challenge game. Ag1 justifies its commitment if it finds an argument not yet used. As for the refusal case, Ag1 updates S’Ag1 and informs Ag2 about its action. If Ag1 does not find such an argument, then it indicates to Ag2 that the content of the challenged commitment is knowledge that Ag1 believes true by justifying it by itself. The formal definition of the justification relation is the same as the defense relation.

Algorithm 9.4 deals with Ag1 ’s reaction when Ag2 justifies the content of its commitment by itself (case 2 of the Justification game). Trustworthy ( Ag2 , q ) is a boolean function that enables Ag1 to determine whether Ag2 is trustworthy or not. If, according to Ag1 , Ag2 is trustworthy, then Ag1 accepts Ag2 ’s commitment; if not, Ag1 refuses it. In the next section (Section 9.3) we propose a probabilistic model of trustworthiness to determine the value of the Trustworthy ( Ag2 , q ) function.

Algorithm 9.5 deals with the case where Ag2 attacks the support of Ag1 ’s argument (Attack game). Ag1 attacks Ag2 ’s argument if it has a counter-argument that has not already been used; if its counter-arguments have all been used, Ag1 refuses Ag2 ’s argument. If Ag1 can neither attack nor refuse Ag2 ’s argument, then it accepts this argument if it has an argument supporting it, and challenges it if it has neither arguments for it nor counter-arguments against it.

Algorithm 9.6 deals with the case in which the reactive game of Ag2 is a defense of its argument (Defense game) or a justification of its commitment (case 1 of the Justification game). Thus, Ag1 can attack the support of Ag2 ’s argument or its conclusion according to Ag1 ’s arguments. Otherwise, as in Algorithm 9.5, Ag1 accepts or challenges the support of Ag2 ’s argument.
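The move selection of the Attack game (Algorithm 9.5) can be sketched similarly (hypothetical code; used plays the role of S’Ag and '~x' denotes negation):

```python
# Hedged sketch of the move choice in the Attack game (Algorithm 9.5).
def neg(p):
    return p[1:] if p.startswith('~') else '~' + p


def react_to_attack(s_ag, used, q):
    """Choose Ag1's move against an attacking content q."""
    against = [(H, h) for (H, h) in s_ag if h == neg(q)]
    fresh = [a for a in against if a not in used]
    if fresh:
        used.add(fresh[0])
        return 'attack'      # an unused counter-argument exists
    if against:
        return 'refuse'      # counter-arguments exist but were all used
    if any(h == q for (_, h) in s_ag):
        return 'accept'      # an argument for q exists
    return 'challenge'       # no argument for or against q
```

The branch order mirrors the prose: attack with a fresh counter-argument first, refuse when only used counter-arguments remain, otherwise accept or challenge.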

In this section we discuss the termination of our protocol (i.e. the termination of Algorithm 9.1). Informally, to prove the termination of Algorithm 9.1, it is enough to prove that the protocol dynamics always converges to a final acceptance or a final refusal.

According to Algorithms 9.2, 9.3, 9.4, 9.5 and 9.6, the protocol chaining can follow one of the following possibilities:

1- Agent Ag2 accepts all the supports of the initial commitment SC ( Ag1 , Ag2 , tx , p ). Therefore, we have: ( Accept-content, ti ) ∈ FΩ ( Ag2 , SC ( Ag1, Ag2, tx , p )).

2- Agent Ag2 refuses one of the supports of SC ( Idx , Ag1 , Ag2 , p ), and Ag1 does not find an argument to defend this support. Thus, we have: FΩ ( Ag2 , SC ( Idx , Ag1 , Ag2 , p )) = {..., ( Refuse-content , ti )}.

3- The two agents attack each other about a part of the last arguments.

4- Agent Ag2 challenges a part of the arguments presented by Ag1 .

Possibilities 1 and 2 converge respectively to a final acceptance and a final refusal. Possibility 3 converges to a situation where an agent finds an argument ( H , h ) to attack the support of the interlocutor’s argument, but this argument was already used (( H , h ) ∈ S’Ag ). The reason is that the agents’ knowledge bases are finite. In this case, this agent refuses the interlocutor’s argument (Algorithm 9.2). Thus, possibility 3 converges to a final refusal. For the same reason, possibility 4 converges to the situation in which Ag1 justifies a support by itself. In this situation, Ag2 can play only an acceptance move if Ag1 is trustworthy, or a refusal move if not (Algorithm 9.4). Thus, possibility 4 converges to a final acceptance or a final refusal.

Formally, the termination of our dialogue game protocol is stated by the following theorem.

Theorem 9.1 The protocol dynamics always terminates.


To prove this theorem, we use a tableau method (Cleaveland, 1990). The idea is to formalize our specifications (Section 9.2.4) as tableau rules and then to prove the finiteness of the tableau. Tableau rules are written in such a way that premises appear above conclusions. Using a tableau method means that the proofs are conducted in a top-down fashion. For example, specification 2 (defense game) can be expressed by the following rules:

We denote the formulae of our specifications by σ , and we define the set Σ of these formulae. We define an ordering ≺ on Σ and we prove that ≺ has no infinite ascending chains. Intuitively, this relation holds between σ1 and σ2 if it is possible that σ1 is an ancestor of σ2 in some tableau. Before defining this ordering, we introduce some notation: Act* ( Ag , [ S ], S’ ) with Act* ∈ { Act’ , Act-Arg } is a formula. We note that formulae in which there is no support [ S ] can be written as follows: Act* ( Ag , [ ], S’ ). σ [ S ] →R σ [ S’ ] indicates that the tableau rule R has the formula σ [ S ] as premise and the formula σ [ S’ ] as conclusion, with σ [ S ] = Act* ( Ag , [ S ], S’ ). The size | S | is the number of commitments in S .

Intuitively, in order to prove that a tableau system is finite, we need to prove the following:

1- if σ [ S0 ] →R σ [ S1 ] then σ [ S0 ] ≺ σ [ S1 ].

2- ≺ has no infinite ascending chains (i.e. the inverse of ≺ is well-founded).

Property 1 reflects the fact that applying tableau rules results in shorter formulae, and property 2 means that this process has a limit. The proof of 1 proceeds by a case analysis on R . Most cases are straightforward. We consider here the case of R 3. For this rule we have two cases. If | S1 | < | S0 |, then σ [ S0 ] ≺ σ [ S1 ]. If | S1 | ≥ | S0 |, we can apply the rules corresponding to the Attack game specification. The three first rules are straightforward since S2 = ∅. For the last rule, we have the same situation as R 3. Suppose that there is no path in the tableau σ [ S0 ] →R0 σ [ S1 ] →R1 σ [ S2 ] ... →Rn σ [ Sn ] such that | Sn | = 0. This means that i) the number of arguments that agents have is infinite, or that ii) one or several arguments are used several times. However, situation i is not possible because the agents’ knowledge bases SAg are finite sets, and situation ii is not allowed in our protocol because agents cannot use arguments already used (i.e. arguments already in S’Ag ). We note here that the agents’ knowledge bases are updated after each conversation by removing the attacked arguments that cannot be defended and adding the new accepted arguments.

Because the definition of ≺ is based on the size of formulae, and since | S0 | ∈ ℕ (< ∞) and < is well-founded in ℕ, it follows that there is no infinite ascending chain of the form σ [ S0 ] ≺ σ [ S1 ] ≺ ...

We notice that what we proved here is the termination of the protocol run and not the termination of the dialogue. For this reason, this proof uses the protocol specification in terms of the dialogue rules. It is clear that the termination of the protocol run results in the termination of the dialogue.

Correctness . We can formalize the correctness problem of our algorithms as follows: Algorithm 9.1 is correct iff the protocol description based on this algorithm satisfies the protocol specification (i.e. what the protocol must do). The specification can be formalized as a set of claims or properties that must be predefined. The idea is to describe the protocol as a transition system T for a dialogue game protocol as defined in Chapter 8 (Definition 8.2), and to express the specification as logical formulae ψ using our DCTL*CAN logic (see Chapter 7). This formalization enables us to deal with the correctness problem as a model-checking problem, i.e. whether or not T ⊨ ψ. For this purpose we can use the model checking technique that we proposed in Chapter 8.

Because our persuasion dialogue game protocol is specified using our pragmatic approach (Chapters 5 and 6), and dialogue game specifications are described as if then rules, it is easy to translate this protocol to a transition system T for a dialogue game. Transitions are labeled by the different actions that we use in our specifications of dialogue games (i.e. Action_Agi ). The syntax of these actions can be easily translated to the syntax of DCTL*CAN. For example the action:

Defend-content ( Ag1 , [ S ], SC ( Idx , Ag1 , Ag2 , p ))

can be translated to:

Defend-content ( Ag1 , SC ( Idx , Ag1 , Ag2 , p ), p’ )


where p’ denotes the conjunction of the contents of the commitments in S .

Each dialogue game can be described by a fragment of the transition system T as follows: each conversation policy of the form :

can be described by two states s1 and s2 and a transition s1 →Action_Ag2 s2 , where Action_Ag1 is the label of a transition whose target state is s1 . We notice that the condition Cond is omitted. This does not affect the correctness of the protocol, because the conditions are used by agents as a reasoning mechanism about the protocol and do not belong to the protocol itself. Using this procedure, we can describe our persuasion protocol by a transition system for a dialogue game protocol with 11 states and 16 transitions. The initial state s0 is the source state of one transition labeled by the creation action. This transition system has two final states corresponding respectively to the acceptance and the refusal states. Finally, the properties to be verified are derived from the specifications. The properties described in Chapter 8 (Section 8.4.2) are examples of the properties that our protocol must satisfy.

Complexity . The purpose of Algorithm 9.1 is to resolve the initial conflict or to decide after a finite number of moves that the conflict cannot be resolved. Every move is based on the state of SAg and S’Ag because agents must seek arguments or counter-arguments in SAg and S’Ag . If we do not take into account the trustworthiness part of the algorithm, and since | S’Ag | ≤ | SAg |, the time complexity of Algorithm 9.1 is Ο( max (| SAg1 |, | SAg2 |)). The complexity of the trustworthiness part will be discussed in Section 9.3.3.

Several models of trustworthiness have been developed in the context of MAS (Sabater and Sierra, 2002), (Yu and Singh, 2002), (Ramchurn et al., 2003). However, their formulations do not take into account the elements we use in our approach (accepted and refused arguments, satisfied and violated commitments). For this reason, we propose a model that is more appropriate for our protocol. This model has the advantage of being simple and rigorous.

In our model, an agent’s trustworthiness is a probability function defined as follows: for a judging agent Aga and a domain D , TRUST(Agb)Aga ∈ [0, 1].

This function associates to each agent a probability measure representing its trustworthiness in the domain D according to another agent. Let X be a random variable representing an agent’s trustworthiness. To evaluate the trustworthiness of an agent Agb , an agent Aga uses the records of its interactions with Agb . Equation 9.1 indicates how to calculate this trustworthiness as a probability measure (number of successful outcomes / total number of possible outcomes).

TRUST(Agb)Aga = ( Nb_arg(Agb)Aga + Nb_SC(Agb)Aga ) / ( T_Nb_arg(Agb)Aga + T_Nb_SC(Agb)Aga ) ( 9.1 )

Nb_arg(Agb)Aga is the number of Agb ’s arguments that are accepted by Aga .

Nb_SC(Agb)Aga is the number of satisfied commitments for which Agb is the debtor and Aga is the creditor.

T_Nb_arg(Agb)Aga is the total number of Agb ’s arguments towards Aga .

T_Nb_SC(Agb)Aga is the total number of commitments for which Agb is the debtor and Aga is the creditor.

All these commitments and arguments are related to the domain D . The basic idea is that the trust degree of an agent can be induced according to how much information acquired from it has been accepted as belief in the past. Because all the factors of Equation 9.1 relate to the past, these numbers are finite.
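Equation 9.1 can be transcribed directly as a function. This is a sketch under the assumption that the two ratios are pooled into a single frequency, following the "successful outcomes / total possible outcomes" reading given above:

```python
# Sketch of Equation 9.1: local trustworthiness as a relative frequency
# (assumed pooling of accepted arguments and satisfied commitments).
def trust(nb_arg, nb_sc, t_nb_arg, t_nb_sc):
    """(accepted arguments + satisfied commitments) over their totals."""
    return (nb_arg + nb_sc) / (t_nb_arg + t_nb_sc)
```

For example, an agent with 3 accepted arguments out of 4 and 2 satisfied commitments out of 6 would obtain trust(3, 2, 4, 6) = 0.5.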

TRUST(Agb)Aga is the trustworthiness of Agb according to Aga ’s point of view. This trustworthiness is a dynamic value that changes according to the interactions taking place between Aga and Agb . This supposes that Aga knows Agb . If not, or if the number of interactions is not sufficient to determine this trustworthiness, the consultation of other agents becomes necessary.

As proposed in (Abdul-Rahman and Hailes, 2000) (Yu and Singh, 2002), each agent has two kinds of beliefs when evaluating the trustworthiness of another agent: local beliefs and total beliefs. Local beliefs are based on the direct interactions between agents. Total beliefs are based on the combination of the different testimonies of other agents called witnesses . In our model, local beliefs are given by Equation 9.1. Total beliefs require studying how different probability measures offered by witnesses can be combined. We deal with this aspect in the following section.

Let us suppose that an agent Aga wants to evaluate the trustworthiness of an agent Agb with which it has never (or not sufficiently) interacted before. This agent must consult agents that it knows to be trustworthy ( confidence agents ). A trustworthiness threshold w must be fixed. Thus, Agb will be considered trustworthy by Aga iff TRUST(Agb)Aga is greater than or equal to w . Aga attributes a trustworthiness measure to each confidence agent Agi . When it is consulted by Aga , each confidence agent Agi provides a trustworthiness value for Agb if Agi knows Agb . Confidence agents use their local beliefs to calculate this value (Equation 9.1). Thus, the problem consists in evaluating Agb ’s trustworthiness using the trustworthiness values transmitted by confidence agents. Figure 9.4 illustrates this problem.

We notice that this problem cannot be formulated as a problem of conditional probability. Consequently, it is not possible to use Bayes’ theorem or total probability theorem . The reason is that events in our problem are not mutually exclusive, whereas this condition is necessary for these two theorems. Probability values offered by confidence agents are not mutually exclusive since they are provided simultaneously.

To solve this problem we must study the distribution of the random variable X representing the trustworthiness of Agb . Since X takes only two values: 0 (the agent is not trustworthy) or 1 (the agent is trustworthy), variable X follows a Bernoulli distribution ß (1, p ). According to this distribution, we have:

E(X) = p ( 9.2 )

where E(X) is the expectation of the random variable X and p is the probability that the agent is trustworthy. Thus, p is the probability that we seek, and it is enough to estimate the expectation E(X) to find TRUST(Agb)Aga . Since this expectation is a theoretical mean, it must be estimated. To this end, we can use the Central Limit Theorem (CLT) and the law of large numbers . The CLT states that whenever a random sample of size n ( X1,..., Xn ) is taken from any distribution with mean μ and standard deviation σ , the sample mean ( X1 + ... + Xn )/ n is approximately normally distributed with mean μ and standard deviation σ/√n . According to the law of large numbers, the expectation can therefore be estimated by the (weighted) arithmetic mean.
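As an aside, this estimation principle can be checked numerically. The following sketch (ours, not part of the protocol; the function name is illustrative) draws Bernoulli samples and shows that their arithmetic mean approaches the expectation p:

```python
import random

def estimate_p(p, n, seed=42):
    """Estimate the Bernoulli parameter p by the sample mean of n draws.

    By the law of large numbers, (X1 + ... + Xn) / n converges to E(X) = p.
    """
    rng = random.Random(seed)
    draws = [1 if rng.random() < p else 0 for _ in range(n)]
    return sum(draws) / n

# With 100,000 draws the sample mean is close to the true p = 0.7.
print(estimate_p(0.7, 100_000))
```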

Our estimator is the weighted average of n independent random variables Xi that correspond to Agb ’s trustworthiness from the point of view of each confidence agent Agi . These random variables follow the same distribution: the Bernoulli distribution. They are also independent, because the probability that Agb is trustworthy according to an agent Agt is independent of the probability that Agb is trustworthy according to another agent Agr . Consequently, this weighted average approximately follows a normal distribution whose mean is the weighted average of the expectations of the independent random variables Xi . The estimation of the expectation E(X) is given by Equation 9.3.

TRUST(Agb)Aga ≈ ( Σi=1..n TRUST(Agi)Aga · TRUST(Agb)Agi ) / ( Σi=1..n TRUST(Agi)Aga ) ( 9.3 )

This weighted mean represents an estimation of TRUST(Agb)Aga .

Equation 9.3 does not take into account the number of interactions between the confidence agents and Agb . This number is an important factor because it makes it possible to favor information coming from agents that know Agb better. Equation 9.4 gives an estimation of TRUST(Agb)Aga that takes this factor into account, under the assumption that all confidence agents have the same trustworthiness.

TRUST(Agb)Aga ≈ ( Σi=1..n N(Agi)Agb · TRUST(Agb)Agi ) / ( Σi=1..n N(Agi)Agb ) ( 9.4 )

where N(Agi)Agb indicates the number of interactions between a confidence agent Agi and Agb . This number can be obtained from the total number of Agb ’s commitments and arguments.

The combination of Equations 9.3 and 9.4 gives a good estimation of TRUST(Agb)Aga (Equation 9.5) that takes into account the three most important factors: (1) the trustworthiness of the confidence agents from the point of view of Aga ; (2) Agb ’s trustworthiness from the point of view of the confidence agents; (3) the number of interactions between the confidence agents and Agb , which makes it possible to favor information coming from agents that know Agb better.

M = ( Σi=1..n TRUST(Agi)Aga · N(Agi)Agb · TRUST(Agb)Agi ) / ( Σi=1..n TRUST(Agi)Aga · N(Agi)Agb ) ( 9.5 )

This equation shows how trust can be obtained by merging the trustworthiness values transmitted by the mediators. This merging method weights each trustworthiness value by its relevance, rather than treating them all equally. The function Trustworthy ( Ag2 ) of Algorithm 9.4 can be specified as follows:

If M ≥ w Then Return true Else Return false .

According to Equation 9.5, M is a weighted mean, so we have:

mini=1..n TRUST(Agb)Agi ≤ M ≤ maxi=1..n TRUST(Agb)Agi

Consequently, the well-known lottery paradox of Kyburg can never happen: if all the trustworthiness values transmitted by the mediators are below the threshold w , then M < w and Aga will not trust Agb .
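To make the merging step concrete, here is a minimal sketch (ours; the tuple layout and function name are assumptions, not the thesis’s code) of the weighted mean of Equation 9.5 together with the Trustworthy threshold test:

```python
def merge_trust(reports, threshold):
    """Merge trustworthiness reports about a target agent.

    Each report is a triple (t_i, n_i, t_b): the source agent's trust in
    confidence agent Agi, the number of interactions Agi had with the
    target, and the trust value Agi reports for the target.  M is a
    weighted mean, so it always lies between the smallest and the largest
    reported value (hence the lottery paradox cannot occur).
    """
    num = sum(t_i * n_i * t_b for t_i, n_i, t_b in reports)
    den = sum(t_i * n_i for t_i, n_i, _ in reports)
    m = num / den
    return m, m >= threshold  # the Trustworthy(Ag2) test

m, trustworthy = merge_trust([(0.9, 10, 0.8), (0.6, 5, 0.5)], threshold=0.7)
# m = (0.9*10*0.8 + 0.6*5*0.5) / (0.9*10 + 0.6*5) = 8.7 / 12 = 0.725
```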

To calculate M , we need the trustworthiness of other agents. A practical solution consists in building a trust graph like the TrustNet proposed by Yu and Singh (2002).

In the previous section (Section 9.3.2) we offered a solution to the trustworthiness combination problem in order to evaluate the trustworthiness of a new agent ( Agb ). To simplify the problem, we supposed that each consulted agent (a confidence agent ) offers a trustworthiness value for Agb if it knows Agb . If a confidence agent does not offer any trustworthiness value, it is not taken into account when Aga evaluates Agb ’s trustworthiness. However, as outlined in (Yu and Singh, 2002), a confidence agent that does not know Agb can instead offer Aga a set of agents which may know Agb . In this case, Aga consults the proposed agents. These agents also have a trustworthiness value from the point of view of the agent that proposed them. For this reason, Aga applies Equation 9.5 to assess the trustworthiness values of these agents. These new values are then used to evaluate Agb ’s trustworthiness. We can build a trust graph in order to deal with this situation. Such a graph is defined as follows:

Definition 9.8 A trust graph is a directed and weighted graph. The nodes are agents and an edge ( Agi, Agj ) means that agent Agi knows agent Agj. The weight of the edge ( Agi, Agj ) is a pair ( x, y ) where x is Agj’s trustworthiness from the point of view of Agi and y is the number of interactions between Agi and Agj. The weight of a node is the agent’s trustworthiness from the point of view of the source agent.

According to this definition, in order to determine the trustworthiness of the target agent Agb , it is necessary to find the weight of the node representing this agent in the graph. The graph is constructed as Aga receives answers from the consulted agents. The evaluation of the nodes starts only once the whole graph is built, i.e., once Aga has received all the answers from the consulted agents, and it terminates when the node representing Agb is evaluated. The graph construction and node evaluation algorithms are given by Algorithms 9.7 and 9.8 respectively.

Algorithm 9.7 : The construction of the trust graph is described as follows:

1- Agent Aga sends a request about Agb ’s trustworthiness to all the confidence agents Agi . The nodes representing these agents (denoted Node ( Agi )) are added to the graph. Since the trustworthiness values of these agents are known, the weights of these nodes (denoted Weight ( Node ( Agi ))) can be evaluated immediately: they are given by TRUST ( Agi ) Aga (i.e. by Agi ’s trustworthiness from the point of view of Aga ).

2- Aga uses the primitive Send ( Agi , Investigation ( Agb )) to ask Agi to offer a trustworthiness value for Agb . Each Agi ’s answer is recovered, when it is offered, in a variable denoted Str by Str = Receive ( Agi ). Str.Agents represents the set of agents referred by Agi , and Str.TRUST ( Agj ) Agi is the trustworthiness value of an agent Agj (belonging to the set Str.Agents ) from the point of view of the agent that referred it (i.e. Agi ).

3- When a consulted agent answers by indicating a set of agents, these agents will also be consulted. They can be regarded as potential witnesses and are added to a set called Potential_Witnesses . When a potential witness is consulted, it is removed from the set.

4- To ensure that the evaluation process terminates, two limits are used: the maximum number of agents to be consulted ( Limit_Nbr_Visited_Agents ) and the maximum number of witnesses that must offer an answer ( Limit_Nbr_Witnesses ). The variable Nbr_Additional_Agents is used to ensure that the first limit is respected when Aga starts to receive the answers of the consulted agents.
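The construction steps above can be sketched as follows. This is our plain-Python approximation of Algorithm 9.7, not the thesis’s code: the consult callback stands in for the Send/Receive exchange, and all names are illustrative.

```python
from collections import deque

def build_trust_graph(ag_b, confidence_agents, consult,
                      limit_visited, limit_witnesses):
    """Build the trust graph by consulting agents breadth-first.

    consult(agent) returns either ('value', trust, n_interactions) when
    the consulted agent knows ag_b, or ('refer', [(agent, trust, n), ...])
    when it proposes potential witnesses instead.
    """
    graph = {}                           # (src, dst) -> (trust, n_interactions)
    potential = deque(confidence_agents)  # the Potential_Witnesses set
    visited, witnesses = set(), 0
    while potential and len(visited) < limit_visited and witnesses < limit_witnesses:
        ag_i = potential.popleft()       # consulted agents leave the set
        if ag_i in visited:
            continue
        visited.add(ag_i)
        answer = consult(ag_i)
        if answer[0] == 'value':         # ag_i offers a value for ag_b
            graph[(ag_i, ag_b)] = (answer[1], answer[2])
            witnesses += 1
        else:                            # ag_i refers further agents
            for ag_j, trust, n in answer[1]:
                graph[(ag_i, ag_j)] = (trust, n)
                potential.append(ag_j)
    return graph
```

Both limits bound the exploration, which guarantees termination even in a cyclic acquaintance network.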

Algorithm 9.8 : The evaluation of a graph node is based on the trustworthiness combination formula (Equation 9.5). The weight of each node, which represents the trustworthiness value of the agent represented by that node, is evaluated on the basis of the weights of the adjacent nodes. For example, let Arc ( Agx , Agy ) be an arc in the graph; before evaluating Agy it is necessary to evaluate Agx . Consequently, the evaluation algorithm is recursive. The algorithm terminates because the nodes of the set Confidence ( Aga ) are already evaluated by Algorithm 9.7. Since the evaluation is done recursively, the call of this algorithm in the main program has the agent Agb as parameter.
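A minimal sketch of this recursive evaluation (ours, with illustrative names; it assumes a graph stored as a map from edges (src, dst) to pairs (reported trust, number of interactions), and an acyclic answer structure rooted at the confidence agents):

```python
def evaluate_node(target, graph, weights):
    """Recursively evaluate a node's weight (sketch of Algorithm 9.8).

    weights initially holds the trustworthiness of the source agent's
    confidence agents (known from Algorithm 9.7) and is filled in as
    nodes are evaluated; each new weight is the trust- and
    interaction-weighted mean of the values reported by the node's
    predecessors, in the spirit of Equation 9.5.
    """
    if target in weights:                # confidence agents are pre-evaluated
        return weights[target]
    num = den = 0.0
    for (src, dst), (trust, n) in graph.items():
        if dst == target:
            w_src = evaluate_node(src, graph, weights)  # evaluate Agx before Agy
            num += w_src * n * trust
            den += w_src * n
    weights[target] = num / den
    return weights[target]

graph = {('c1', 'b'): (0.8, 10), ('c2', 'w1'): (0.9, 4), ('w1', 'b'): (0.6, 3)}
weights = {'c1': 0.9, 'c2': 0.5}   # trust Aga places in its confidence agents
trust_b = evaluate_node('b', graph, weights)
```

Each node is evaluated exactly once (its weight is memoized in weights), which matches the Ο(max(a, n)) analysis below.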

Complexity . Our trustworthiness model is based on the construction of a trust graph and on a recursive call to the function Evaluate-Node ( Agy ) to assess the weight of each node. Since each node is visited exactly once, there are n recursive calls, where n is the number of nodes in the graph. To assess the weight of a node we need the weights of its neighboring nodes and the weights of its incoming edges. Thus, the algorithm takes a time in Ο( n ) for the recursive calls and a time in Ο( a ) to assess the agents’ trustworthiness, where a is the number of edges. The run time of the trustworthiness algorithm is therefore in Ο( max ( a , n )), i.e. linear in the size of the graph.

In total, Algorithm 9.1 of our persuasion dialogue game protocol takes a time in:

Ο( max (| SAg1 |, | SAg2 |) + max ( a , n )) = Ο( max (| SAg1 |, | SAg2 |, a , n )).

In this section we describe the implementation of our persuasion dialogue game protocol (the different dialogue games and the trustworthiness model) using the JackTM platform (The Agent Oriented Software Group, 2004). We chose this language for three main reasons:

1- It is an agent-oriented language offering a framework for multi-agent system development. This framework can support different agent models.

2- It is built on top of and fully integrated with the Java programming language. It includes all components of Java and it offers specific extensions to implement agents’ behaviors.

3- It supports logical variables and cursors . A cursor is a representation of the results of a query: an enumerator that provides query result enumeration by re-binding the logical variables used in the query. These features are particularly helpful when querying the state of an agent’s beliefs. Their semantics lies midway between that of logic programming languages (with the addition of Java-style type checking) and that of embedded SQL.

Our system consists of two types of agents: conversational agents and trust model agents . These agents are implemented as JackTM agents, i.e. they inherit from the basic class JackTM Agent . Conversational agents are agents that take part in the persuasion protocol. Trust model agents are agents that can inform an agent about the trustworthiness of another agent (Figure 9.5).

According to the specification of the Justification game (Section 9.2.4 (D)), an agent Ag2 can play an acceptance or a refusal move according to whether it considers its interlocutor Ag1 trustworthy or not. If Ag1 is unknown to Ag2 , Ag2 can ask agents that it considers trustworthy to offer a trustworthiness assessment of Ag1 . From the received answers, Ag2 builds a trust graph and assesses Ag1 ’s trustworthiness as explained in Section 9.3.3.

To take part in our persuasion protocol, agents must have knowledge and argumentation systems. Agents’ knowledge is implemented using JackTM data structures called beliefsets . The argumentation systems are implemented as Java modules using a logic programming paradigm. These modules use the agents’ beliefsets to build arguments for or against certain propositional formulae. The actions that agents perform on commitments or on their contents are programmed as events . When an agent receives such an event, it seeks a plan to handle it. These plans implement Algorithms 9.2, 9.3, 9.4, 9.5 and 9.6 presented in this chapter.

The trustworthiness model is implemented using the same principle (events + plans). The requests sent by an agent about the trustworthiness of another agent are events, and the evaluations of agents’ trustworthiness are programmed in plans. The trust graph is implemented as a Java data structure (a directed graph).

As Java classes, conversational agents and trust model agents have private data called Belief Data . For example, the different commitments and arguments that are created and manipulated are held in a data structure called CAN , implemented using tables; the different actions expected by an agent in the context of a particular game are held in a table called data_expected_actions ; and the trustworthiness values that an agent assigns to other agents are recorded in a table called data_trust . These data and their types are given in Figures 9.6 and 9.7.

The trustworthiness model is implemented by agents of type trust model agent . Each agent of this type has a knowledge base implemented using JackTM beliefsets . This knowledge base, called table_trust , has the following structure: Agent_name , Agent_trust and Interaction_number . Thus, each agent has information about other agents’ trustworthiness and about the number of times it interacted with them. The agents visited during the evaluation process and the agents added to the trust graph are recorded in two JackTM beliefsets called table_visited_agents and table_graph_trust . The two limits used in Algorithm 9.7 ( Limit_Nbr_Visited_Agents and Limit_Nbr_Witnesses ) and the trustworthiness threshold w are passed as parameters to the JackTM constructor of the original agent Aga that seeks to know whether its interlocutor Agb is trustworthy. This original agent is a conversational agent.

The main steps of the evaluation process of Agb ’s trustworthiness are implemented as follows:

1- By respecting the two limits and the threshold w , Aga consults its knowledge base data_trust of type table_trust and sends a request about Agb ’s trustworthiness to its confidence agents Agi ( i = 1,.., n ). The JackTM primitive Send makes it possible to send the request as a JackTM message that we call Ask_Trust , of MessageEvent type. Aga sends this request starting with the confidence agents whose trustworthiness values are highest.

2- To answer Aga ’s request, each agent Agi executes a JackTM plan instance that we call Plan_ev_Ask_Trust . Each agent Agi consults its knowledge base and offers Aga a trustworthiness value for Agb if Agb is known to Agi . If not, Agi proposes a set of agents that are confidence agents from its point of view, with their trustworthiness values and the number of times it interacted with them. In the first case, Agi sends Aga a JackTM message that we call Trust_Value ; in the second case, Agi sends a message that we call Confidence_Agent . Both messages are of type MessageEvent .

3- When Aga receives the Trust_Value message, it executes a plan: Plan_ev_Trust_Value . According to this plan, Aga adds two pieces of information to a graph structure called graph_data_trust : 1) the agent Agi and its trustworthiness value, as a graph node; 2) the trustworthiness value that Agi offers for Agb and the number of times that Agi interacted with Agb , as an arc relating the node Agi to the node Agb . This first part of the trust graph is kept until the end of the evaluation process of Agb ’s trustworthiness. When Aga receives the Confidence_Agent message, it executes another plan: Plan_ev_Confidence_Agent . According to this plan, Aga adds three pieces of information for each agent Agi to another graph structure, graph_data_trust_sub_level : 1) the agent Agi and its trustworthiness value, as a sub-graph node; 2) the nodes Agj representing the agents proposed by Agi ; 3) for each agent Agj , the trustworthiness value that Agi assigns to Agj and the number of times that Agi interacted with Agj , as an arc between Agi and Agj . This information, which constitutes a sub-graph of the trust graph, is used to evaluate the Agj ’s trustworthiness values using Equation 9.5. These values are recorded in a new structure: new_data_trust . The structure graph_data_trust_sub_level thus releases its memory once the Agj ’s trustworthiness values are evaluated. This technique allows us to decrease the space complexity of our algorithm.

4- Steps 1, 2 and 3 are applied again by substituting data_trust by new_data_trust , until all the consulted agents offer a trustworthiness value for Agb or until one of the two limits ( Limit_Nbr_Visited_Agents or Limit_Nbr_Witnesses ) is reached.

5- Aga evaluates Agb ’s trustworthiness value using the information recorded in the structure graph_data_trust , by applying Equation 9.5.

The different events and plans implementing our trustworthiness model and the conversational agent constructor are illustrated by Figure 9.8. Figure 9.9 illustrates an example generated by our prototype of the process allowing an agent Ag1 to assess the trustworthiness of another agent Ag2 in a domain related to the example given in Section 9.4.3. In this example, Ag2 is considered trustworthy by Ag1 because its trustworthiness value (0.79) is higher than the threshold (0.7).

In our system, agents’ knowledge bases contain propositional formulae and arguments. These knowledge bases are implemented as JackTM beliefsets . Beliefsets are used to maintain an agent’s beliefs about the world. These beliefs are represented in a first-order logic, tuple-based relational model. The logical consistency of the beliefs contained in a beliefset is automatically maintained. The advantage of using beliefsets over normal Java data structures is that beliefsets have been specifically designed to work within the agent-oriented paradigm.

Our knowledge bases (KBs) contain two types of information: arguments and beliefs. Arguments have the form ([ Support ], Conclusion ), where Support is a set of propositional formulae and Conclusion is a propositional formula. Beliefs have the form ([ Belief ], Belief ) i.e. Support and Conclusion are identical. The meaning of the propositional formulae (i.e. the ontology) is recorded in a beliefset called table_ontology whose access is shared between the two agents. This beliefset has two fields: Proposition and Meaning .

To open a dialogue game, an agent uses its argumentation system. The argumentation system allows the agent to seek in its knowledge base an argument for a given conclusion or for its negation (an "against argument" ). For example, before creating a commitment SC ( Id0 , Ag1 , Ag2 , p ), agent Ag1 must find an argument for p . This enables us to respect the commitment semantics by making sure that agents can always defend the contents of their commitments. The argumentation system of an agent is implemented using logical statements , logical members and cursors . Logical statements follow an open-world semantics, which models real-world knowledge and allows for three truth states: true, false and unknown. Logical members bring elements of logic programming to JackTM. They follow the semantic behavior of variables in logic programming languages such as Prolog: they are not place-holders for assigned values like normal Java variables, but represent a specific, though possibly unknown, value. Conclusions and supports of arguments are logical members, and statements using these conclusions and supports are logical statements. Cursors allow agents to seek an argument supporting a given conclusion, using the query method of a knowledge base.
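The role the query plays can be illustrated with a plain-Python stand-in (ours, not the JackTM code): a knowledge base of ( Support , Conclusion ) pairs, queried for an argument for a formula or for its negation.

```python
def neg(p):
    """Negation of a propositional atom, written here with a '~' prefix."""
    return p[1:] if p.startswith('~') else '~' + p

def find_argument(kb, conclusion):
    """Return the first argument ([Support], Conclusion) for `conclusion`,
    or None if the knowledge base contains no such argument."""
    for support, concl in kb:
        if concl == conclusion:
            return (support, concl)
    return None

# Beliefs have the form ([Belief], Belief); other entries are arguments.
kb = [(['q', 'q -> p'], 'p'), (['r'], 'r')]
print(find_argument(kb, 'p'))        # an argument for p exists
print(find_argument(kb, neg('p')))   # no "against argument" for p
```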

Agent communication is done by sending and receiving messages. These messages are events that extend the basic JackTM event : MessageEvent class. MessageEvents represent events that are used to communicate with other agents. Whenever an agent needs to send a message to another agent, this information is packaged and sent as a MessageEvent . A MessageEvent can be sent using the primitive: Send ( Destination , Message ). In our protocol, Message represents the action that an agent applies to a commitment or to its content, for example: Create ( Ag1 , SC ( Id0 , Ag1 , Ag2 , p )), etc.

Our dialogue games are implemented as a set of events ( MessageEvents ) and plans . A plan describes a sequence of actions that an agent can perform when an event occurs. Whenever an event is posted and an agent chooses a task to handle it, the first thing the agent does is to try to find a plan to handle the event. Plans are reasoning methods describing what an agent should do when a given event occurs.

Each dialogue game corresponds to an event and a plan. These games are not implemented within the agents’ program, but as event classes and plan classes that are external to the agents. Thus, each conversational agent can instantiate these classes. An agent Ag1 starts a dialogue game by generating an event and sending it to its interlocutor Ag2 . Ag2 executes the plan corresponding to the received event and answers by generating another event and sending it to Ag1 . Consequently, the two agents can communicate using the same protocol since they can instantiate the same classes representing the events and the plans. For example, the event Event_Attack_Commitment and the plan Plan_ev_Attack_commitment implement the defense game. The architecture of our conversational agents is illustrated in Figure 9.10. The different events and plans implementing our dialogue games are given in Figure 9.11. Figure 9.12 illustrates the screen shot of the example presented in Section 9.2.5.
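The event-plus-plan pattern can be sketched in plain Python (our illustration; only the event and plan names echo those in the text, and their bodies here are placeholders, not the JackTM implementation):

```python
class MessageEvent:
    """Minimal stand-in for JackTM's MessageEvent."""
    def __init__(self, sender, content):
        self.sender, self.content = sender, content

class EventAttackCommitment(MessageEvent):
    """Analogue of the Event_Attack_Commitment event class."""

class ConversationalAgent:
    # Event and plan classes are external to the agents: every agent
    # instance dispatches through the same shared registry.
    plans = {}

    @classmethod
    def register_plan(cls, event_cls, plan):
        cls.plans[event_cls] = plan

    def __init__(self, name):
        self.name = name

    def handle(self, event):
        # Find the plan corresponding to the received event and run it.
        return self.plans[type(event)](self, event)

def plan_ev_attack_commitment(agent, event):
    """Analogue of Plan_ev_Attack_commitment (placeholder body)."""
    return f"{agent.name} answers the attack from {event.sender}"

ConversationalAgent.register_plan(EventAttackCommitment, plan_ev_attack_commitment)
ag2 = ConversationalAgent("Ag2")
reply = ag2.handle(EventAttackCommitment("Ag1", "attack p"))
```

Because both agents instantiate the same event and plan classes, they necessarily follow the same protocol, which is the point made above.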

To start the entry game, an agent (the initiator) chooses a goal that it tries to achieve: to persuade its interlocutor that a given propositional formula is true. For this purpose, we use a particular event: the BDI Event ( Belief-Desire-Intention ). BDI events model goal-directed behavior in agents, rather than plan-directed behavior. What is important is the desired outcome, not the method chosen to achieve it. This type of event allows an agent to pursue long-term goals.

In this section, we compare our protocol with some proposals that have been put forward in two domains: dialogue modeling and commitment based protocols.

1- Dialogue modeling . In (Amgoud et al., 2000a, 2000b) and (Parsons et al., 2003), Amgoud, Parsons and their colleagues studied argumentation-based dialogues. They proposed a set of atomic protocols which can be combined. These protocols are described as a set of dialogue moves using Walton and Krabbe’s classification and formal dialectics. In these protocols, agents can argue about the truth of propositions and communicate both propositional statements and arguments about these statements. These protocols have the advantage of taking into account the capacity of agents to reason as well as their attitudes (confident, careful, etc.). In addition, Prakken (2001) proposed a framework for protocols for dynamic disputes, i.e., disputes in which the available information can change during the conversation. This framework is based on a logic of defeasible argumentation and is formulated for dialectical proof theories. Soundness and completeness of these protocols have also been studied. In the same direction, Brewka (2001) developed a formal model for argumentation processes that combines nonmonotonic logic with protocols for dispute. Brewka pays more attention to the speech act aspects of disputes and formalizes dispute protocols in situation calculus. Such a logical formalization of protocols allows him to define protocols in which the legality of a move can be disputed. Semantically, Amgoud, Parsons, Prakken and Brewka’s approaches use a defeasible logic. Therefore, it is difficult, if not impossible, to formally verify the proposed protocols.

There are many differences between our protocol and the protocols proposed in the domain of dialogue modeling:

1. Our protocol uses not only an argumentative approach, but also a public one. The effects of utterances are formalized not in terms of the agents’ private attitudes (beliefs, intentions, etc.), but in terms of social commitments. Unlike private mental attitudes, social commitments can be verified.

2. Our protocol is based on a combination of dialogue games instead of simple dialogue moves. Using our dialogue game specifications enables us to specify the entry and the exit conditions more clearly. In addition, computationally speaking, dialogue games provide a good balance between large protocols that are very rigid and atomic protocols that are very detailed.

3. From a theoretical point of view, Amgoud, Parsons, Prakken and Brewka’s protocols use moves from formal dialectics, whereas our protocol uses actions that agents apply to commitments. These actions capture the speech acts that agents perform when conversing. The advantage of using these actions is that they enable us to better represent the persuasion dynamics, considering that their semantics is defined in an unambiguous way in a temporal and dynamic logic (see Chapter 7). Specifying protocols in this logic allows us to formally verify them using model checking techniques (see Chapter 8).

4. Amgoud, Parsons and Prakken’s protocols use only assertion, acceptance, refusal and challenge moves, whereas our protocol uses not only creation, acceptance, refusal and challenge actions, but also justification, attack and defense actions in an explicit way. These argumentation relations allow us to directly illustrate the concept of dispute in this type of protocol.

5. Amgoud, Parsons, Prakken and Brewka use an acceptance criterion directly related to the argumentation system, whereas we use acceptance criteria for conversational agents (supports of arguments and trustworthiness). This makes it possible to decrease the computational complexity of the protocol. The reason is that in the approach proposed by Amgoud, Parsons, Prakken and Brewka, deciding about the acceptance of each argument requires finding a least fixpoint of a given function, which is a computationally complex task. In addition, to our knowledge there is no implementation of argumentation-based protocols in the literature.

2- Commitment-based protocols . Yolum and Singh (2002) developed an approach for specifying protocols in which actions’ content is captured through agents’ commitments. They provide operations and reasoning rules to capture the evolution of commitments. In a similar way, Fornara and Colombetti (2003) proposed a method to define interaction protocols. This method is based on the specification of an interaction diagram (ID) specifying which actions can be performed under given conditions. These approaches allow them to represent the interaction dynamics through the allowed operations. Our protocol is comparable to these protocols because it is also based on commitments. However, it is different in the following respects. The choice of the various operations is explicitly dealt with in our protocol by using argumentation and trustworthiness. In commitment-based protocols, there is no indication about the combination of different protocols. However, this notion is essential in our protocol using dialogue games. Unlike commitment-based protocols, our protocol plays the role of the dialectical proof theory of an argumentation system. This enables us to represent different dialogue types as studied in the philosophy of language. Finally, we provide a termination proof of our protocol and a complexity analysis of our implementation whereas these properties are not yet studied in classical commitment-based protocols.

The protocol that we proposed in this chapter is more flexible than the traditional protocols of agent communication for the following reasons:

1- Our protocol is not specified in a static way, but results from the combination of different dialogue games. How these dialogue games can be combined is not fixed in advance, but depends on the evolution of the communication. Consequently, the protocol automaton is non-deterministic.

2- Agents can reason about the protocol using their argumentation systems and the trustworthiness model. The agents’ choices depend on the current state of the dialogue in terms of the states of the different commitments and arguments (i.e. the current state of the CAN). Therefore, the games that agents can play are determined on the fly.

3- Our protocol specifies the combination rules of different dialogue games and how agents can use these rules in a logical way. An interesting consequence of this specification is that the protocol does not have the problem of managing exceptions (messages not specified by the protocol). The reason is that the protocol does not specify a fixed number of possibilities, but only the logical rules that agents can use and reason about in any situations.

The contribution of this chapter is the proposal of a logical language for specifying persuasion protocols between autonomous agents using our commitment and argument approach. This language has the advantage of expressing the public elements and the reasoning process that allows agents to choose an action among several possible actions. Because our protocol is defined as a set of dialogue games, it is more flexible than traditional protocols such as those used in FIPA-ACL. This flexibility results from the fact that these games can be combined to produce complete and more complex protocols, and from the fact that agents can reason about the protocol. We formalized these games as a set of conversation policies, and we described the persuasion dynamics by the combination of five dialogue games. Another contribution of this chapter is the tableau-based termination proof of the protocol. We also implemented this protocol using an agent-oriented language and a logic programming paradigm, and we analyzed its computational complexity. Finally, we presented an example to illustrate the persuasion dynamics resulting from the combination of different dialogue games.

© Jamal Bentahar, 2005