
CHAPTER 8: Collect and Analyze the Output Data Generated by the Multiagent Geosimulation


In this chapter, we present the step of our method in which we collect and analyze the output data generated by the geosimulation prototype. We also illustrate this step using the customers’ shopping behavior in a mall as a case study.

To be useful, geosimulation applications must return meaningful results. Generating and analyzing simulation output is an important step in a simulation study (Anu, 1997). This step is necessary in order to test various ideas and to learn about the simulation model, as well as about the corresponding simulated phenomenon. A user must understand the simulation model’s output and consequently needs appropriate tools and techniques to collect and analyze the output data generated by the simulation. Hence, it is relevant to integrate in our method a step aiming to collect and analyze the data generated by the geosimulation. This step is presented and illustrated in this chapter using the shopping behavior as a case study. This chapter is organized as follows: Section 8.1 presents a generic description of the step aiming to generate the geosimulation output data and to analyze it. In Section 8.2, we illustrate the generation of the geosimulation outputs using the shopping behavior case study. Section 8.3 illustrates the analysis of the output data collected from the geosimulation in the first sub-step. Finally, Section 8.4 discusses the issues presented in this chapter and concludes it.

This step is composed of two sub-steps aiming, respectively, to collect data from the geosimulation and to analyze this data (see Fig 8.1). Details of these sub-steps are presented in the following sub-sections.

In order to collect meaningful output data from the geosimulation execution, we use specific agents called ‘observer agents’. These agents have to perform what we call an ‘observation mission’: during this mission, the observer agents are located in a given place within the virtual environment, observe the geosimulation execution, and record the observed data in databases or files. In order to collect geosimulation data using observer agents, we must prepare an ‘observation scenario’. In this scenario, we design the structure/behavior of the observer agents and define their observation mission (the place and time of the observation in the virtual environment, as well as the targets (elements) to be observed during the simulation execution). The observation scenario is defined using the selected platform, which is already used to develop the geosimulation prototype (see the fifth step of the method presented in Chapter 7).
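To give a concrete idea of what an observation scenario may specify, the following minimal sketch (written in Python for illustration; it is not the MAGS formalism, and all names are hypothetical) describes an observer agent’s mission by its location, observation period, observed targets, and output file.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObservationMission:
    """Hypothetical description of an observer agent's mission."""
    position: Tuple[float, float]  # (x, y) location in the virtual environment
    start_time: float              # simulation time at which observation starts
    end_time: float                # simulation time at which observation stops
    targets: List[str]             # kinds of agents to observe (e.g., 'shopper')
    output_file: str               # file in which the observed data is recorded

@dataclass
class ObservationScenario:
    """A set of observer agents, each with its own mission."""
    missions: List[ObservationMission] = field(default_factory=list)

    def add_observer(self, mission: ObservationMission) -> None:
        self.missions.append(mission)

# Example: one observer watching shopper agents in a corridor for the whole run
scenario = ObservationScenario()
scenario.add_observer(ObservationMission(
    position=(120.0, 45.0), start_time=0.0, end_time=3600.0,
    targets=["shopper"], output_file="corridor_X.csv"))
```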

Using agents to collect simulation output data is relevant for the following reasons:

- the user can easily define the observer agents’ observation mission. He/she can modify the positions of the observer agents in the simulation environment, the observation duration, the observed targets in the geosimulation, the structure of the generated files (or database) that will contain the observation data (outputs), etc.

- the user can specify several observer agents with different observation missions. For example, in the virtual shopping mall, he/she can create observer agents that gather data concerning shopper agents or observer agents that gather data concerning other agents in the geosimulation (e.g., store agents, door agents, etc.). In addition, the user can gather data in specific places or areas in the virtual environment (for example, the west side of the virtual mall, main corridors, entrance doors, etc.).

- since an observer agent is assigned to a specific geographic place in the virtual environment, the generated data is related to this place and can be analyzed taking into account the geographic features of this place.

In order to be useful to the geosimulation’s users, the collected simulation output data must be analyzed and the results of the analysis presented to the users. At this point, there are two possibilities: (1) we can develop an analysis tool to analyze the geosimulation output data, or (2) we can choose one of the existing analysis tools or techniques to analyze the geosimulation output data. In both cases, a question arises: what are the characteristics of a technique or tool appropriate for analyzing the geosimulation output data? We think that the characteristics of such a tool or technique depend on the characteristics of the geosimulation, which are:

- Geosimulations focus on spatial and geographic data. Hence, we need an analysis tool or technique that takes into account spatial data.

- The great potential of geosimulation is to explain the interactions of a large number of actors in complex social phenomena, taking into account the geographic aspects of the simulation environment. Hence, we need an analysis tool or technique that can analyze combinations of variables in order to identify the influence of one or more variables on the others.

- In addition, the complexity of geosimulation models, as well as their visualization capabilities, make them more realistic and, therefore, closer to users’ mental models. Hence, we need an analysis tool or technique that can present the analysis results in a manner which is close to the users’ mental models.

To sum up, in order to analyze the outputs of multiagent geosimulations, we need sophisticated analysis tools or techniques, which:

- take into account the spatial aspects of the output data to be analyzed;

- take into account multi-variables analysis in order to define the influence of some variables on others. This is relevant if we want to understand the interactions between some geosimulation actors and the simulation environment;

- offer better manipulation, exploitation, and visualization of the analysis results which must be realistic and, therefore, closer to users’ mental models;

- are able to analyze data generated from several executions (compare several simulation scenarios and analysis results).

This step is illustrated in Sections 8.2 and 8.3 using the case study of customers’ shopping behavior in a mall.

Concerning the shopping behavior geosimulation output data, we are interested in data related to the software shopper agents, their behavior, and their interaction with the virtual mall. Hence, we need to collect data about the shopper agents during the course of the simulation. Depending on the nature of the data to be collected from the shopping behavior geosimulation, we consider several categories of observer agents:

- Traffic observer agents : These agents have the mission to collect data concerning the traffic flow of the shopper agents inside the virtual mall. Each observer agent belonging to this category is established in a corridor and collects information concerning each shopper agent passing through this corridor during its shopping trip. The data collected by this category of agents can be used to compare the frequentation of the corridors for several configurations of the simulated environment;

- Visit observer agents : These agents have the mission to collect data concerning the shopper agents visiting specific places within the virtual mall (e.g., stores). Each observer agent belonging to this category is assigned to a chosen area (for example, a store) and collects information concerning each shopper agent visiting this area. The data collected by this category of agents can be used to compare the frequentation of specific areas for several configurations of the simulated environment;

- Shopping observer agents : These agents have the mission to collect data concerning the shopper agents’ shopping trips. The mission of these agents is the following: (1) when the simulation starts, they are located in specific places within the virtual mall; (2) when a shopper agent enters the mall, they collect data concerning this agent and its planned shopping trip before it starts its shopping trip; and (3) before the shopper agent leaves the mall, they collect data concerning this agent and its performed shopping trip. As we can see, the observer agents belonging to this category mimic the behavior of the ‘surveyors’ who were hired to collect geosimulation input data concerning the real shoppers (see Chapter 6) (see Fig 8.2). The data collected by this category of agents can be compared to the input data collected by the ‘surveyors’. Hence, the analysis of this data can be used to calibrate or validate the geosimulation models.

In our work, we developed the first category of observer agents (traffic observer agents). These agents are established in the corridors of the virtual mall; when they perceive a virtual shopper agent passing through their corridor, they observe it and record information about it (e.g., its identification) in a data file or database. The structure/behavior of the observer agents, as well as their observation mission, are specified in the MAGS platform. In the following sub-sections, we present the structure/behavior and observation mission of the traffic observer agent used to collect data from the shopping behavior geosimulation.

In the MAGS platform, we need to specify, in a very detailed manner, the behavior of the observer agent. When the simulation starts, the observer agent must:

■ Put on a red uniform: In order to be distinguished from the shopper agents, the observer agent performs this specific behavior by putting on a red uniform (see Fig 8.3);

■ Go to the (X, Y) position in the virtual mall specified in its static states Observer_Pos_X and Observer_Pos_Y: At the beginning of the simulation, the observer agent moves to its assigned observation location within the virtual mall. It is relevant to note that the positions of the observer agents can be defined by the geosimulation user using the scenarios management module of the MAGS platform; and

■ Begin to observe the shopper agents and record information about them in data files until the end of the simulation.

All these behaviors must be specified in the scenarios management module of the MAGS platform, using its formalism (objectives, rules, preconditions, actions, etc.) (see previous chapter). Fig 8.4 presents an observer agent that is assigned to a corridor, perceiving certain shopper agents passing through this corridor, and recording information about them. The information is stored in data files related to the observer agent.
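The following minimal sketch (Python for illustration, not the MAGS formalism; the names, positions, and perception mechanism are hypothetical) summarizes this behavior: the observer changes its appearance, moves to its assigned corridor, and records every shopper agent it perceives until the end of the run.

```python
import csv

class TrafficObserverAgent:
    """Hypothetical sketch of a traffic observer agent: it changes its
    appearance, moves to its assigned corridor, then records every
    shopper agent it perceives there."""

    def __init__(self, observer_id, pos_x, pos_y, output_file):
        self.observer_id = observer_id
        self.pos_x, self.pos_y = pos_x, pos_y   # Observer_Pos_X / Observer_Pos_Y
        self.output_file = output_file
        self.appearance = None
        self.position = None

    def start(self):
        self.appearance = "red_uniform"         # be distinguishable from shoppers
        self.position = (self.pos_x, self.pos_y)  # go to the assigned corridor

    def observe(self, perceived_agents, sim_time, writer):
        # Record one row per shopper agent currently perceived in the corridor.
        for agent in perceived_agents:
            if agent.get("type") == "shopper":
                writer.writerow([self.observer_id, sim_time, agent["id"]])

# Example run over a (simulated) sequence of perception snapshots
observer = TrafficObserverAgent("obs_corridor_X", 120, 45, "corridor_X.csv")
observer.start()
snapshots = [(1, [{"type": "shopper", "id": "s001"}]),
             (2, [{"type": "shopper", "id": "s002"}, {"type": "store", "id": "d01"}])]
with open(observer.output_file, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["observer_id", "time", "shopper_id"])
    for sim_time, perceived in snapshots:
        observer.observe(perceived, sim_time, writer)
```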

In the case of the prototype of shopping behavior in Square One mall, the positions of the observer agents are presented in Fig 8.5.

The data collected by the traffic observer agents is stored in data files whose structure is presented in Fig 8.6.

Independently of their category (traffic, visit, or shopping), the observer agents are assigned to a specific geographic place in the virtual shopping mall. Hence, the generated data is related to this place and can be analyzed taking into account the geographic features of this place. In the next section, we illustrate the second part of the step, which aims at analyzing the data generated by the geosimulation. This illustration uses the shopping behavior case study.

Our literature review revealed that there exist some analysis techniques and tools that can be used to analyze the data generated by the geosimulation and collected using observer agents. In the following sub-sections, we present some analysis techniques and tools which can be used to analyze the output data generated from the shopping behavior geosimulation prototype. In each sub-section, we present the advantages and limits of the corresponding tool or technique.

In order to analyze the geosimulation outputs, we tried to use some existing analysis tools or techniques. Hence, we made an in-depth literature survey of the analysis techniques and tools that can be used to analyze simulation outputs. We found out that there exist several research works dealing with simulation output analysis. For example, several authors (Sanchez, 2001), (Kelton, 1997), (Alexopoulos et al., 1998), (Alexopoulos, 2002), and (Seila, 1992) propose different analysis techniques for simulation outputs. Unfortunately, these techniques, which are called ‘traditional or classical techniques’, present some limitations. One limitation is the statistical and mathematical nature of these techniques, which usually makes them difficult for computer scientists to use in order to build relevant visualizations and outputs for end-users (decision-makers). To overcome this problem, (Grier, 1992) proposed a graphical, statistical analysis technique that can be used to analyze simulation outputs. The visual display of the results quickly conveys information about the simulation models. Users who rely on simulations to support their decisions prefer graphical analyses because they are easy to understand. (Blaisdell et al., 1992) proposed SIMSTAT, a simulation analysis tool based upon a graphical analysis technique and combined with several simulation tools. Graphical analysis is efficient for several kinds of simulations that do not deal with spatial or geographic data. However, for multiagent geosimulation, spatial data represent an important issue for end-users (decision-makers). In such a simulation field, a classical or traditional analysis technique based upon tables and graphs is too limited for spatial analysis (no spatial analysis, no spatial visualization, no map-based exploration of spatial data, etc.). Consequently, we can conclude that traditional or classical analysis techniques are poorly suited for the analysis of multiagent geosimulation outputs.

To overcome the limitations of traditional analysis techniques, we decided to develop our own analysis tool. This sub-section aims at presenting this tool and at demonstrating how it can be used to analyze and exploit the data generated from the shopping behavior geosimulation prototype. This tool, which was developed using Microsoft Visual Basic 6.0, is coupled with the geosimulation prototype as follows: it exploits the data generated by the observer agents and stored in files, analyzes it, and presents the results to end-users via a friendly user interface (see Fig 8.7).

The analysis results are stored in data files (analysis files) whose structures are presented in Fig 8.8.
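The tool itself was written in Visual Basic 6.0; the following sketch transposes the coupling idea to Python under assumed file names and a hypothetical column layout: it loads the files produced by the observer agents, counts the shopper agents recorded by each observer (i.e., per corridor), and stores the results in an analysis file.

```python
import csv
from collections import Counter

def load_observer_files(paths):
    """Read the files produced by the traffic observer agents into one list
    of records (hypothetical columns: observer_id, time, shopper_id)."""
    records = []
    for path in paths:
        with open(path, newline="") as f:
            records.extend(csv.DictReader(f))
    return records

def count_passages_per_observer(records):
    """Number of shopper agents recorded by each observer (i.e., per corridor)."""
    return Counter(r["observer_id"] for r in records)

def write_analysis_file(counts, path):
    """Store the analysis results in an analysis file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["observer_id", "nb_shoppers"])
        for observer_id, nb in sorted(counts.items()):
            writer.writerow([observer_id, nb])

# File names are hypothetical outputs of the observer agents.
records = load_observer_files(["corridor_X.csv", "corridor_Y.csv"])
write_analysis_file(count_passages_per_observer(records), "traffic_analysis.csv")
```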

In the following points, we present some advantages of our tool that make it better suited than traditional tools and techniques for analyzing geosimulation outputs.

It is important to be able to analyze combinations of non-spatial and spatial variables (multi-variables analysis) in order to understand the interdependency of various factors, and especially to understand how spatial factors derived from the geographic features of the environment influence the other factors as well as the agents’ shopping behavior. When developing our tool, we took this characteristic of the analysis into account. In the following points, we present some multi-variables analyses that can be done using our analysis tool. This is illustrated with the shopping behavior as a case study. In these analyses, we combine non-spatial variables belonging to the shopper agents with spatial variables belonging to the simulated environment.

- Using our tool, the user can get the analysis results of the combination of the spatial variable corridor frequentation with the non-spatial variable gender of the shopper agent. To do that, the user must select, by a simple mouse click, a corridor (or an observer agent related to this corridor) on the map, and select the variable gender from the variables list (at the right in Fig 8.10). Fig 8.11 presents the analysis results of the combination of the spatial variable (corridor X) and the non-spatial variable gender. We can see that corridor X is frequented by 46 shopper agents. Among these, there are 22 males (47.82%) and 24 females (52.17%) (see Fig 8.12). When the user clicks on the observer agent related to corridor Y, the tool indicates that this corridor is visited by only 45 shopper agents, 26 males (57.77%) and 19 females (42.22%) (see Fig 8.13).

- The user can combine the spatial dimension (i.e., a corridor) that belongs to the environment with more than one non-spatial variable belonging to the shopper. For example, he/she can combine the non-spatial variables gender and age_group, belonging to the shopper, with the spatial variable corridor belonging to the geographic environment. To do that, the user needs to select the two variables gender and age_group, and then select a corridor (or the observer agent related to this corridor). He/she can directly see the analysis results on the screen. Figs 8.14 and 8.15 present details of these analysis results. For example, for corridor X, we find that among the 46 shoppers, 19 females are between 26 and 36 years old, and 2 males are between 13 and 17 years old. The user can also compare the analysis results of the combination of several variables related to several corridors (where observer agents are located) in order to explore and compare them.
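The following sketch (Python with the pandas library; the records and column names are hypothetical and do not reproduce the figures above) illustrates how such multi-variables combinations can be computed as simple cross-tabulations of a spatial variable (the corridor) with one or two non-spatial variables of the shopper agents.

```python
import pandas as pd

# Hypothetical observation records: one row per shopper agent observed
# in a corridor, with the shopper's non-spatial attributes attached.
records = pd.DataFrame([
    {"corridor": "X", "gender": "female", "age_group": "26-36"},
    {"corridor": "X", "gender": "male",   "age_group": "13-17"},
    {"corridor": "Y", "gender": "male",   "age_group": "18-25"},
    {"corridor": "X", "gender": "female", "age_group": "26-36"},
])

# One spatial variable (corridor) combined with one non-spatial variable (gender)
by_gender = pd.crosstab(records["corridor"], records["gender"])

# One spatial variable combined with two non-spatial variables (gender, age_group)
by_gender_age = pd.crosstab(records["corridor"],
                            [records["gender"], records["age_group"]])

print(by_gender)
print(by_gender_age)
```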

In the previous sub-section, we presented our own analysis tool, which is coupled to the geosimulation engine in order to analyze its outputs. This tool offers some interesting functionalities concerning the analysis of geosimulation outputs, but it presents some limitations concerning the visualization and exploitation of the results. For this reason, we turned to the literature in order to find other tools or techniques that can overcome the limitations of our tool. We selected the technique called OLAP (On Line Analytical Processing), which allows users to make uni-variable and multi-variables analyses in which each variable is called a ‘dimension’. Recently, the OLAP technique has been extended in order to analyze spatial variables or ‘dimensions’. This extension is called SOLAP (Spatial On Line Analytical Processing) (Bédard et al., 2001).

This sub-section aims to present the fundamental concepts on which OLAP and SOLAP are based. It also presents how we used the OLAP/SOLAP technique to analyze the outputs of the shopping behavior geosimulation prototype.

OLAP (On Line Analytical Processing) has been defined as «... the name given to the dynamic enterprise analysis required to create, manipulate, animate and synthesize information from exegetical, contemplative and formulaic data analysis models. This includes the ability to discern new or unanticipated relationships between variables, the ability to identify the parameters necessary to handle large amounts of data, to create an unlimited number of dimensions, and to specify cross-dimensional conditions and expressions » (Codd et al., 1993). Other OLAP definitions have since been proposed, including « A software category intended for the rapid exploration and analysis of data based on multidimensional approach with several aggregation levels » (Caron, 1998).

The multidimensional approach is based on two notions: dimensions and measures . Dimensions are analysis axes, each representing a variable to be analyzed, while measures are the numerical attributes being analyzed against the different dimensions. A dimension contains members that are organized hierarchically into levels, each level having a different granularity, going from coarse at the most aggregated level to fine at the most detailed level. The members of one level can be aggregated to form the members of the next higher level. The measures at the finest level of granularity can be aggregated or summarized, following this hierarchy, to provide information at the higher levels according to the aggregation rules or algorithms.

A set of measures, aggregated according to a set of dimensions, forms what is often called a data cube or hypercube (Thomsen et al., 1999). Inside a data cube, possible aggregations of measures on all the possible combinations of dimension members can be pre-computed. This greatly increases query performances, in comparison to the conventional transaction-oriented data structures found in relational and object-relational database management systems (DBMS).

The common OLAP architecture can be divided into three parts: the multidimensionally structured database, the OLAP server that manages the database and carries out the different calculations, and finally, the OLAP client that accesses the database via the OLAP server. This access allows the end-user to explore and analyze the data, using different visualization methods and adapted operators (Bédard et al., 1997), such as drill-down (show-details), roll-up (show a more global picture, also called drill-up ), drill-across (show another theme with the same level of details) and swap (change a dimension for another one).
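As an illustration of these notions, the following sketch (Python with the pandas library; the records are hypothetical, with counts loosely inspired by the figures discussed later in this chapter) builds a tiny cube over two dimensions and applies a roll-up along the age-group hierarchy.

```python
import pandas as pd

# Hypothetical fact records: one measure (nb_shoppers) described by two
# dimensions (store and age_group); the counts are purely illustrative.
facts = pd.DataFrame([
    {"store": "Sears",     "age_group": "18-25", "nb_shoppers": 54},
    {"store": "Wall-Mart", "age_group": "18-25", "nb_shoppers": 82},
    {"store": "Sears",     "age_group": "51-65", "nb_shoppers": 8},
    {"store": "Wall-Mart", "age_group": "51-65", "nb_shoppers": 15},
])

# Detailed level of the cube: the measure aggregated by store and age group.
cube = facts.pivot_table(values="nb_shoppers", index="store",
                         columns="age_group", aggfunc="sum", fill_value=0)

# Roll-up (drill-up): aggregate the age_group dimension up to "all ages".
all_ages = facts.groupby("store")["nb_shoppers"].sum()

# Drill-down is the inverse operation: going back from `all_ages` to `cube`
# exposes the finer members of the age_group hierarchy again.
print(cube, all_ages, sep="\n\n")
```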

Finally, it is commonly found in the literature that the multidimensional approach of analysis is more in agreement with the end-user's mental model of the data (Codd et al., 1993)(Yougworth, 1995). Based upon this approach, the interface of a tool exploring the multidimensional paradigm, such as OLAP, is usually intuitive, and the user can perform analysis ranging from simple to complex, mostly by clicking on the data being organized in a meaningful way (Yougworth, 1995). This adds to the fact that the multidimensional data structure is optimized for rapid, ad hoc information retrieval (OLAP Council, 1995), which greatly facilitates the data exploration and analysis process.

Traditional OLAP offers good support for simultaneous usage of descriptive, temporal, and spatial dimensions, in a multidimensional analysis process. Descriptive dimensions are used to describe the data to be analyzed. The temporal ones take into account the temporal aspect of the analysis, while the spatial dimensions allow for the spatial reference of the phenomena under study. However, using traditional OLAP tools, the spatial dimensions are treated like any other descriptive dimension, without consideration for the cartographic component of the data. OLAP tools have serious limitations in the support of spatio-temporal analysis (no spatial visualization, practically no spatial analysis, no map-based exploration of data, etc.).

Data visualization facilitates the extraction of knowledge from the complexity of the spatio-temporal phenomena and processes being analyzed, as well as offering a better understanding of the structure and relationships existing within the dataset. In the context of information exploration, maps and graphics do more than visualize data; they are active instruments in the end-user's thinking process (Rivest et al., 2001). Without a cartographic display, OLAP tools lack an essential feature which could help spatio-temporal exploration and analysis. A SOLAP tool remedies this limitation because it supports the geometric spatial aspects of the data to be explored. These spatial aspects are visualized and explored cartographically.

A SOLAP system can be defined as a visual platform built especially to support rapid and easy spatio-temporal analysis and exploration of data. It follows a multidimensional approach that is available in cartographic displays, as well as in tabular and diagram displays (Bédard et al., 2001). This makes a SOLAP tool a good candidate to explore the outputs of our geosimulations.

The outputs of the shopping behavior geosimulation are obtained using software agents called observer agents . These data are then recorded in files. Unfortunately, they cannot be used directly by the OLAP/SOLAP techniques, but must first be transformed and stored in a specific database structure ( data cube or hypercube ) in order to facilitate their exploration. The transformation of the data is not easy and is made in three steps: (1) we transform the data generated by the traffic observer agents into several files using a program that we developed for this purpose using Microsoft Visual Basic 6.0, each file containing data about one variable (dimension) of the shopper agent (e.g., age_group, gender, etc.); (2) we create the structure of the data cube using Microsoft SQL Server 7.0; and (3) we copy the data from the files into the data cube using SQL queries. This transformation requires some expertise concerning databases, OLAP techniques, data warehouses, etc. The content of this data cube is then explored and analyzed using the OLAP/SOLAP techniques and the results are presented to the users (see Fig. 8.16).

Fig 8.17 presents a simplified view of the transformed database for the shopping behavior case study. In this database, we can distinguish two types of tables: the fact table , which contains the measures that will be analyzed, and the dimension tables , which contain the data of each dimension hierarchy (e.g., the gender hierarchy has one root (All_gender) and two nodes (male and female)). The database structure contains some non-spatial dimensions (age group, gender) of the shopper agents as well as a spatial dimension that contains the stores and corridors of the mall.
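A minimal, illustrative sketch of such a star-schema structure (using Python’s sqlite3 instead of Microsoft SQL Server 7.0; all table names, column names, and rows are hypothetical) with one fact table, three dimension tables, and a simple aggregation query could look as follows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables: one per analysis axis, each storing a small hierarchy.
cur.execute("""CREATE TABLE dim_gender (
    gender_id INTEGER PRIMARY KEY, gender TEXT, all_gender TEXT)""")
cur.execute("""CREATE TABLE dim_age_group (
    age_group_id INTEGER PRIMARY KEY, age_group TEXT, all_ages TEXT)""")
cur.execute("""CREATE TABLE dim_location (
    location_id INTEGER PRIMARY KEY, name TEXT, kind TEXT)""")  # store or corridor

# Fact table: one measure (nb_shoppers) keyed by the dimension tables.
cur.execute("""CREATE TABLE fact_visits (
    gender_id INTEGER, age_group_id INTEGER, location_id INTEGER,
    nb_shoppers INTEGER)""")

# Load a few illustrative rows (in the real prototype this data would come
# from the files generated by the observer agents).
cur.executemany("INSERT INTO dim_gender VALUES (?, ?, ?)",
                [(1, "male", "All_gender"), (2, "female", "All_gender")])
cur.execute("INSERT INTO dim_age_group VALUES (1, '18-25', 'All_ages')")
cur.executemany("INSERT INTO dim_location VALUES (?, ?, ?)",
                [(1, "Sears", "store"), (2, "Wall-Mart", "store")])
cur.executemany("INSERT INTO fact_visits VALUES (?, ?, ?, ?)",
                [(1, 1, 1, 30), (2, 1, 1, 24), (1, 1, 2, 45), (2, 1, 2, 37)])

# A simple aggregation over the gender dimension, per store.
cur.execute("""SELECT l.name, SUM(f.nb_shoppers)
               FROM fact_visits f JOIN dim_location l USING (location_id)
               GROUP BY l.name""")
print(cur.fetchall())
```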

In the following points, we present some advantages of the OLAP/SOLAP techniques for exploring and analyzing geosimulation outputs.

- Spatial analysis:

Using the OLAP/SOLAP tool proposed by (Rivest et al., 2004), we can analyze and explore the simulation outputs of the shopping behavior geosimulation. We can perform spatial analyses and visualize the distribution of shopper agents by store or corridor, according to the non-spatial variables or ‘dimensions’ of these agents. What is important when using the OLAP/SOLAP techniques is that the visualization is made using the geographic entities of the simulated environment. For example, the user can visualize, cartographically, the distribution of shopper agents in five major stores of Square One (Wall-Mart, Sears, Zellers, Old Navy, and The Bay) based upon the age group and gender dimensions. Fig 8.18 shows the distribution of shopper agents that are between 51 and 65 years old for the five major stores, using the SOLAP tool. In this figure, we can only see the non-null distributions (8 for Sears and 15 for Wall-Mart).

- Multi-variables (multidimensional) analysis:

Using the OLAP/SOLAP tool, we can perform multi-variables analyses. Therefore, we can analyze the combination of non-spatial variables coming from the shoppers and spatial variables belonging to the simulated spatial environment (the mall). This is relevant because it gives the users ideas about the interactions of the shoppers with their environment (the mall). For example, if the user wants to see which categories of shoppers visit the other stores (Zellers, Old Navy, and The Bay) in terms of the age group dimension, he/she can use the OLAP/SOLAP drill-up operation on the age group dimension in order to see the distribution for all the ages (see Fig 8.19). The user can see that Sears and Wall-Mart are the most visited by shopper agents that are between 18 and 25 years of age (54 for Sears and 82 for Wall-Mart). One can also observe that the Zellers store is visited by only one shopper agent that is between 36 and 50 years of age. In this figure, the white areas are not visited by this category of shoppers (the distribution is null), while the colored areas correspond to the most visited (orange) and less visited (yellow) ones.

- Advanced exploration/visualization modes:

The OLAP/SOLAP tool offers its users advanced exploration and visualization capabilities to present the analysis results. In the following examples, we present some of these capabilities.

■ The OLAP/SOLAP tool allows the user to use the chart display in order to study the distribution based upon all the age groups (Fig 8.20). In this case, we only focus on the non-null distributions and we can represent the 6 age groups in the same bar chart.

■ It can also present maps of the distribution for different dimensions such as all genders and age groups. In Fig 8.21, we show a cartographic representation, comprising two maps (one for male and another for female agents), that include superimposed pie charts to present the distribution of each age group.

■ It is also possible to show the same information, but in another way by using multimaps (one for each age group) and pie charts to represent gender (see Fig 8.22).

■ The SOLAP tool also allows the user to visualize maps of the distribution of shopper agents that are based on combinations of members from different dimensions. Fig 8.23 shows 14 maps, each one representing the distribution of shopper agents for a combination of members from the age group and the gender dimensions (7 age groups and 2 genders).

■ The SOLAP tool is very flexible in terms of visualization and exploitation of spatial and non-spatial data. It can display data using maps, pie charts, histograms, or other visualization modes. Fig 8.24 offers an example of this visualization flexibility, where the same analysis is displayed using superimposed pie charts or histograms on maps.


In this sub-section, we presented how we can use the OLAP/SOLAP technique to analyze multiagent geosimulations. This technique is profitable because: 1) OLAP allows users and analysts to analyze data in the way they think, simultaneously across multiple variables called dimensions (Codd et al., 1993); 2) the multidimensional approach of OLAP is more in agreement with the end user’s mental model (Yougworth, 1995); 3) we can take into account the spatial aspect of the data to be analyzed using a recent extension of OLAP called SOLAP (Bédard et al., 1997); and 4) the OLAP/SOLAP techniques offer advanced visualization functionalities for the analysis results. These results are presented directly on the map (GIS), with different levels of detail and in different modes. This dynamic aspect allows the user to create several displays (maps, tables, and diagrams) from the dataset without having to store each display individually.

Unfortunately, the OLAP/SOLAP technique still presents some limitations because:

- it requires a transformation of the output data generated by the geosimulation in order to create a data cube. This transformation is not obvious and requires expertise in databases, OLAP techniques, data warehouses, etc.

- it cannot be used to analyze outputs generated by more than one simulation execution at once. What’s more, for each scenario, we need to transform the data generated by the geosimulation into a data cube. Due to this limitation, the user cannot simultaneously analyze several simulation executions in order to compare different simulation scenarios. This limitation is critical for geosimulation users because, as we will see in Chapter 10, comparing geosimulation scenarios is the basis of the use of geosimulation as a decision-making tool.

This chapter aimed to present and illustrate the method’s step in which we collect and analyze output data generated from the geosimulation. In the first sub-step, we showed how we benefit from agent technology to collect spatial and non-spatial data from the geosimulation. In the second sub-step, we presented how we can analyze the geosimulation outputs using some analysis techniques and tools. We presented three analysis techniques and tools and illustrated them using the shopping behavior case study. These tools or techniques are: (1) classical or traditional statistical techniques, (2) our own analysis tool, and (3) the OLAP/SOLAP techniques. For each technique or tool, we presented its advantages and limitations. Table 8.1 summarizes these advantages and limitations based upon some characteristics which must be considered in the analysis of geosimulation outputs.

Based upon the results presented in Table 8.1, we can conclude that:

- the OLAP/SOLAP technique can be used to manipulate and explore the outputs of one geosimulation execution;

- our tool can be used to analyze and exploit the geosimulation outputs, though with limited exploration/visualization capabilities. It can also be used to compare the executions of several simulation scenarios. This makes the simulation more useful for the decision-making process concerning the simulated phenomenon or the simulated environment (see Chapter 10).

To summarize, in this chapter we presented a promising technique that can be used to collect spatial and non-spatial output data from a geosimulation prototype. This technique is based upon the concept of observer agents, whose mission is to observe some aspects of the geosimulation execution and to record data in specific files or databases. The advantages of using such a technique represent a contribution of this work to the simulation field.

This chapter also presented our new analysis technique and tool that can be used to analyze the outputs generated from the geosimulation. This technique/tool differs from existing techniques/tools such as those presented by (Sanchez, 2001), (Kelton, 1997), (Alexopoulos et al., 1998), (Alexopoulos, 2002), and (Seila, 1992) because it takes into account the spatial aspects of the data to be analyzed, which is fundamental to a geosimulation study. Furthermore, the analysis results are presented and exploited spatially on the simulated environment, which is closer to the users’ mental models. The characteristics of such a technique/tool let us consider it a contribution to the simulation field. The chapter also presented how we exploit an existing technique called OLAP/SOLAP in order to analyze and explore spatial and non-spatial data generated from the geosimulation. The idea of coupling a multiagent geosimulation application with OLAP/SOLAP tools is also an original contribution to the simulation field.

The next chapter of the dissertation presents the next two steps of our proposed method, which aim, respectively, to (1) verify and validate the geosimulation models and (2) test and document the geosimulation. It also illustrates these steps using the shopping behavior case study.
