33 Virtual City Simulator for Education, Training, and Guidance
Hideyuki Nakanishi
Department of Social Informatics, Kyoto University, Kyoto 606-8501, Japan
[email protected]
http://www.lab7.kuis.kyoto-u.ac.jp/~nuka/
33.1 Introduction
Since smooth evacuation is important to safeguard our lives, we are taught how to evacuate in preparation for a disaster. For example, fire drills are conducted in schools. However, such sporadic and prearranged real-world training can only give us very limited experience. Also, it is rare to conduct fire drills in large-scale public spaces such as central railway stations, even though they are places where vast numbers of people gather. Crowd simulations should be used to compensate for this lack of opportunities for experience. Even if we have learned how to evacuate before an emergency happens, we need appropriate guidance during such an emergency. For example, large buildings have to be equipped with many emergency exits and their signs. However, these architectural guidance objects are not very flexible, and human guidance is necessary to help an escaping crowd while adapting to the moment's needs. Crowd simulations should also be used to assist such guidance tasks. Crowd simulations aimed at learning about evacuation and at guiding escaping people would thus be very beneficial.
Multi-agent simulations are already known as a technology that deals effectively with the complex behavior of escaping crowds [9, 28]. However, conventional simulations are not tailored for learning or guiding an evacuation. Since they are designed only for analyzing crowd behavior, they do not take human involvement much into account. For example, crowds are usually represented as moving particles, and it is not easy to interpret such a symbolic representation. Moreover, it is almost impossible for users to become part of the simulated crowds and experience a virtual evacuation. To compensate for this limitation, and as part of the Digital City Project [12], we have developed "FreeWalk" [25], a virtual city simulator that allows human involvement. In FreeWalk, multi-agent simulations that include human beings can be designed.
In this paper, I describe FreeWalk's design and how it is used for learning and guidance purposes. First, the capability to involve humans is described. Next, an experiment to evaluate its effectiveness for learning is explained. Finally, the first prototype of a guidance system is introduced.
33.2 FreeWalk
Virtual training environments have already been used for single-person tasks (e.g. driving vehicles) and are becoming popular for multi-party tasks because they can significantly decrease the cost of group training. In these environments, it is easier to gather many trainees since they participate in the training as avatars by entering the virtual space through a computer network. In addition, it is possible to practice a dangerous task repeatedly since trainees are inherently safe. In virtual environments, "social agents", i.e. software agents that have social interaction with people [24], play an important role in the following two ways: 1) By sharing the same group behavior, social agents become colleagues of human trainees and can decrease the number of human participants necessary to carry out a large-scale group training. 2) Social agents can play a predefined role within the training. Scripted training scenarios enable social agents to perform their assigned roles [6]. Human participants can then learn something through the interaction with the social agents, which behave according to the training scenario.
We have developed a platform for simulating social interaction in virtual space called FreeWalk, whose primary application is virtual training. We integrated diverse technologies related to virtual social interaction, e.g. virtual environments, visual simulations, and lifelike characters [30]. In FreeWalk, lifelike characters enable virtual collaborative events such as virtual meetings, training sessions, and shopping in distributed virtual environments. You can conduct distributed virtual training [8, 21, 38] in which lifelike characters act as the colleagues of human trainees [32]. FreeWalk is not only for training but also for communication and collaboration [7, 19, 33]. You can use lifelike characters as the facilitators of virtual communities [11]. These characters and the human participants can use verbal and nonverbal communication skills to talk with one another [5]. FreeWalk can also be a browser of 3D geographical contents [20] in which lifelike characters guide your navigation [16] and populate the contents [36].
To allow users to be involved in a multi-agent simulation, each virtual human in FreeWalk can be either an avatar or an agent. 'Avatar' means a virtual human manipulated by a user through the keyboard, mouse, and other devices. 'Agent' means a virtual human controlled by an outside program connected to FreeWalk. FreeWalk has a common interaction model for both agents and avatars while providing different interfaces for them, so they can interact with each other based on the same model. Agents are controlled through the application program interface (API), and avatars are controlled through the user interface (UI); otherwise, FreeWalk does not distinguish agents from avatars. Figure 33.1 roughly shows the distributed architecture of FreeWalk. An agent is controlled through the platform's API. A human participant enters the virtual space as an avatar, which he/she controls through the UI devices connected to the platform. Each character can be controlled from any client.

Fig. 33.1. Architecture of FreeWalk

FreeWalk uses a hybrid architecture in which the server administrates only the list of current members existing in the virtual space and each client administrates the current states of all characters. This architecture enables agents and people to socially interact with each other based on the same interaction model in a distributed virtual space.
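The common interaction model can be sketched as follows. This is an illustrative sketch only, not FreeWalk's actual API: the class and method names are assumptions. The point is that one character type is driven identically whether an agent program (API side) or user input events (UI side) issue its commands.

```python
# Illustrative sketch of one character type shared by agents and avatars;
# only the controller differs. Names are assumptions, not FreeWalk's API.

class Character:
    """A virtual human; the simulator updates it the same way regardless
    of whether an agent program or a human user issues its commands."""
    def __init__(self, name):
        self.name = name
        self.position = (0.0, 0.0)
        self.pending = []              # queued actions, e.g. ("walk", dx, dy)

    def request(self, action, *args):
        self.pending.append((action, args))

    def step(self):
        # Apply all queued actions; the physics/animation layer of the
        # real platform would run here.
        while self.pending:
            action, args = self.pending.pop(0)
            if action == "walk":
                dx, dy = args
                x, y = self.position
                self.position = (x + dx, y + dy)

class AgentController:
    """API side: an external program drives the character."""
    def __init__(self, character):
        self.character = character
    def act(self):
        self.character.request("walk", 1.0, 0.0)   # scripted behavior

class AvatarController:
    """UI side: user input events drive the same character type."""
    def __init__(self, character):
        self.character = character
    def on_key(self, key):
        moves = {"up": (0.0, 1.0), "right": (1.0, 0.0)}
        if key in moves:
            self.character.request("walk", *moves[key])

agent = Character("agent-1")
AgentController(agent).act()
agent.step()

avatar = Character("avatar-1")
AvatarController(avatar).on_key("up")
avatar.step()
```

Both characters end up updated through the same `request`/`step` path, which is the sense in which the platform "does not distinguish agents from avatars".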
Currently, FreeWalk is connected with the scenario description language "Q" [13]. It takes a long time to construct agents that can socially interact with people, since such agents need to play various roles and each role needs its specific behavioral repertory. We thought it should be easier to design the external role of an agent instead of its internal mechanism, while previous studies had focused on the internal mechanism rather than on the external role [16]. So Q is a language for describing an agent's external role through an "interaction scenario", which is an extended finite state machine whose input is the perceived cues, whose output is an action, and whose states correspond to scenes. Each scene includes a set of interaction rules, each of which is a pair consisting of a conditional cue and a consequent series of actions. Each rule is of the form: "if the agent perceives the event A, then the agent executes the actions B and C." FreeWalk agents behave according to the assigned scenario. FreeWalk and the language processor of Q are connected by a shared memory, through which the Q processor calls FreeWalk's API functions to evaluate the cues and actions described in the current scene.
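The scene/rule structure above can be illustrated with a tiny interpreter. This is a hedged Python sketch of the idea (Q itself is a separate scenario language, and the cue, action, and scene names below are invented for illustration): each state is a scene holding rules of the form "if cue, then actions, then next scene".

```python
# Sketch of an interaction scenario as an extended finite state machine.
# Scene names, cues, and actions are illustrative, not from a real scenario.

scenario = {
    "waiting": [
        # (conditional cue, consequent actions, next scene)
        ("hear_alarm", ["say:follow me", "walk_to:exit"], "leading"),
    ],
    "leading": [
        ("reach_exit", ["say:this way out"], "done"),
    ],
    "done": [],
}

def run(scenario, start, events):
    """Feed perceived cues to the scenario; collect the executed actions."""
    scene, log = start, []
    for event in events:
        for cue, actions, next_scene in scenario[scene]:
            if cue == event:
                log.extend(actions)     # the agent executes the actions
                scene = next_scene      # and the scenario changes scene
                break
    return scene, log

final, actions = run(scenario, "waiting", ["hear_alarm", "reach_exit"])
# final == "done"
# actions == ["say:follow me", "walk_to:exit", "say:this way out"]
```

In the real platform the cue test and each action would be API calls into the simulator via the shared memory, rather than strings.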
FreeWalk's virtual city makes a multi-agent simulation more intuitive and understandable. The spatial structure and the crowd behavior of the simulation are represented as 3D photo-based models. Since the camera viewpoint can be changed freely, users can observe the simulation through a bird's-eye view of the virtual city and can also experience it by controlling their avatars using first-person views. FreeWalk uses neither prepared gait animations nor simplified collision models, in order to keep the correspondence between crowd behavior and the graphical representation of the virtual city. The VRML model that is used for drawing the virtual city is also used as a geometric model to detect collisions with the spatial structure and to generate gait animations. Animations are generated based on a hybrid algorithm of kinematics and dynamics [37]. To reduce the building cost, each VRML model was constructed as the combination of pictures taken by digital cameras and a simple geometric model based on the floor plan. A simple model also helps to reduce the workload of collision detection.
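Collision detection against a simple floor-plan geometry can be sketched in 2D: a walker's step is rejected when the straight path to the target crosses a wall segment. This is an illustrative toy, not FreeWalk's actual collision code, which works against the full VRML model.

```python
# Toy 2D collision test against floor-plan wall segments (illustrative only).

def ccw(a, b, c):
    """Signed area test: >0 if a->b->c turns counterclockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    return (ccw(p1, p2, q1) * ccw(p1, p2, q2) < 0 and
            ccw(q1, q2, p1) * ccw(q1, q2, p2) < 0)

def try_step(pos, target, walls):
    """Move to target only if no wall blocks the straight path."""
    for w1, w2 in walls:
        if segments_intersect(pos, target, w1, w2):
            return pos          # blocked: stay put
    return target

walls = [((5.0, -1.0), (5.0, 1.0))]              # one vertical wall at x = 5
print(try_step((0.0, 0.0), (4.0, 0.0), walls))   # → (4.0, 0.0): path is free
print(try_step((4.0, 0.0), (6.0, 0.0), walls))   # → (4.0, 0.0): wall blocks
```

Keeping the wall geometry simple in this way is exactly why a simple model "helps to reduce the workload of collision detection": each step tests only a few segments.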
It is also possible to represent the actual state of an existing real-world crowd in real time. To achieve this, it is necessary to synchronize the events simulated in FreeWalk with those occurring in the real world. FreeWalk provides an interface used to connect with a sensor network through the Internet. FreeWalk uses physical and social rules to robustly synchronize the movements of the human figures with those of the real-world crowd. Based on the positions captured by the sensors, FreeWalk determines the next position of the corresponding human figure. This next position is modified according to the social rules described in the Q language. (Examples of rules are flocking behaviors such as following others and keeping a fixed distance from them [31], and cultural behaviors such as forming a line to go through a ticket gate or forming a circle to have a conversation [17].) Then, the next position is modified again based on the pedestrian model to avoid collisions with others, walls, or pillars [28]. Finally, the gait animation is generated.
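The pipeline above (sensed position → social-rule adjustment → pedestrian-model collision avoidance → gait generation) can be sketched as a chain of small functions. The rules below are deliberately toy versions: the real system uses Q-described social rules and the pedestrian model of [28].

```python
# Toy sketch of the real-time synchronization pipeline; the rule bodies
# are illustrative stand-ins for the Q rules and the pedestrian model.

def apply_social_rules(pos, others, min_gap=1.0):
    """Toy flocking rule: keep at least min_gap from the nearest other."""
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = (dx * dx + dy * dy) ** 0.5
        if 0 < d < min_gap:
            scale = min_gap / d              # push out to the minimum gap
            return (ox + dx * scale, oy + dy * scale)
    return pos

def avoid_obstacles(pos, x_min=0.0, x_max=10.0):
    """Toy pedestrian-model step: clamp the position to the walkable area."""
    return (min(max(pos[0], x_min), x_max), pos[1])

def next_position(sensed, others):
    pos = apply_social_rules(sensed, others)   # Q-style social adjustment
    pos = avoid_obstacles(pos)                 # collision avoidance
    return pos          # a gait animation would be generated from this

print(next_position((0.5, 0.0), [(0.0, 0.0)]))  # → (1.0, 0.0): pushed to the gap
```

Each stage only modifies the proposed position, so noisy sensor data still produces plausible, collision-free motion for the human figures.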
33.3 Learning Evacuation
FreeWalk enables users to experience crowd behavior from a first-person view (FP in Figure 33.2). With this viewpoint they can practice decision making, since they control their avatars based on what their personal view shows them. But in FreeWalk users can also observe the overall crowd behavior from a bird's-eye view (BE in Figure 33.2). This view is more effective for understanding the overall crowd behavior than first-person views. Since both views have different advantages, we conducted an experiment to compare them, and also to examine their synergistic effects, in order to find out the best way to learn evacuation and in what respects FP is superior. In the experiment, we tested each view and the effect of viewing a combination of them both in different orders. We compared four groups: experiencing a first-person view (FP group); observing a bird's-eye view (BE group); experiencing a first-person view before observing a bird's-eye view (FP-BE group); and observing a bird's-eye view before experiencing a first-person view (BE-FP group). The subjects were 96 college students; 24 subjects were assigned to each group.
A previous real-world experiment [34] had given us a gauge to measure the subjects' understanding of the resulting crowd behavior. This experiment demonstrated how the following two evacuation methods cause different crowd behaviors: 1) In the follow-direction method, the leaders point their arms at the exit and shout, "the exit is over there!" to indicate the direction. They do not escape until all evacuees have gone out. 2) In the follow-me method, they do not indicate the direction. To a few of the nearest evacuees, they whisper, "follow me" and proceed to the exit. This behavior creates a flow toward the exit.

Fig. 33.2. Two views for learning evacuation (first-person view, FP; bird's-eye view, BE)

The evacuation simulation was constructed based on this previous experiment [22]. At the beginning of the simulation, everyone was in the left part of the room, which was divided into left and right parts by a center wall, as shown in the BE view of Figure 33.2. Four leaders had to lead sixteen evacuees to the correct exit on the right side and prevent them from going out through the incorrect exit in the left part. In the FP simulations, six of the evacuees were subjects and the rest were agents, so four FP simulations were conducted in each of the three groups that included FP simulations. In the BE simulations, both evacuees and leaders were all agents. In the experiment, subjects observed and experienced the two different crowd behaviors caused by the two evacuation methods explained above. We used the resulting behaviors as questions and the causing methods as answers: in a quiz including 17 questions, subjects read the description of each crowd behavior and chose one of the two methods as an answer. They took the quiz before and after the experiment. We used a t-test to find significant differences between the scores of the pre- and post-quizzes. A significant difference meant that the subjects could learn the asked-about property of crowd behavior through their observation and experience.
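The analysis is a one-sided paired t-test on pre- vs post-quiz scores per question. The sketch below shows the computation with made-up numbers for a hypothetical group of 8 subjects (the study itself used 24 per group, giving df = 23 as in Table 33.1); it is an illustration of the statistic, not the study's data.

```python
# One-sided paired t-test sketch; the score vectors are invented for
# illustration and are NOT the experiment's data.
import math

def paired_t(pre, post):
    """t statistic for paired samples, on the differences post - pre."""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

pre  = [0, 1, 0, 1, 0, 0, 1, 0]   # hypothetical per-subject correctness
post = [1, 1, 1, 1, 0, 1, 1, 1]
t = paired_t(pre, post)
print(round(t, 2))   # → 2.65; compare to the one-sided critical value, df = 7
```

A t value exceeding the critical value at df = n - 1 is what the table's *, **, and *** markers denote at the .05, .01, and .001 levels.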
Table 33.1 summarizes the results of the t-test on nine questions. Since no group could answer the other eight questions correctly, they are omitted. Even though the results depend on the design of the quiz, it seems clear that a bird's-eye observation was necessary to understand the crowd's behavior. The FP group could not answer questions no. 3 to 9, which were related to the overall crowd behavior. However, the first-person experience was not worthless. It is interesting that the BE-FP group could answer questions no. 6 and 7, which the BE and FP-BE groups could not. This result implies that background knowledge of the overall behavior enabled the subjects to infer individuals' actions (the following behavior) and their outcome (the formation of a group) from their first-person experiences. They could understand how they interacted with other evacuees because they controlled their avatars themselves. They could also understand that the others interacted with each other in the same way because they knew the overall behavior beforehand. The ranking of the four ways to learn evacuation is illustrated in Figure 33.3. BE-FP was the best way, BE and FP-BE were next, and FP was the worst. We found that the best way is to observe first and then experience.
33.4 Guiding Evacuation
Our living space consists of our home, office and public spaces. Studies on remote communication have predominantly focused on the first two spaces. The primary issue of these studies is how to use computer network technologies to connect distributed spaces. These studies have proposed various designs and technologies but share the same goal, which is the reproduction of face-to-face (FTF) communication environments. For example, media space research tried to connect distributed office spaces [2]. Telepresence and shared workspace research explored a way to integrate distributed deskwork spaces [4, 14]. Spatial workspace collaboration research dealt with spatially configured workspaces [18]. CVE research proposed using virtual environments as virtual workspaces [1]. And this kind of effort still continues [15]. A recent additional issue is how to use the technologies for enhancing collocated spaces [10].

Table 33.1. Summary of the results of the quiz (one-sided paired t-test)

No.  Question (the answer to all items is the follow-me method)       FP      BE      FP-BE   BE-FP
1    Leaders are the first to escape.                                 4.3***  2.2*    2.3*    4.0***
2    Leaders do not observe evacuees.                                 2.8**   4.4***  4.0***  4.2***
3    Leaders escape like evacuees.                                    1.6     2.2*    1.9*    2.9**
4    One's escape behavior is caused by others' escape behavior.      1.2     2.1*    3.3**   2.9**
5    Nobody prevents evacuees from going to the incorrect exit.       1.6     4.9***  3.7***  4.5***
6    Evacuees follow other evacuees.                                  1.3     1.0     0.7     2.1*
7    Evacuees form a group.                                           1.6     0.5     1.2     1.9*
8    Leaders and evacuees escape together.                            0.7     2.0*    0.2     3.4**
9    Evacuees try to behave the same as other evacuees.               0.9     2.5*    1.5     0.9

*p<.05, **p<.01, ***p<.001 (df=23)

Fig. 33.3. Ranking of the four ways to learn evacuation
We tackled a third but increasingly important issue: how to use the technologies to support remote communication in large-scale public spaces such as a central railway station. Those spaces have characteristic participants: staff administrating the space and visitors passing through it. Remote communication between them is important because vast numbers of people gather there and appropriate guidance for crowd control is critical. Currently, surveillance cameras and announcement speakers connect staff in a control room with visitors in a public space. The staff can observe the visitors thanks to the cameras and talk to them through the speakers. This traditional communication system is not enough for individual guidance. The off-site staff in a control room can give overall guidance to all the visitors, but on-site staff working in the public space are necessary to give location-based guidance to each visitor. We devised a new way to guide each visitor remotely and a new communication environment for it, since conventional environments that aimed at the reproduction of FTF communication cannot be adapted to the case where every visitor is a candidate for on-demand guidance.
The results of the evacuation learning experiment described in the previous section have several implications for the design of the communication environment. The surveillance cameras enable the staff to watch many fragmentary views of the public space. However, the results showed that bird's-eye views were better than first-person views. Thus, a single global view is better than a collection of fragmentary views. The announcement speakers can convey only uniform information to all the visitors.
However, the results showed that the group experiencing a first-person view could understand the situation much better if they observed the bird's-eye view after their first-person experience. This means that announced information should teach the visitors the overall situation surrounding them. Thus, the visitors need not only overall information, e.g. that a fire has broken out, but also site-specific information like "too many people are rushing onto the stairs in front of you." Another limitation of the announcement speakers is that they cannot support two-way communication. The most interesting result was that the best way to learn the crowd behavior was to observe it first and experience it afterward. This result implies that the staff can derive useful information from the visitors. Thus, two-way communication is better than one-way communication.
We built FreeWalk into an evacuation guidance system in which a public space is monitored by vision sensors instead of surveillance cameras and information is transmitted by mobile phones instead of announcement speakers. Figure 33.4 is a snapshot of our guidance system and of escaping passengers on a station platform with their mobile phones at hand. You can see a pointing person standing in front of a large-scale touch screen. Suppose that this person is a station staff officer working in a control room. The screen displays the bird's-eye view of the simulated station visualized by FreeWalk. The walking behavior of the human figures in the virtual station is generated according to the positional data transmitted from the real station, which is equipped with vision sensors that track the movements of the real-world escaping passengers. In the snapshot, the man is pointing at a human figure, which represents one of the passengers. When the touch screen detects this pointing operation, the system immediately activates the connection between the officer's headset and the passenger's mobile phone. This is possible because the headset is connected to a PC equipped with a special interface card that can control audio connections between the PC and several telephone lines. This simple coupling between the pointing operation and audio activation makes it easy for the staff to begin and end an instruction. As described above, the system provides the staff with a single global view of the public space and two-way communication channels with particular visitors, so that the staff can supply the visitors with site-specific information.
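The touch-to-talk coupling can be sketched as a small event handler: touching a figure opens the audio line to the corresponding passenger's phone, and releasing it closes the line. All class names, the figure-to-phone mapping, and the phone number below are illustrative assumptions, not the system's actual interfaces.

```python
# Hedged sketch of pointing-operation / audio-activation coupling.
# Names and the phone number are invented for illustration.

class AudioSwitch:
    """Stand-in for the telephony interface card controlling phone lines."""
    def __init__(self):
        self.open_lines = set()
    def connect(self, number):
        self.open_lines.add(number)
    def disconnect(self, number):
        self.open_lines.discard(number)

class GuidanceScreen:
    def __init__(self, switch, phone_book):
        self.switch = switch
        self.phone_book = phone_book    # human-figure id -> phone number
        self.active = None
    def on_touch(self, figure_id):
        """Begin an instruction: open the channel to the touched figure."""
        self.active = self.phone_book[figure_id]
        self.switch.connect(self.active)
    def on_release(self):
        """End the instruction: close the channel."""
        if self.active is not None:
            self.switch.disconnect(self.active)
            self.active = None

switch = AudioSwitch()
screen = GuidanceScreen(switch, {"passenger-7": "075-000-0000"})
screen.on_touch("passenger-7")      # officer's headset is now linked
screen.on_release()
print(switch.open_lines)            # → set(): the instruction has ended
```

Binding channel setup and teardown to a single pointing gesture is what makes beginning and ending an instruction cheap for the staff.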
Fig. 33.4. Evacuation guidance system

Kyoto Station in Kyoto City is a central railway station with more than 300,000 visitors per day. To install our evacuation guidance system in the station, we attached a vision sensor network: 12 sensors in the concourse area and 16 sensors on the platform. Figure 33.5(a) is the floor plan, on which the black dots show the sensors' positions, and Figure 33.5(b) shows how they have been installed. The vision sensor network can track passengers between the platform and the ticket gate. In Figure 33.5(c), you can see a CCD camera and a reflector with a special shape [23]. If we could expand the field of view (FOV) of each camera, we could reduce the number of required cameras. However, a widened FOV causes barrel distortion in the images taken by conventional cameras. The reflector of our vision sensor can eliminate such distortion: its shape is designed so that a plane perpendicular to the camera's optical axis is projected perspectively onto the image plane. As shown in Figure 33.5(d), this optical contrivance makes it possible to have a large FOV without distortion. From the images taken by the cameras, the regions of moving objects are extracted using the background subtraction technique. The position of each moving object is determined based on geographical knowledge, including the position of the cameras, the occlusion edges in the views of the cameras, and the boundaries of walkable areas.
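Background subtraction itself is simple to illustrate: pixels that differ from a reference background frame by more than a threshold are marked as moving-object regions, and an object position can then be estimated from a region's centroid. This is a toy grayscale sketch; a deployed system would update the background adaptively and handle noise and occlusion.

```python
# Toy background subtraction on a grayscale frame (lists of pixel rows).

def subtract_background(frame, background, threshold=30):
    """Binary mask: 1 where the frame differs enough from the background."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def centroid(mask):
    """Estimate an object's image position as the mask's centroid."""
    points = [(x, y) for y, row in enumerate(mask)
                     for x, v in enumerate(row) if v]
    if not points:
        return None
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

background = [[10] * 4 for _ in range(4)]       # static reference frame
frame = [row[:] for row in background]
frame[1][2] = 200                               # bright moving object at (2, 1)
mask = subtract_background(frame, background)
print(centroid(mask))                           # → (2.0, 1.0)
```

The geographical knowledge mentioned above (camera poses, occlusion edges, walkable boundaries) is what turns such image coordinates into positions on the station floor plan.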
Fig. 33.5. Virtual Kyoto Station: (a) floor plan, (b) installation, (c) vision sensor, (d) wide-FOV image, (e) simulated passengers
Figure 33.5(e) is a screenshot of the simulated passengers synchronized with their retrieved positions.
We named the communication form supported by our guidance system "transcendent communication" [26]. In transcendent communication, a user watches the bird's-eye view of the real world to grasp its situation and points at a particular location within the view to select a person or group of people to talk to. Figure 33.6 explains the difference between distributed communication and transcendent communication. In distributed communication, a virtual space is used for connecting real spaces, each of which contains its own participant. The virtual space is thus a synthetic space, which transmits nonverbal cues such as proxemics and eye contact [27]. The goal of distributed communication is the reproduction of collocated communication. In transcendent communication, a virtual space is used for visualizing the bird's-eye view of the real space. The virtual space is thus a projective space, which represents the real-world situation. The goal of transcendent communication is not the reproduction of collocated communication but the production of asymmetric communication. Collocated communication is symmetric, since everyone has his/her first-person view to observe the others and can control the conversation floor. In distributed communication, participants should basically be reciprocal, since privacy is an important issue and intrusiveness should be avoided [3, 35]. On the contrary, in transcendent communication, the bird's-eye view helps a transcendent participant, e.g. someone from the station staff, become an intrusive observer who can administrate the immanent participants, in this case the passengers. Transcendent participants need to observe immanent participants, but immanent participants do not need to observe transcendent participants. And only transcendent participants can control communication channels.
Fig. 33.6. Transcendent communication (collocated, distributed, and transcendent configurations)
Figure 33.7 presents an example of "transcendent guidance", that is, guidance via transcendent communication. In Figure 33.7(a), the staff is watching a virtual station platform where too many people are rushing onto the stairs at the right, while the left stairs are not crowded at all. The staff determines that a group of people following behind in the right crowd is not safe and should be guided toward the left stairs. In Figure 33.7(b), the staff connects communication channels with the group to begin guidance, instructing them to switch their destination from the right stairs to the left stairs, as shown in Figure 33.7(c). The current implementation in Kyoto Station does not allow us to do exactly what this example shows, due to technological limitations. The image processing function of the vision sensor network does not work if the platform is crowded, and there is no mechanism to automatically track the phone numbers of the passengers. However, technological advances in perceptual user interfaces (PUI) [29] may soon be able to eliminate these implementation issues.
Communication channel control should be very efficient in order to give good guidance. To explore its interaction design, we implemented two different kinds of user interfaces. In the GUI version shown in Figure 33.4, touching a character and talking are coupled. In the PUI version shown in Figure 33.8, we used an eye-tracking device instead of a touch screen to couple gazing at a character and talking. The PUI version gives a much more seamless feeling than the GUI version, since a vocal channel is established immediately when a user looks at the character to talk to. However, gaze is a single pointing device, while a touch screen enables a user to use at least two devices, i.e. his or her two hands. Even if the screen can only detect a single spot being touched at a given time, two hands are more efficient than a single hand or a gaze.
Fig. 33.7. Transcendent guidance: (a) watch, (b) connect, (c) guide
Fig. 33.8. Gaze-and-talk interaction
33.5 Conclusion
We presented two examples of crisis management applications of our virtual city simulator, FreeWalk. The first application is virtual evacuation simulation, where learners can observe multi-agent crowd behavior simulations described in the Q language and also take part in the simulations as avatars. The second application is the transcendent guidance system, which visualizes real-world pedestrians in the virtual city and enables location-based remote guidance. The key feature of our simulator is this inclusion of humans in crowd behavior simulations of urban spaces. In the simulations, each person can be simulated as an agent, an avatar, or a projective agent that visualizes context information retrieved from a real-world person walking around a smart environment.
The development of the two applications showed that the design principles of real-world systems could be derived from virtual simulations. We designed the transcendent guidance system based on the result of the evacuation simulation experiment. The transfer of the design principles was made possible by the correspondence between the two different viewpoints (first-person and bird's-eye views) and the two different kinds of users (transcendent and immanent users). We think this study implies a new method of software design.
Acknowledgements. This work was conducted as part of the Digital City Project supported by the Japan Science and Technology Agency. I express my special gratitude to Toru Ishida, the project leader. This work would also have been impossible without the contributions of Satoshi Koizumi and Hideaki Ito. I express my thanks to the Municipal Transportation Bureau and General Planning Bureau of Kyoto City for their cooperation. I received a lot of support in the construction of the simulation environment from Toshio Sugiman, Shigeyuki Okazaki, and Ken Tsutsuguchi. Thanks to Hiroshi Ishiguro, Reiko Hishiyama, Shinya Shimizu, Tomoyuki Kawasoe, Toyokazu Itakura, CRC Solutions, Mathematical Systems, and CAD Center for their efforts in the development and experiment. The source code of FreeWalk and Q is available at http://www.lab7.kuis.kyoto-u.ac.jp/freewalk/ and http://www.digitalcity.jst.go.jp/Q/.
References
1. Benford, S., Greenhalgh, C., Rodden, T. and Pycock, J. Collaborative Virtual Environments. Communications of the ACM, 44(7), 79-85, 2001.
2. Bly, S. A., Harrison, S. R. and Irwin, S. Media Spaces: Bringing People Together in a Video, Audio, and Computing Environment. Communications of the ACM, 36(1), 28-47, 1993.
3. Borning, A. and Travers, M. Two Approaches to Casual Interaction over Computer and Video Networks. International Conference on Human Factors in Computing Systems (CHI91), 13-19, 1991.
4. Buxton, W. Telepresence: Integrating Shared Task and Person Spaces. Canadian conference on Graphics Interface (GI92), 123-129, 1992.
5. Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjalmsson, H. and Yan, H. Embodiment in Conversational Interfaces: Rea. International Conference on Human Factors in Computing Systems (CHI99), 520-527, 1999.
6. Cavazza, M., Charles, F. and Mead, S. J. Interacting with Virtual Characters in Interactive Storytelling. International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS2002), 318-325, 2002.
7. Greenhalgh C. and Benford S. Massive: A Collaborative Virtual Environment for Teleconferencing. ACM Transactions on Computer-Human Interaction, 2(3), 239-261, 1995.
8. Hagsand, O. Interactive Multiuser VEs in the DIVE System. IEEE MultiMedia, 3(1), 30-39, 1996.
9. Helbing, D., Farkas, I. J. and Vicsek, T. Simulating Dynamical Features of Escape Panic. Nature, 407(6803), 487-490, 2000.
10. Huang, E. M. and Mynatt, E. D. Semi-Public Displays for Small, Co-located Groups. International Conference on Human Factors in Computing Systems (CHI2003), 49-56, 2003.
11. Isbell, C. L., Kearns, M., Kormann, D., Singh, S. and Stone, P. Cobot in LambdaMoo: A Social Statistics Agent. National Conference on Artificial Intelligence (AAAI2000), 36-41, 2000.
12. Ishida, T. Digital City Kyoto: Social Information Infrastructure for Everyday Life. Communications of the ACM, 45(7), 76-81, 2002.
13. Ishida, T. Q: A Scenario Description Language for Interactive Agents. IEEE Computer, 35(11), 54-59, 2002.
14. Ishii, H., Kobayashi, M. and Arita, K. Iterative Design of Seamless Collaboration Media, Communications of the ACM, 37(8), 83-97, 1994.
15. Jancke, G., Venolia, G. D., Grudin, J., Cadiz, J. J. and Gupta, A. Linking Public Spaces: Technical and Social Issues. International Conference on Human Factors in Computing Systems (CHI2001), 530-537, 2001.
16. Johnson, W.L., Rickel, J.W. and Lester, J.C. Animated Pedagogical Agents: Face-to-Face Interaction in Interactive Learning Environments. International Journal of Artificial Intelligence in Education, 11, 47-78, 2000.
17. Kendon, A. Spatial Organization in Social Encounters: the F-formation System. A. Kendon, Ed., Conducting Interaction: Patterns of Behavior in Focused Encounters, Cambridge University Press, 209-237, 1990.
18. Kuzuoka H. Spatial Workspace Collaboration: a SharedView Video Support System for Remote Collaboration Capability. International Conference on Human Factors in Computing Systems (CHI92), 533-540, 1992.
19. Lea, R., Honda, Y., Matsuda, K. and Matsuda, S. Community Place: Architecture and Performance. Symposium on Virtual Reality Modeling Language (VRML97), 41-50, 1997.
20. Linturi, R., Koivunen, M. and Sulkanen, J. Helsinki Arena 2000 - Augmenting a Real City to a Virtual One. T. Ishida, K. Isbister Ed., Digital Cities, Technologies, Experiences, and Future Perspectives. Lecture Notes in Computer Science 1765, Springer-Verlag, New York, 83-96. 2000.
21. Macedonia, M. R., Zyda, M. J., Pratt, D. R., Barham, P. T. and Zeswitz, S. NPSNET: A Network Software Architecture for Large-Scale Virtual Environments. Presence, 3(4), 265-287, 1994.
22. Murakami, Y., Ishida, T., Kawasoe, T. and Hishiyama, R. Scenario Description for Multi-agent Simulation. International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS2003), 369-376, 2003.
23. Nakamura, T. and Ishiguro, H. Automatic 2D Map Construction using a Special Catadioptric Sensor. International Conference on Intelligent Robots and Systems (IROS2002), 196-201, 2002.
24. Nakanishi, H., Nakazawa, S., Ishida, T., Takanashi, K. and Isbister, K. Can Software Agents Influence Human Relations? Balance Theory in Agent-mediated Communities. International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS2003), 717-724, 2003.
25. Nakanishi, H. FreeWalk: A Social Interaction Platform for Group Behavior in a Virtual Space. International Journal of Human Computer Studies, 60(4), 421-454, 2004.
26. Nakanishi, H., Koizumi, S., Ishida, T. and Ito, H. Transcendent Communication: Location-Based Guidance for Large-Scale Public Spaces. International Conference on Human Factors in Computing Systems (CHI2004), 655-662, 2004.
27. Okada, K., Maeda, F., Ichikawa, Y. and Matsushita, Y. Multiparty Videoconferencing at Virtual Social Distance: MAJIC Design. International Conference on Computer Supported Cooperative Work (CSCW94), 385-393, 1994.
28. Okazaki, S. and Matsushita, S. A Study of Simulation Model for Pedestrian Movement with Evacuation and Queuing. International Conference on Engineering for Crowd Safety, 271-280, 1993.
29. Pentland, A. Perceptual Intelligence. Communications of the ACM, 43(3), 35-44, 2000.
30. Prendinger, H. and Ishizuka, M. Life-Like Characters: Tools, Affective Functions, and Applications. Springer Verlag, 2004.
31. Reynolds, C. W. Flocks, Herds, and Schools: A Distributed Behavioral Model. International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH87), 25-34, 1987.
32. Rickel, J. and Johnson, W. L. Animated Agents for Procedural Training in Virtual Reality: Perception, Cognition, and Motor Control. Applied Artificial Intelligence, 13, 343-382, 1999.
33. Sugawara, S., Suzuki, G., Nagashima, Y., Matsuura, M., Tanigawa, H. and Moriuchi, M. Interspace: Networked Virtual World for Visual Communication. IEICE Transactions on Information and Systems, E77-D(12), 1344-1349, 1994.
34. Sugiman, T. and Misumi, J. Development of a New Evacuation Method for Emergencies: Control of Collective Behavior by Emergent Small Groups. Journal of Applied Psychology, 73(1), 3-10, 1988.
35. Tang, J. C. and Rua, M. Montage: Providing Teleproximity for Distributed Groups. International Conference on Human Factors in Computing Systems (CHI94), 37-43, 1994.
36. Tecchia, F., Loscos, C. and Chrysanthou, Y. Image-Based Crowd Rendering. IEEE Computer Graphics and Applications, 22(2), 36-43, 2002.
37. Tsutsuguchi, K., Shimada, S., Suenaga, Y., Sonehara, N. and Ohtsuka, S. Human Walking Animation based on Foot Reaction Force in the Three-dimensional Virtual World. Journal of Visualization and Computer Animation, 11(1), 3-16, 2000.
38. Waters, R. C. and Barrus, J. W. The Rise of Shared Virtual Environments. IEEE Spectrum, 34(3), 20-25, 1997.