The physiological mirror—a system for unconscious control of a virtual environment through physiological activity




Vis Comput (2010) 26: 649–657 DOI 10.1007/s00371-010-0471-9

ORIGINAL ARTICLE

The physiological mirror—a system for unconscious control of a virtual environment through physiological activity

Christoph Groenegress · Bernhard Spanlang · Mel Slater

Published online: 21 April 2010 © Springer-Verlag 2010

Abstract This paper introduces a system for real-time physiological measurement, analysis, and metaphorical visualization within a virtual environment (VE). Our goal is to develop a method that allows humans to unconsciously relate to parts of an environment more strongly than to others, purely induced by their own physiological responses to the virtual reality (VR) displays. In particular, we exploit heart rate, respiration, and galvanic skin response in order to control the behavior of virtual characters in the VE. Such unconscious processes may become a useful tool for storytelling, or may assist in guiding participants through a sequence of tasks in order to make the application more interesting, e.g., in rehabilitation. We claim that anchoring of subjective bodily states to a VR can enhance a person's sense of realism of the VR and ultimately create a stronger relationship between humans and the VR.

Keywords Virtual reality · Human–computer interaction · Physiological processing · Whole–body interaction

C. Groenegress · B. Spanlang · M. Slater (✉)
EVENT Lab, Departament de Personalitat, Avaluació i Tractaments Psicològics, Universitat de Barcelona, Barcelona, Spain
e-mail: [email protected]

M. Slater
ICREA—Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain

1 Introduction

In this paper, we describe a system that exploits multisensory correlations between humans and a virtual environment, specifically focusing on the unconscious physiological state of the human body. Suppose that changes in the environment reflect changes in the physiological state of the participant—their breathing, their heart activity, and their level of arousal. Would this enhance the probability of the illusion that what they are experiencing is "really happening"? In this respect, our general hypothesis can be formulated as follows: people tend to relate more to events and situations that relate back to them, and in our example events relate back to their physiological state. In the literature, similar ideas have been expressed as plausibility [23], correlational presence [9], or as a derivative of an early definition of social presence [13], and there is evidence that points toward their validity in VR [11, 22]. Essentially, this concept can be regarded as a physiological mirror, where a stream of physiological data from a person is used to control aspects of what is displayed in a virtual environment in a natural and, in principle, recognizable way. While it is difficult to accurately or even effectively visualize some physiological processes (e.g., heart rate), we can choose to express them metaphorically. We constructed a VE that responds to the participant's current physiological state and displays this in real time, thus creating a feedback loop.

Recently, physiological measurements such as heart rate (HR), heart rate variability (HRV), respiration, skin conductance (GSR), and the electroencephalogram (EEG) have become popular tools for analyzing people's performance in real and virtual environments [5, 14], and brain–computer interfaces (BCIs) have been shown to work for simple navigational and interaction tasks [10, 16, 17]. However, most examples require extensive training and are not an appropriate replacement for traditional interaction techniques. More complex tasks require devices that perform well under different conditions and, more importantly, are less time-consuming during training. In addition, the transitory nature of one's own physiological signals, including EEG, coupled with the fact that they express unconscious responses to a presented stimulus, renders it difficult to exploit them for arbitrary interaction tasks. Nonetheless, one possible use of physiology in VR relates to storytelling and the attempt to customize events for different people depending on their unconscious physiological response.

The outline of this paper is as follows. The next section introduces work relating to physiological measurements in VR. Section 3 gives an overview of the hardware setup and the system architecture. Section 4 explains in detail the software components used for processing physiological data as well as rendering it into the VE. In Section 5, we briefly discuss some performance measures, while Section 6 concludes with a discussion of the system.

2 Background

Physiological measurements deal with the capture and analysis of a human's physiological state. In medicine and clinical applications, they are often used to monitor a patient's health, while psychophysiology is the branch of psychology concerned with the study of physiological responses to behavioral stimuli [2, 7]. The most common measures are summarized in Table 1.

In VR, physiological measurements have received growing interest over the past ten or so years. The overwhelming majority of studies that employed some combination of these measures examine factors such as motion sickness, while others compare people's responses to virtual scenarios and real ones, many of which are social in nature. In experiments studying different immersive configurations, varying latency of the virtual reality system appears to have an effect on the physiological response to a VE [19], and there is a measurable effect when people are exposed to different immersive setups that may or may not provoke motion sickness [1]. Furthermore, HR and SCR have been used to detect arousal in participants when exposed to certain environments, such as flying or driving [15]. HR, SCR, skin temperature, and EMG have been used to assess physiological responses to a VE and whether those are comparable to real-world situations that were simulated in the VE. Meehan and colleagues [18] used HR, SCR, and body temperature to evaluate the physiological response to exposure to stressful and nonstressful situations in the virtual pit room [30]. Their results suggest that VEs can cause a physiological response that is comparable to the real-world equivalent. HR and SCR were also employed in a study investigating the human response to virtual characters in a social VE [8].

Table 1 Common physiological measures

Parameter         Measurement
Cardiovascular    Electrocardiography (ECG), with derived measures such as heart rate (HR) and heart rate variability (HRV)
Skin              Electrodermal activity, with derived measures such as the tonic level of skin conductance and the number of skin conductance responses (SCR); skin temperature
Muscle activity   Electromyography (EMG)
Respiration       Respiratory rate (RR)

A similar experiment [4] used HR, HRV, respiration, and GSR to evaluate physiological responses to photorealistic and cartoon-like virtual characters, indicating primarily that being exposed to a VE induces mental and physical stress, and that more photorealistic characters further increase participants' stress levels. Another study found that stress levels as deduced from ECG increase during social interaction with virtual characters [25]. In a virtual reprise of Stanley Milgram's experiments on obedience, HR, HRV, and SCR showed different levels of stress between experimental groups, suggesting that people tend to respond realistically at physiological, subjective, and behavioral levels when interacting with virtual characters [24]. EMG activity was monitored to analyze people's behavior when walking and balancing on real and virtual beams [3]. In the latter condition, the virtual beam was placed at floor level of a CAVE-like environment while the virtual floor was a few centimeters below the real floor. Analysis of the EMG activity showed that the visual illusion of walking on a beam in the VR condition was sufficient to provoke a strong physiological response, and both the real and virtual conditions had similar results.

In the real world, physiological and psychological effects of social interactions have been known for many years. Proxemics, the study of measurable distances between people in social situations, is one example of this [12], and in VR many of these situations have been replicated and studied with considerable success [9, 21]. For example, if participants see a virtual reflection of themselves in a mirror that apparently moves as they do, then they will experience this reflection as somehow a reflection of themselves. There are many possibilities for such multisensory correlations. An excellent example is given by the rubber hand illusion, which has also been demonstrated in VR [26, 27]. Here, correlation between the visual appearance of touch on a virtual arm and actual touch on the participant's unseen real arm gives the person the illusion that the virtual arm being apparently touched is owned by them.



3 System overview

3.1 Equipment

The VE is displayed on a 3 × 2 m powerwall via two calibrated projectors with a native resolution of 1024 × 768 and 2500 ANSI lumens. The head is tracked via a six-degree-of-freedom (6DoF) Intersense IS900 motion tracker attached to a pair of passive stereo glasses that are worn by the participant in order to perceive the scene in 3D. The VE is rendered by a PC running Microsoft Windows XP Professional with an Intel Pentium 4 CPU at 3.20 GHz, 2 GB of RAM, and an NVIDIA GeForce GTX 260 graphics card. The rendering engine used is eXtreme Virtual Reality (XVR) [6], and a 6DoF standard wand with a direction stick and four buttons is used for navigation.

For capturing physiological data, we use a g.Mobilab+ from Guger Technologies OG (http://www.gtec.at), a wireless multipurpose biosignal acquisition device that can be attached to a person's belt. The captured data is directly and continuously transferred to a PC at 256 Hz via a Bluetooth connection, which is sufficient to extract HR, GSR, and respiration in real time. Physiology is captured using additional sensors attached to the device: a g.FLOWsensor is used for capturing respiration, a g.GSRsensor for GSR, and a set of five electrodes for bipolar ECG. The captured data is sent to a PC running Microsoft Windows XP Professional with an Intel Xeon E5320 CPU at 1.86 GHz and 3 GB of RAM. We use a Matlab/Simulink program to process and analyze the data, and a g.HeartRate Simulink module to filter respiration and ECG signals and to estimate basic HR and HRV. See Fig. 1 for images of the device and the sensors.

The environment and the virtual characters, which were acquired from Axyz-Design (http://www.axyz-design.com), were prepared for XVR in Autodesk 3D Studio Max (http://autodesk.com). Section 4 provides more details on this.

3.2 System architecture

There are three main components in the system, each of which runs on a separate device and at a different frame rate. These are the mobile device used to capture the physiological data, running at 256 Hz; the machine used for processing it (also 256 Hz); and finally a third computer for rendering the virtual environment with an average refresh rate of 30 Hz. The g.Mobilab sends data to the processing PC via Bluetooth, while the processing and rendering PCs communicate via a UDP socket. Figure 2 outlines the main architecture and data flow.

Fig. 1 (1) Image of the wireless device used for physiological measurements. (2) GSR sensor attached to index and middle fingers. (3) Respiration sensor worn around nose and mouth. (4) ECG sensor; each of the five electrodes is attached to various locations across the chest and lower arm

Fig. 2 Simple diagram of data flow and networking requirements. The left-hand block (I) is a device for capturing physiological data, which is transferred to a PC (II) via Bluetooth, where the data is processed. The resulting feature vector is then passed to another PC (III) where it is used to animate objects and the entire scene is rendered. Each block also shows the sampling or frame rate at which it operates

The Physio-PC thus serves as a Bluetooth client for the incoming captured data and as a UDP server hosting the data transfer to the Rendering-PC. Data from the capturing device is transferred to the Physio-PC automatically once the device is registered and the Simulink module is running. The fact that the software rendering the VR runs at a much lower frame rate (∼30 fps) than the rate at which it receives physiological data (256 Hz) affects neither the stability of the software nor the visual quality of the imagery displayed.
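The paper does not specify the wire format, so the following minimal Python sketch only illustrates this decoupling; the feature layout, address, port, and function names are our own assumptions, not details of the actual system.

```python
import socket
import struct

# Hypothetical wire format: three floats (HR in bpm, GSR arousal in [0, 1],
# respiration phase in [0, 1]); address and port are assumptions.
FEATURE_FMT = "<3f"
RENDER_PC = ("192.168.0.2", 50007)

def send_features(sock: socket.socket, hr: float, arousal: float, resp: float) -> None:
    """Processing PC side: send one feature vector per analysis update."""
    sock.sendto(struct.pack(FEATURE_FMT, hr, arousal, resp), RENDER_PC)

def poll_features(sock: socket.socket, last: tuple) -> tuple:
    """Rendering PC side, called once per frame (~30 Hz): drain the socket
    and keep only the newest datagram, so the slower consumer never lags
    behind the 256 Hz producer; returns `last` if nothing arrived."""
    sock.setblocking(False)
    newest = last
    while True:
        try:
            data, _ = sock.recvfrom(struct.calcsize(FEATURE_FMT))
            newest = struct.unpack(FEATURE_FMT, data)
        except BlockingIOError:
            return newest
```

Discarding all but the newest datagram is one simple way to realize the rate mismatch tolerance noted above.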



4 Apparatus

4.1 Overview

All virtual characters were prepared with the character animation libraries Cal3D and HALCA [28], and the resulting interactive animations and expressions are created as follows. First, we designed a breathing animation by setting morph animations for a combination of bone rotations in the spine and chest region and a blend-shape mesh around the chest. The resulting breathing pattern of the avatar was visually similar to that performed by a real person. Second, a simple foot-tapping animation was controlled by heart rate so that the avatar's foot tapping onto the floor coincided with one heart beat. In animation, this is simply a matter of rotating the ankle bones, and the speed at which the animation is executed is controlled by the period between two consecutive heart beats. Finally, facial blushing was implemented as a function of GSR. Since GSR is less intuitive than the other two measures, we express it metaphorically: a high degree of redness in the facial skin region indicates high arousal, while a fairly normal level of red indicates a more relaxed state, as given by the GSR measurement. The color of a particular region in the texture can be controlled by first selecting the designated region in the alpha channel of the RGBA texture. We then apply a multiplier to one or more color channels in a GLSL fragment shader to modify the color of the region marked in the alpha mask.

4.2 Virtual environment

The model of the interior and exterior of the room as well as the character animations were prepared using 3D Studio Max. Since interactive lighting was not available, we baked the desired lighting conditions into the environment's textures. The entire scene is enclosed by a sky dome, and some simple outdoor geometry—cars, grass, and pavement—was placed near the main windows. Clearly, the interior is more detailed than the exterior, and a basic view of the populated interior is shown in Fig. 3. The room is characterized by an entrance corridor and a main sitting area, which contains some furniture including sofas, tables, plants, paintings, and lighting. The virtual characters are seated around the table in the middle. The room was designed to be large enough to hold ten life-size virtual characters while also yielding enough space for a real human visitor.

4.3 Physiological measurements

Our goal was to capture three physiological measurements and to process and analyze them in real time. These measures are HR, GSR, and respiration. As stated above, we used a g.Mobilab+ to capture the data.

Fig. 3 Final scene: room populated by ten interactively animated virtual characters

The device is shown in Fig. 1 together with each of the three sensors, and an image of a person fully connected to it is shown in Fig. 4 below. Logically, the Simulink model can be split into three distinct blocks: (i) data reception, (ii) individual filtering and analysis of HR, GSR, and respiration, and (iii) data transmission to XVR. Each of these is covered separately in the remainder of this section. For each measurement, we outline how it is processed so that it can be used to control the desired events (i.e., foot tapping, respiration, facial blushes).

4.3.1 Galvanic skin response

Physiologically, GSR is a method for measuring the electrical resistance of the skin. Changes in the skin's electrical properties are caused by events taking place in the environment and a person's resulting psychological state. It can be measured from the human skin by applying a small but constant voltage to it [7]. A recorded GSR signal is normally extremely noisy and requires extensive filtering prior to any analysis. We thus designed a digital infinite impulse response (IIR) filter of order 6 to remove noisy frequency bands, which yields a response similar to a Butterworth filter; see Fig. 5 for a comparison between the raw and filtered GSR signal. Next, we analyze the GSR data in a windowed sequence of 15 seconds. Recall that the GSR can yield a measure of a person's level of arousal, and in a given signal this is reflected in peaks [20, 31] over some period; see Fig. 6 for an example. Although some algorithms for detecting GSR responses exist, for example using principal component analysis [29], we chose to implement our own technique to find the peaks in the window. Our approach performs an additional layer of spectral filtering to remove high-frequency noise. Then we fit a spline to the signal and find its first and second derivatives. The second derivative tells us which extrema are maxima and which are minima, so we remove all the minima and are left with the peaks only. Of course, in a fairly degraded signal like the GSR it is not always trivial to find all the exact peaks, especially when


this needs to be done in real time, but nonetheless the algorithm performs reasonably well. As a rule of thumb, the shorter the signal, the better the algorithm performed in finding the number and location of peaks (the latter, however, is not required in our system). The detected peaks for a sample signal are shown in Fig. 6. The number of peaks per 15- or 20-second window is then used as a measure of arousal: the higher the number, the greater the level of arousal. In an ordinary situation, a person would have 12 to 15 such peaks per 15-second window. Under stress, this number is likely to go up, and this is often the case when people are exposed to new experiences such as a VE. Our intention is to control a color component of an avatar, which is usually in the range [0.0, 1.0]. In order to scale the number of peaks to a value in this range, we first perform a baseline reading at the beginning of each experiment, and the average number of peaks per 20-second window marks the lower boundary (i.e., 0.0). To manipulate the color of a portion of the texture—in our case, we wish to simulate facial blushing given the current state of arousal—we first create a binary mask corresponding to the region in the texture image. The binary mask can be stored in the alpha channel of the RGBA texture, and its corresponding region in the texture is shown in Fig. 7. In order to change this color interactively, we apply a multiplier to one or more channels in a GLSL fragment shader, which can be modified at runtime.

Fig. 4 Front and detailed side view of a person wearing the three physiological sensors

Fig. 5 30-second raw GSR sequence (above) filtered using an IIR filter (below)

Fig. 6 Filtered GSR signal of 4,500 samples. The signal has been downsampled to 32 Hz and, therefore, represents approximately 140 seconds of data. Blue horizontal lines occur at detected peaks. Note that one obvious peak after around 3,700 samples is not detected by the algorithm
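To make this step concrete, here is a minimal Python/SciPy sketch of the peak counting and scaling described above, standing in for the authors' Matlab/Simulink implementation; the 2 Hz cut-off, the saturation span, and all names are our own assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.interpolate import CubicSpline

FS = 32.0  # Hz; the paper analyzes GSR after downsampling to 32 Hz

def count_gsr_peaks(window: np.ndarray) -> int:
    """Count arousal peaks in one analysis window, loosely following the
    recipe above: an order-6 low-pass IIR filter, a spline fit, and
    classification of the spline's extrema via its second derivative."""
    b, a = butter(6, 2.0 / (FS / 2.0))          # order-6 IIR, 2 Hz cut-off (assumed)
    smooth = filtfilt(b, a, window)
    t = np.arange(len(smooth)) / FS
    spline = CubicSpline(t, smooth)
    d1, d2 = spline.derivative(1), spline.derivative(2)
    extrema = d1.roots(extrapolate=False)       # zeros of the first derivative
    maxima = [r for r in extrema if d2(r) < 0]  # keep maxima, drop minima
    return len(maxima)

def arousal_to_multiplier(peaks: int, baseline: float, span: float = 10.0) -> float:
    """Map a window's peak count to [0.0, 1.0] relative to the baseline
    count recorded at the start of the experiment; `span` (how many peaks
    above baseline saturate the scale) is a hypothetical constant."""
    return float(np.clip((peaks - baseline) / span, 0.0, 1.0))
```

The resulting value in [0.0, 1.0] would then be uploaded once per frame as the multiplier applied to the red channel of the masked facial region in the fragment shader.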



Fig. 7 Texture for head and eyes of a virtual character (left) and the binary mask (right) outlining exactly the region whose color should be interactive (i.e., the facial skin regions). The black region will be affected by color changes

Fig. 9 Stepwise time-compressed animation sequence of the entire foot-tapping movement. (A) shows the upward movement for values smaller than or equal to 0.5 and (B) the downward sequence for values greater than 0.5. Note that (1) and (6) comprise the same bone configuration, while (3) and (4) form a brace around the midpoint of the animation (i.e., 0.5)

Fig. 8 Ten-second interval of ECG data (bottom). The QRS complex occurs around the large peaks shown in the middle graph and the topmost graph shows the calculated heart rate

4.3.2 Heart rate and foot tapping

Heart rate refers to the number of heart beats per minute (bpm); in a normal adult it is commonly between 60 and 100 bpm in a relaxed state, though it can vary significantly based on factors such as gender, fitness, or age. We use standard ECG measurement techniques whereby electrodes measuring the electrical impulses generated by the heart are placed at certain points on the skin. We used proprietary software courtesy of Guger Technologies OG for the heart rate analysis and will not go into further detail here. The algorithm computes HR, HRV, and other data in real time, and computations are based on a one-minute interval. Every heart beat is characterized by ventricular activity, which results in a visible spike in the ECG signal called the QRS complex, and the average number of such spikes per minute is essentially equal to one's heart rate. The time difference between two such QRS peaks can be used to define the duration of a full foot-tapping animation. An ECG sample is shown in Fig. 8.
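Since the QRS detection itself is performed by the proprietary g.HeartRate module, the following sketch only illustrates the relationship stated above between QRS peak times and heart rate; it is not the authors' implementation.

```python
import numpy as np

def heart_rate_bpm(qrs_times_s: np.ndarray) -> float:
    """Mean heart rate from QRS peak timestamps (in seconds): the inverse
    of the mean R-R interval, expressed in beats per minute."""
    rr = np.diff(qrs_times_s)         # R-R intervals between QRS peaks
    return 60.0 / float(np.mean(rr))

# Beats 0.8 s apart correspond to 60 / 0.8 = 75 bpm.
print(heart_rate_bpm(np.array([0.0, 0.8, 1.6, 2.4])))  # -> 75.0
```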

Since it refers to the time between two heart beats, we designed a complete foot-tapping animation describing the foot resting, lifting, and lowering until it rests on the ground again (Fig. 9). The terminal points, i.e., 0.0 at the beginning and 1.0 at the end of each animation cycle, occur at two consecutive heart beats. However, in this particular animation, 0.0 and 1.0 are actually the same in terms of the bone rotations and positions, describing a complete animation cycle between two heart beats. Foot tapping is thus only computed once per heart beat by taking the time difference Δf between the current and previous beat and dividing it by a predefined step count S (i.e., the number of steps the animation is broken down into), yielding a step timer θ; the animation is then incremented S times, once every θ ms. This is summarized in (1) and (2):

θ_f(t) = Δf(t) / S    (1)

where f(t) is the time of the t-th heart beat, so that

Δf(t) = f(t) − f(t−1)    (2)
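A minimal Python sketch of this step-timer logic follows (the actual processing runs in Matlab/Simulink; the class and method names here are hypothetical).

```python
class FootTapAnimator:
    """Drive the [0, 1] foot-tap animation parameter over one inter-beat
    interval, following (1) and (2): the previous beat-to-beat difference
    Δf(t) is divided into S equal steps of length θ."""

    def __init__(self, steps: int = 30):
        self.S = steps           # S in (1): steps per animation cycle
        self.theta_ms = None     # θ: milliseconds per step
        self.phase = 0.0         # current animation parameter in [0, 1]
        self.last_beat_ms = None

    def on_heart_beat(self, beat_time_ms: float) -> None:
        """Called once per detected QRS peak (f(t) in the text)."""
        if self.last_beat_ms is not None:
            delta_ms = beat_time_ms - self.last_beat_ms  # Δf(t), Eq. (2)
            self.theta_ms = delta_ms / self.S            # θ_f(t), Eq. (1)
        self.last_beat_ms = beat_time_ms
        self.phase = 0.0                                 # restart the cycle

    def on_frame(self, frame_dt_ms: float) -> float:
        """Called once per rendered frame (~33 ms at 30 fps); advances the
        parameter by 1/S per elapsed θ, reaching 1.0 at the next beat."""
        if self.theta_ms:
            self.phase = min(1.0, self.phase + frame_dt_ms / (self.theta_ms * self.S))
        return self.phase
```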

The constant S refers to the number of steps we want the animation to take per iteration. Given the QRS difference in ms and a known and fairly constant frame rate, say 30 fps, we can easily work out an increment by which the animation progresses at every frame, thus going from 0.0 to 1.0 in a finite number of steps of equal increment.

4.3.3 Respiration

Measuring respiration, that is, the continuous intake of oxygen and expulsion of CO2 through the lungs, can be done in several ways.


Fig. 10 Ten-second interval of captured respiration signal. The topmost curve has been filtered and the bottom graph shows the approximate locations of the zero-crossings

In our case, the g.FLOWsensor is used to monitor the changes in temperature of the breath from nose and mouth. This allows us to infer a signal of approximately sinusoidal appearance, where minima and maxima refer to the exhaled and inhaled states of the lungs, respectively. The signal is filtered using a standard Butterworth filter, and then the zero-crossings and their first derivative are computed. If the signal around the zero-crossing moves from negative to positive, it indicates an upcoming maximum, and vice versa for a local minimum. The actual locations of the extrema in the signal are then estimated as follows: a local maximum is defined to occur halfway between a positive and a negative zero-crossing, and a local minimum occurs midway between a negative and a positive zero-crossing. Figure 10 shows how we can fairly accurately determine the duration as well as the beginning and end of an inhalation period.

In order to model real-time respiration associated with a person's real breathing, we need to know the duration and the rough start and end points of the current inhalation period. The procedure is similar to the one described for heart rate and foot tapping. In this example, however, we are not only interested in calculating time differences; rather, our intention is to adjust for variations in the extrema that occur throughout the measurement process, because inhalation and exhalation periods vary not only in length but also in depth. Going back to Fig. 10, it can be seen that while the differences between consecutive maxima and minima, respectively, may not appear very large, small variations do amount to differences in breathing patterns. Some breaths involve more intake of oxygen and result in deeper and longer respiration than others, and this is exactly why the respiration signal presented is not a perfect sinusoid but only an approximation of it. Calculating only the time step at which maxima and minima occur therefore does not fully capture the quality of the signal—heart rate in this sense is much simpler, as we are really only interested in the time intervals between two beats. If we played back the entire breathing animation for


every minimum–maximum pair while keeping track only of duration, there would be no major differences between deep and shallow breathing rhythms, for example; so we need a different solution that also takes into account the depth of the breathing pattern. Remember again that an animation can be performed using an arbitrary number of steps that are uniquely mapped to the animation sequence, which ranges from 0.0 to 1.0, where the former refers to the initial bone configuration at time t = 0 and the latter to the final configuration at the end of the animation, t = 1. The value 0.5 would result in the display of the midpoint of the animation, and so on. Now, in addition to the duration of one respiration cycle, we also keep track of a running average of the maxima and minima, which we then use as a basis to clamp the current values to animation endpoints between 0.0 and 1.0 (equal to the running average). The running average describes the long-term behavior of the extrema and essentially smoothes the values. This way, heavy breathing results in endpoints clamped to numbers very close to the extremes of the morph animation, 0.0 and 1.0, and almost the entire animation is played back within the given time frame. Shallow breathing, on the other hand, results in endpoints that are closer to the midpoint 0.5 and further away from 0.0 and 1.0, so that only a portion of the animation is actually played within the given time constraints. The resulting animations appear much more realistic and closer to the real breathing pattern performed by the human participant.
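The following Python sketch illustrates the zero-crossing analysis and the depth-aware clamping described above; the smoothing constant and all names are our own assumptions, not values from the paper.

```python
import numpy as np

def zero_crossings(sig: np.ndarray):
    """Return (index, direction) pairs for each zero-crossing of the
    filtered respiration signal: +1 for rising, -1 for falling."""
    idx = np.where(np.diff(np.sign(sig)) != 0)[0]
    return [(int(i), 1 if sig[i + 1] > sig[i] else -1) for i in idx]

def extrema(sig: np.ndarray):
    """Estimate extrema as described above: a maximum is assumed halfway
    between a rising and the following falling crossing, and vice versa."""
    zc = zero_crossings(sig)
    maxima, minima = [], []
    for (i, d), (j, _) in zip(zc, zc[1:]):
        (maxima if d > 0 else minima).append((i + j) // 2)
    return maxima, minima

class BreathAnimator:
    """Clamp the breathing morph animation to the depth of the current
    breath: a running average of past breath depths defines a 'full'
    breath, and shallower breaths play only a band around the midpoint
    0.5. The smoothing factor alpha is our own assumption."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha
        self.avg_depth = None

    def endpoints(self, cur_min: float, cur_max: float):
        depth = cur_max - cur_min
        if self.avg_depth is None:
            self.avg_depth = depth
        else:  # exponential running average of breath depth
            self.avg_depth += self.alpha * (depth - self.avg_depth)
        frac = min(1.0, depth / max(self.avg_depth, 1e-9))
        return 0.5 - 0.5 * frac, 0.5 + 0.5 * frac  # play animation in [lo, hi]
```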

5 Results

5.1 Pilots

We successfully tested the system with over 40 participants, and it worked reliably with every volunteer. The captured data was passed on to the processing machine, which in turn transmitted it to the rendering machine, where the data were used to animate a virtual character. A future study will focus on the efficiency of this form of control and on self-identification through physiological parameters.

The portable capturing device is fairly unobtrusive, as shown in Fig. 4. The sensors can be attached within a few minutes, and the capturing device is then clipped to the volunteer's belt. Even though it feels unnatural at first, most volunteers commented that they forgot about wearing the device and sensors soon after entering the VE.

5.2 Robustness

The system is very robust to noise and data loss. Since each incoming signal is filtered separately, short-term noise that might be introduced in the data can normally be corrected.


For example, small user movements may affect the sensors and introduce unwanted spurious peaks in any of the three signals discussed previously. While ECG data is usually less prone to noise, respiration and especially GSR signals can be greatly distorted by minor movements. Given previous input, the system can also cope with a complete lack of input in all dimensions over longer periods by simply repeating the last n seconds of active data until the signal is recovered.
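A minimal sketch of such a replay strategy, with an assumed buffer length and interface (the paper does not give implementation details):

```python
from collections import deque

class DropoutBuffer:
    """Replay the most recent n seconds of valid samples when the sensor
    stream goes silent, as described above. Buffer length and the notion
    of 'silent' (no new sample this tick) are our own assumptions."""

    def __init__(self, seconds: float = 5.0, fs: float = 256.0):
        self.buf = deque(maxlen=int(seconds * fs))  # last n seconds of data
        self.replay_pos = 0

    def push(self, sample: float) -> float:
        """Normal path: a fresh sample arrived."""
        self.buf.append(sample)
        self.replay_pos = 0
        return sample

    def fill_gap(self) -> float:
        """Dropout path: no sample arrived this tick, so cycle through the
        buffered history until the signal recovers."""
        if not self.buf:
            return 0.0
        sample = self.buf[self.replay_pos % len(self.buf)]
        self.replay_pos += 1
        return sample
```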

6 Conclusions

We introduced a VE that enables its visitors to interactively induce changes in the environment via their own physiology. The environment in turn reflects the actual but unconscious psychophysiological response to it, creating a closed loop. Our VE contains a number of virtual characters which can respond interactively to the participant's (or other prerecorded) physiological states as determined by heart rate, respiration, and GSR, or an arbitrary combination of these.

We developed and employed several algorithms for real-time physiological processing and analysis in order to use these data to control prebuilt animations that were looped throughout the experience and scaled in various ways, affected only by the participant's physiology. The system is stable and very robust to noise. Depending on the physiological measurement employed, it can compensate for missing input. In a future study, we intend to test our initial hypothesis that people relate more to virtual events if those events relate back to them.

Acknowledgements This work was started under the European Union FET project PRESENCCIA, Contract Number 27731, and completed under the European Project MIMICS, Contract Number 215756.

References

1. Akizuki, H., Uno, A., Arai, K., Morioka, S., Ohyama, S., Nishiike, S., Tamura, K., Takeda, N.: Effects of immersion in virtual reality on postural control. Neurosci. Lett. 379(1), 23–26 (2005)
2. Andreassi, J.L.: Psychophysiology: Human Behavior and Physiological Response. Erlbaum Associates, Hillsdale (2000)
3. Antley, A., Slater, M.: The effect on lower spine muscle activation of walking on a narrow beam in virtual reality. IEEE Trans. Vis. Comput. Graph. (2010). http://doi.ieeecomputersociety.org/10.1109/TVCG.2010.26
4. Brogni, A., Vinayagamoorthy, V., Steed, A., Slater, M.: Responses of participants during an immersive virtual environment experience. Int. J. Virtual Real. 6(2), 1–10 (2007)
5. Cardillo, C., Russo, M., LeDuc, P., Torch, W.: Quantitative EEG changes under continuous wakefulness and with fatigue countermeasures: implications for sustaining aviator performance. In: Schmorrow, D., Reeves, L. (eds.) Foundations of Augmented Cognition. Lecture Notes in Computer Science, pp. 137–146. Springer, Berlin (2007)

6. Carrozzino, M., Tecchia, F., Bacinelli, S., Cappelletti, C., Bergamasco, M.: Lowering the development time of multimodal interactive application: the real-life experience of the XVR project. In: ZhiYing, S.Z., Ping, L.S. (eds.) Proc. of the 2005 ACM SIGCHI Intern. Conf. on Advances in Comput. Entertain. Technol., pp. 270–273. Valencia (2005)
7. Dawson, M.E., Schell, A.E., Filion, D.L.: The electrodermal system. In: Cacioppo, J.T., Tassinary, L.G., Berntson, G.G. (eds.) Handbook of Psychophysiology, pp. 200–223. Cambridge Univ. Press, Cambridge (2007)
8. Garau, M., Slater, M., Pertaub, D.P., Razzaque, S.: The responses of people to virtual humans in an immersive virtual environment. Presence: Teleoper. Virtual Environ. 14(1), 104–116 (2005)
9. Gillies, M., Slater, M.: Non-verbal communication for correlational characters. In: The 8th Annu. Intern. Workshop on Presence, vol. 8, pp. 103–106 (2005)
10. Groenegress, C., Holzner, C., Guger, C., Slater, M.: Effects of BCI use on reported presence in a virtual environment. Presence: Teleoper. Virtual Environ. (2010, in press)
11. Groenegress, C., Thomsen, M.R., Slater, M.: Correlations between vocal input and visual response apparently enhance presence in virtual environments. Cyberpsychol. Behav. 12(4), 429–431 (2009)
12. Hall, E.T.: The Hidden Dimension. Doubleday, Garden City (1966)
13. Heeter, C.: Being there: the subjective experience of presence. Presence: Teleoper. Virtual Environ. 1(2), 262–271 (1992)
14. Huang, R.S., Jung, T.P., Makeig, S.: Event-related brain dynamics in continuous sustained-attention tasks. In: Schmorrow, D.D., Reeves, L.M. (eds.) Augmented Cognition. Lecture Notes in Computer Science, vol. 4565, pp. 65–74. Springer, Berlin (2007). doi:10.1007/978-3-540-73216-7_8
15. Jang, D.P., Kim, I.Y., Nam, S.W., Wiederhold, B.K., Wiederhold, M.D., Kim, S.I.: Analysis of physiological response to two virtual environments: driving and flying simulation. Cyberpsychol. Behav. 5(1), 11–18 (2002)
16. Leeb, R., Kleinrath, C., Guger, C., Scherer, R., Friedman, D., Slater, M., Pfurtscheller, G.: Using a BCI as a navigation tool in virtual environments. In: 2nd Intern. Brain-Comput. Interface Workshop and Train., pp. 49–50 (2004)
17. Leeb, R., Settgast, V., Fellner, D., Pfurtscheller, G.: Self-paced exploration of the Austrian national library through thought. Int. J. Bioelectromagn. 9(4), 237–244 (2007)
18. Meehan, M., Insko, B., Whitton, M.C., Brooks, F.P.: Physiological measures of presence in stressful virtual environments. ACM Trans. Graph. 21(3), 645–652 (2002)
19. Meehan, M., Razzaque, S., Whitton, M.C., Brooks, F.P.: Effect of latency on presence in stressful virtual environments. In: IEEE Virtual Real., pp. 141–148 (2003)
20. Nagai, Y., Critchley, H.D.: Novel therapeutic application of galvanic skin response (GSR) biofeedback to a neurological disorder: mechanisms underlying biofeedback in epilepsy management. In: New Research on Biofeedback, pp. 1–31. Nova Science, Hauppauge (2007)
21. Pan, X., Gillies, M., Slater, M.: Male bodily responses during an interaction with a virtual woman. In: Prendinger, H., Lester, J.C., Ishizuka, M. (eds.) Intell. Virtual Agents, pp. 89–96. Springer, Berlin (2008)
22. Pertaub, D.P., Slater, M., Barker, C.: An experiment on public speaking anxiety in response to three different types of virtual audience. Presence: Teleoper. Virtual Environ. 11(1), 68–78 (2002)
23. Slater, M.: Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Phil. Trans. R. Soc. B 364(1535), 3549–3557 (2009)
24. Slater, M., Antley, A., Davison, A., Swapp, D., Guger, C., Barker, C., Pistrang, N., Sanchez-Vives, M.V.: A virtual reprise of the Stanley Milgram obedience experiments. PLoS ONE. http://www.plosone.org/article/fetchArticle.action?articleURI=info%3Adoi%2F10.1371%2Fjournal.pone.0000039 (2006). Accessed 12 January 2010
25. Slater, M., Guger, C., Edlinger, G., Leeb, R., Pfurtscheller, G., Antley, A., Garau, M., Brogni, A., Friedman, D.: Analysis of physiological responses to a social situation in an immersive virtual environment. Presence: Teleoper. Virtual Environ. 15(5), 553–569 (2006)
26. Slater, M., Pérez Marcos, D., Ehrsson, H., Sanchez-Vives, M.V.: Towards a digital body: the virtual arm illusion. Front. Hum. Neurosci. 2(6), 1–8 (2008)
27. Slater, M., Spanlang, B., Sanchez-Vives, M.V., Blanke, O.: First person experience of body transfer in virtual reality. PLoS ONE (2010, in press)
28. Spanlang, B.: HALCA: Hardware Accelerated Library for Character Animation. Technical report 2009-1, EVENT Lab (2009)
29. Tarvainen, M.P., Koistinen, A.S., Valkonen-Korhonen, M., Partanen, J., Karjalainen, P.A.: Analysis of galvanic skin responses with principal components and clustering techniques. IEEE Trans. Biomed. Eng. 48(10), 1071–1079 (2001)
30. Slater, M., Usoh, M., Steed, A.: Taking steps: the influence of a walking technique on presence in virtual reality. ACM Trans. Comput.-Hum. Interact. 2(3), 201–219 (1995)
31. Venables, P.H., Christie, M.J.: Electrodermal activity. In: Martin, I., Venables, P.H. (eds.) Techniques in Psychophysiology, pp. 2–67. Wiley, New York (1980)

Christoph Groenegress is a Postdoctoral Research Fellow in the EVENT Lab at Universitat de Barcelona. He received a B.Sc. in Artificial Intelligence from Manchester University and an MRes in Computer Vision, Image Processing, Graphics, and Simulation from University College London. His research interests include Whole-Body Interaction, Augmented Reality, Computer Vision, Storytelling, Artificial Intelligence, HCI, and Psychology.

Bernhard Spanlang is a Research Fellow in the EVENT Lab at Universitat de Barcelona, Spain. He received a Diplom-Ingenieur degree in Informatik from the Johannes Kepler University Linz and holds an Engineering Doctorate degree in Vision, Imaging, and Virtual Environments from University College London. His research interests are virtual clothing, virtual character animation, human-avatar interaction, motion tracking, virtual embodiment, and virtual reality in general.

Mel Slater is an ICREA Research Professor at the University of Barcelona, where he leads the EVENT Lab (www.event-lab.org). He became Professor of Virtual Environments at University College London in 1997. He was a UK EPSRC Senior Research Fellow from 1999 to 2004. Twenty-three of his Ph.D. students have obtained their Ph.D.s since 1989. In 2005, he was awarded the Virtual Reality Career Award by IEEE Virtual Reality "In Recognition of Seminal Achievements in Engineering Virtual Reality." He leads several European projects and has been awarded a European Research Council grant.


