Exp Brain Res (2014) 232:637–646 DOI 10.1007/s00221-013-3772-1

RESEARCH ARTICLE

Perceptual scaling of visual and inertial cues: effects of field of view, image size, depth cues, and degree of freedom

B. J. Correia Grácio · J. E. Bos · M. M. van Paassen · M. Mulder

Received: 22 February 2013 / Accepted: 11 November 2013 / Published online: 29 November 2013 © Springer-Verlag Berlin Heidelberg 2013

Abstract  In the field of motion-based simulation, it was found that a visual amplitude equal to the inertial amplitude does not always provide the best perceived match between visual and inertial motion. This result is thought to be caused by the "quality" of the motion cues delivered by the simulator motion and visual systems. This paper studies how different visual characteristics, like field of view (FoV) and size and depth cues, influence the scaling between visual and inertial motion in a simulation environment. Subjects were exposed to simulator visuals with different fields of view and different visual scenes and were asked to vary the visual amplitude until it matched the perceived inertial amplitude. This was done for motion profiles in surge, sway, and yaw. Results showed that the subjective visual amplitude was significantly affected by the FoV, visual scene, and degree-of-freedom. When the FoV and visual scene were closer to what one expects in the real world, the scaling between the visual and inertial cues was closer to one. For yaw motion, the subjective visual amplitudes were approximately the same as the real inertial amplitudes, whereas for sway and especially surge, the subjective visual amplitudes were higher than the inertial amplitudes. This study demonstrated that visual characteristics affect the scaling between visual and inertial motion, which leads to the hypothesis that this scaling may be a good metric to quantify the effect of different visual properties in motion-based simulation.

Keywords  Self-motion perception · Visual-vestibular interaction · Field of view · Scene content

B. J. Correia Grácio (*) · M. M. van Paassen · M. Mulder
Faculty of Aerospace Engineering, Control and Simulation Division, Delft University of Technology, P. O. Box 5058, 2600 GB Delft, The Netherlands
e-mail: [email protected]

J. E. Bos
TNO Behavioural and Societal Sciences, P. O. Box 23, 3769 ZG Soesterberg, The Netherlands

Introduction

Motion simulators provide pilots and drivers with inertial and visual cues similar to the ones experienced in a real vehicle. Generally, the inertial cues used in motion simulators are scaled and filtered versions of the vehicle inertial cues (Nahon and Reid 1990). However, simulators capable of large displacements are able to perform certain maneuvers one-to-one (i.e., without scaling or filtering the original inertial cue). When using one-to-one motion, the amplitude of the visual cue displayed in the virtual world should be equal to the amplitude of the simulator inertial cue. Although one-to-one simulation is possible for specific maneuvers, it is often perceived as incorrect by subjects (Groen et al. 2001, 2007; Pretto et al. 2009a; Feenstra et al. 2009). Studies have found that the amplitude of the inertial cue has to be lowered to prevent subjects from perceiving motion as "too strong" (Groen et al. 2001, 2007; Pretto et al. 2009a; Feenstra et al. 2009). Two driving studies (Pretto et al. 2009a; Feenstra et al. 2009) using a slalom maneuver showed that the preferred inertial condition had a motion gain (i.e., the ratio between the inertial and visual cues) of approximately 0.6. Similar results were found by Groen et al. (2001) when simulating a takeoff maneuver, where a motion gain lower than one was used since pilots perceived one-to-one motion as unrealistic.


In two other experiments (Correia Grácio et al. 2010, 2013), subjects had to choose the inertial amplitude of a sinusoidal motion profile that best matched the movement being displayed via the simulator projectors. In both studies, subjects chose an inertial amplitude lower than the visual amplitude. In all these studies, researchers found that in order to have a realistic simulation, the motion gain had to be lower than one, independently of the type of task (flying or driving). This necessity to lower the motion gain occurred for surge and sway motion but not for yaw motion (Groen et al. 2007; Van der Steen 1998). Wallach (1987) found similar results when studying the compensation mechanism that humans use to perceive an environment as stationary. Yaw head movements are accompanied by a visual shift of the environment in the opposite direction, while forward head movements are accompanied by a visual expansion of the environment. Wallach (1987) found that for yaw head movements, shifts in the environment that exceeded the yaw movement by more than 3 % were detected by subjects as a non-stationary environment, whereas for surge, changes of the objects in the environment could go up to 40 % before subjects detected a non-stationary environment.

An issue that seems to be ignored in the literature on simulator motion fidelity is the effect that the visual cues delivered by the projection system (e.g., field of view, spatial resolution, luminance, contrast) may have on the motion gains. An example is the Sinacori motion fidelity criteria (Sinacori 1977; Schroeder and Grant 2010), which compare the simulator and simulated motion without taking the simulator visuals into account. Self-motion perception, however, has been shown to be affected by visual cues like field of view (FoV) (Brandt et al. 1973; Duh et al. 2001; Chung et al. 2003; Pretto et al. 2009b), the use of stationary objects in the visual scene (Howard and Howard 1994), and the availability of size and depth cues (Andre and Johnson 1992; Duh et al. 2001; Riecke et al. 2007; MacNeilage et al. 2007), among others. In addition, humans were shown to underestimate their speed in virtual environments (Redlick et al. 2001; Jaekl et al. 2002, 2005). Therefore, if self-motion perception is affected by these visual cues, we should investigate whether the motion gains are also affected.

FoV is a visual cue that might have an effect on the motion gains. Duh et al. (2001) showed that a larger FoV increased perceived self-motion. This effect is attributed to peripheral vision: narrowing the FoV reduces the visual flow available to the periphery (Brandt et al. 1973; Berthoz et al. 1975; Howard and Heckmann 1989). However, the effect of the FoV on self-motion perception depends on the degree-of-freedom. Pretto et al. (2009b) showed that large FoVs are not necessary to estimate the amplitude of visual rotations in yaw but that horizontal FoVs of at least 60° are advisable for speed perception in surge.


Most of the studies investigating the effect of FoV on self-motion perception were performed in fixed-base conditions (i.e., without inertial motion), which means that the visual-only results might be affected by visual-vestibular interactions when inertial motion is present (Berthoz et al. 1975; Dichgans and Brandt 1978; Wallach 1987).

Size and depth cues were also shown to influence self-motion perception and, therefore, might also affect the simulator motion gains. MacNeilage et al. (2007) used objects of known size to allow humans to estimate their linear speed and acceleration. Without such cues, optic flow suffers from scale ambiguity (Longuet-Higgins and Prazdny 1980; Royden et al. 1994): there are not enough visual cues to create the distance estimate necessary to recover linear self-velocity and acceleration. Therefore, self-motion information taken from optic flow needs objects of known size to disambiguate between high-speed (e.g., flying over a flat desert) and low-speed movements (e.g., walking over the same desert) (Redlick et al. 2001); this ambiguity is sketched formally at the end of this section. Nevertheless, Berger et al. (2007) showed that more abstract cues, like a horizon line or random dots that create additional optic flow during self-motion, help pilots stabilize a simulated helicopter.

Because both FoV and size and depth cues have been shown to affect self-motion perception, we expect changes in these parameters to be reflected in the simulator motion gains. As suggested (but not studied) by Groen et al. (2007), having visual characteristics closer to the ones available in real life should lead to motion gains with values closer to one. The goal of this study is to test whether different visual cues, like FoV and size and depth cues, influence the scaling between visual and inertial amplitude in a simulation environment. An experiment was conducted where subjects had to match the visual amplitude with the perceived inertial amplitude, similar to the studies of Jaekl et al. (2002, 2005). This differs from our previous studies (Correia Grácio et al. 2010, 2013), where subjects had to match the inertial amplitude with the perceived visual amplitude. Changing the task in this experiment allowed us to, on the one hand, compare the differences between the two approaches and, on the other hand, create experimental conditions independent from the inertial amplitude that might be chosen by subjects. The amplitude matching task in this experiment was performed for two FoVs, three visual scenes, and three degrees-of-freedom of inertial motion. The results help to understand the influence of different visual cues on the motion gain. The simulator visual and inertial systems can then be adjusted to make the perceived motion gain closer to one, which may improve the realism of the simulation. Alternatively, the visual gain may be used to quantify the effect of different visual properties, which together determine the quality, in terms of realism, of the artificial imagery in inducing a veridical percept of self-motion in a simulation environment.


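To make the scale ambiguity mentioned above concrete, consider the standard small-angle relation for translational optic flow (this formalization is our illustration, following Longuet-Higgins and Prazdny 1980, and is not given in this form in the original text):

\dot{\theta} \approx (v/d)\,\sin\theta

where \dot{\theta} is the image angular velocity of a scene point at distance d and eccentricity \theta from the heading direction, and v is the linear self-velocity. Multiplying v and d by the same factor leaves the flow field unchanged, so without an independent estimate of d (e.g., from objects of known size) the same flow is consistent with fast-and-far as well as slow-and-near self-motion.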

Method

Subjects

Nineteen subjects participated in this experiment (11 male and 8 female). The subjects had an average age of 29 years (standard deviation 11 years; range 18–63 years). None of the subjects had any vestibular or motor skill deficit. The experiment was approved by the local ethics committee, following the Declaration of Helsinki on ethical principles for medical research involving human subjects. One subject was not able to finish the experiment due to scheduling constraints.

Apparatus

The experiment was conducted in the Desdemona research simulator (Fig. 1), located at the TNO institute in Soesterberg, the Netherlands. This simulator features a centrifuge-based design with six degrees-of-freedom (DoF) (Roza et al. 2007). In this study, the 8-m horizontal actuator of the simulator was used to generate surge and sway motion. Yaw motion was also used in this experiment and was performed by the simulator yaw gimbal, which has an unlimited angular displacement. The simulator visual system has three DLP beamers projecting on a three-part flat screen. The total FoV is 120° horizontal by 32° vertical. Each beamer displays a resolution of 1400 × 1050 pixels and has a refresh rate of 60 Hz. Besides the visual system, the simulator cabin contains an F-16 cockpit with realistic throttle, side-stick, and rudder pedals. The side-stick was the only control input used in this experiment.

Fig. 1  Desdemona research simulator

Experimental design

The experiment had three different experimental blocks, one for each degree-of-freedom. The DoFs tested were surge, sway, and yaw. The order in which the experimental blocks were performed by subjects was randomized using a Latin square design. In each block, we tested two FoVs, three visual scenes, and two different initial conditions (see below). A Latin square design was also used to randomize the order in which the FoVs, visual scenes, and initial conditions were conducted by each subject.

The two FoVs were generated using one or three flat screens. For one screen, the horizontal FoV was 41°, while for three screens, the horizontal FoV was 120°. The vertical FoV was kept at 32° for both FoV configurations.

The three visual scenes are shown in Fig. 2. Size and depth cues were varied by changing the location and/or number of objects in the virtual world. The first visual scene is located near a rural road outside a city (OC), the second scene is located in a city center (CC), and the third scene is located in a city center with balloons randomly placed in the air (CCB). In the OC scene, subjects could only see a road, a city at a large distance, and a tree. In this scene, movement is mainly perceived through the movement of the ground and sky textures. The CC scene contained objects that can easily be found in a city, such as buildings, a road, garbage bins, and advertisement posters. These objects help to scale the perceived movement. The CCB visual scene had balloons to create extra optic flow cues.

Two different initial conditions were used in this experiment. In one initial condition, the visual amplitude was higher than the inertial amplitude (high initial condition), whereas in the other, the visual amplitude was smaller than the inertial amplitude (low initial condition). In our previous studies (Correia Grácio et al. 2010, 2013), we found that different initial conditions led to motion gains statistically different from each other, which means that the preferred motion gain did not converge to a single value, but rather formed an interval or zone, bounded by the motion gains obtained from the two different initial conditions. Therefore, for methodological reasons, it would be incorrect to have only one initial condition in this experiment since it would bias the results. For this study, the visual input in the high initial condition was a random value between 40 and 60 % higher than the inertial amplitude. The visual input in the low initial condition was a random value between 40 and 60 % lower than the inertial amplitude.

For surge and sway, the inertial motion profile was a sinusoid with an amplitude of 2 m and a frequency of 1 rad/s. For the yaw DoF, the inertial motion profile was a sinusoid with an amplitude of 20° and a frequency of 1 rad/s.

All sinusoids had fade-in and fade-out periods to guarantee that the motion platform started and finished with zero position, velocity, and acceleration. Both the fade-in and fade-out lasted for one sine period (2π s). The total duration of the motion sinusoids was variable and depended on the time it took the subjects to complete the task. In summary, there were 36 experimental conditions (3 DoFs × 2 FoVs × 3 visual scenes × 2 initial conditions), divided into three experimental blocks of 12 conditions each.

Fig. 2  Visual scenes, where OC is the scene outside the city, CC is the city center scene, and CCB is the city center scene but with balloons randomly placed in the air

Procedure

Subjects started the experiment by reading the briefing form and signing an informed consent. Then, they were seated in the simulator and secured by a five-point safety harness. They wore an active noise cancelation headset playing white noise to mask the sound of the simulator's actuators.


This headset was also used for communication with the experiment supervisor.

Before starting with the experimental measurements, each subject had several practice runs in the three different DoFs until they felt acquainted with the task. Their task was to obtain the best match between the visual and inertial amplitude. For that, they had to vary the visual amplitude by means of two directional buttons on the side-stick while perceiving the inertial amplitude. One directional button increased or decreased the visual amplitude by 15 %, while the other increased or decreased it by 5 % (fine-tuning).

The measuring phase started with one of the three experimental blocks: surge, sway, or yaw. The initial visual amplitude depended on whether it was a high or low initial condition. Subjects were neither informed about the different initial conditions, nor that the inertial amplitude was in fact constant between experimental conditions within the same DoF.

Table 1  The MIsery SCore (MISC) rating scale (Bos et al. 2005) used to measure motion sickness

Symptom                                                        MISC
No problems                                                    0
Slight discomfort but no specific symptoms                     1
Dizziness, warm, headache, stomach awareness, sweating, etc.:
  vague                                                        2
  some                                                         3
  medium                                                       4
  severe                                                       5
Nausea:
  some                                                         6
  medium                                                       7
  severe                                                       8
  retching                                                     9
Vomiting                                                       10

Each experimental condition started by pressing the "fire" button of the side-stick, which made the simulator move visually and inertially with the same phase and frequency but with different amplitudes. Then, subjects used the directional buttons to obtain the best match between the visual and inertial amplitude. When satisfied with the amplitude value, subjects pressed the "fire" button to stop the simulation. At this moment, subjects told the experimenter their current MIsery SCore (MISC, see Bos et al. 2005) according to Table 1. This scale was visible in the cabin interior. The experiment was aborted if subjects reported a MISC higher than six. After completing all 12 experimental conditions for the first experimental block, subjects proceeded with the same task for the two remaining experimental blocks. The experiment took approximately 90 min. Subjects were allowed to take a 5–10 min break between experimental blocks if needed.

Data analysis

The subjective visual amplitudes measured during the experiment were divided by the corresponding simulator inertial amplitudes to yield visual gains (Jaekl et al. 2002, 2005). Although differences between visual gains obtained with high and low initial settings were observed in other studies (Correia Grácio et al. 2010, 2013), we defined "the" visual gain per subject and condition to be the average of each pair of matching high and low settings. This was done because the main objective of this research is not to study a visual gain zone but the effect of different visual cues on the visual gains. Therefore, for every subject, we averaged the values between the high and low initial conditions. Because the visual gains are ratios, we applied a logarithmic transformation (log10) to the average visual gains of every subject.

This transformation allows the visual gain data to be weighed on a ratio scale (Keene 1995). We then averaged over subjects for each experimental condition (DoF, FoV, visual scene). A repeated-measures ANOVA was conducted to test whether the DoF, FoV, or scene content had an effect on the transformed visual gains. A Greenhouse-Geisser (G-G) correction was applied whenever sphericity was violated, resulting in corrected p values that were more conservative. All statistical tests were performed with SPSS 19. For interpretation, the results obtained from the statistical analysis and shown in section "Results" were re-transformed to a linear scale.
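As an illustration of the analysis described above, the following sketch (in Python; the data values are assumed for illustration only, and the actual analysis was performed in SPSS 19) computes per-subject visual gains from matched high/low pairs, applies the log10 transformation, and tests the transformed gains against a gain of one, which maps to zero on the log scale:

import numpy as np
from scipy import stats

INERTIAL_AMP = 2.0  # m; fixed sinusoid amplitude for surge and sway

def visual_gain(matched_high, matched_low):
    # Average the matched visual amplitudes from the high and low initial
    # conditions, then divide by the inertial amplitude: one gain per
    # subject and condition (cf. Jaekl et al. 2002, 2005).
    return 0.5 * (matched_high + matched_low) / INERTIAL_AMP

# Assumed matched visual amplitudes (m) for four subjects in one condition.
matched_pairs = [(8.6, 7.9), (9.1, 7.2), (7.8, 8.4), (8.9, 8.1)]
gains = np.array([visual_gain(h, l) for h, l in matched_pairs])

# log10 weighs the ratio data on a ratio scale (Keene 1995); a gain of 1
# maps to 0, so the one-sample t test is run against zero.
log_gains = np.log10(gains)
t_stat, p_val = stats.ttest_1samp(log_gains, popmean=0.0)

# Re-transform the mean to a linear scale for interpretation.
mean_gain = 10.0 ** log_gains.mean()
print(f"mean visual gain = {mean_gain:.2f}, t = {t_stat:.2f}, p = {p_val:.3f}")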

Results

Motion sickness

Besides the subject who was not able to finish the experiment due to scheduling constraints, another four subjects felt motion sick during the experiment and were not able to finish all experimental conditions. These four subjects felt motion sick during the surge conditions. The remaining fourteen subjects were able to conduct the experiment without serious motion sickness. The total mean MISC for these subjects was 0.55.

Visual gains

Figure 3 shows the average visual gains for all experimental conditions. A visual gain of 1 means that the selected visual amplitude is equal to the inertial amplitude. A visual gain higher than 1 means that the visual amplitude is higher than the inertial amplitude, whereas a visual gain lower than 1 means that the visual amplitude is lower than the inertial amplitude. The results from the repeated-measures ANOVA are shown in Table 2.

The repeated-measures ANOVA showed a significant main effect of the DoFs on the transformed visual gains. From Fig. 3, we observe that surge has the highest visual gains, while yaw has the lowest. The average visual gains were 4.16, 2.55, and 1.04, respectively, for surge, sway, and yaw (with the 2-m inertial amplitude, the mean surge gain of 4.16 thus corresponds to a matched visual amplitude of roughly 8.3 m). A post hoc test using a Bonferroni correction showed that the transformed yaw visual gains were significantly lower than the transformed surge (p = 0.001) and sway (p = 0.003) visual gains, and that the transformed sway visual gains were significantly lower (p = 0.029) than the transformed surge visual gains. Additionally, a one-sample t test (Table 3) conducted on the transformed visual gains showed that the surge and sway visual gains were statistically different from 1, but not the yaw visual gain.


Fig. 3  Average visual gains for the three degrees-of-freedom. The error bars indicate the 95 % confidence intervals and are non-symmetric due to the logarithmic transformation

Table 2  Repeated-measures ANOVA results for the transformed visual gains

Independent variable    Correction    F-ratio                   p       Significance
DoF                     G-G           F(1.32,17.21) = 18.68     0.000   **
FoV                                   F(1,13) = 12.77           0.003   **
Visual scene                          F(2,26) = 4.65            0.019   *
DoF × visual scene                    F(4,52) = 3.06            0.024   *
FoV × visual scene      G-G           F(1.36,17.63) = 5.16      0.027   *

** Highly significant (p ≤ 0.01); * significant (0.01 < p ≤ 0.05)

There was also a significant main effect of the FoV on the transformed visual gains. The condition with three screens had significantly lower visual gains than the condition with one screen. The average absolute difference between these conditions was 0.31. Figure 3 shows that this difference is mainly caused by the surge visual gains. From Fig. 3, though less clearly than for the effects described above, we observe that the visual gain increases when the number of size and depth cues decreases. The repeated-measures ANOVA (Table 2) showed a significant main effect of the visual scene on the transformed visual gains. The average visual gains were 2.12, 2.20, and 2.38 for the CCB, CC, and OC scenes, respectively.

Discussion

The main objective of this paper was to study the effect of FoV and size and depth cues on the visual gains (i.e., the ratio between the visual and inertial amplitudes) for surge, sway, and yaw in a motion-based simulator. Results showed a significant main effect of the FoV on the visual gains. The condition using three screens showed visual gains closer to unity than the condition with one screen, meaning that the visual amplitudes chosen in the three-screen condition were closer to the simulator inertial amplitude. It may be assumed that the visual amplitude will be closer to the inertial amplitude when the FoV becomes wider and closer to the human effective FoV of approximately 200° horizontally and 150° vertically (Arthur 1999).



For example, Pretto et al. (2009b) found that the central FoV should be higher than 60° for correct linear speed perception. It was also shown before (Alfano and Michel 1990; Andre and Johnson 1992; Arthur 1999; Chung et al. 2003; Berger et al. 2007; Pretto et al. 2009b) that a narrower FoV degrades human performance in navigation, perception of size and space, and spatial awareness. This is related to the role of peripheral vision in the perception of self-motion. By narrowing the FoV, we limit the amount of visual flow used by peripheral vision and, therefore, reduce perceived self-motion (Brandt et al. 1973; Berthoz et al. 1975; Duh et al. 2001). In our study, the wider FoV made subjects perceive themselves as moving faster, making them lower the visual amplitude to values closer to the inertial amplitude.

In this experiment, subjects were also exposed to three different visual scenes: a city center scene containing random balloons in the air (CCB), the same city center scene but without the balloons (CC), and a visual scene in a grass field where the city is seen at a large distance (OC). In order to estimate linear speed from optic flow, humans need to have objects of known size in their visual field (Monen and Brenner 1994; MacNeilage et al. 2007), a requirement that follows from the scale ambiguity of optic flow (Longuet-Higgins and Prazdny 1980; Royden et al. 1994). The visual scenes in this study were generated such that the scale ambiguity should be smaller for the CCB scene, due to the numerous objects used to create more depth variation and therefore more optic flow speed variation, than for the OC scene, which contained few depth cues. The results showed that this scale ambiguity had an effect on the visual gains, since these significantly increased with the ambiguity, being higher and further from one for the OC scene than for the CCB scene. The visual gains can then be used as a scale ambiguity measure, where we expect a visual scene with enough size and depth cues to induce visual gains close to one.

Fig. 4  Example of the optic flow field for surge, sway, and yaw. For surge and sway, the image velocity varies with the distance to a certain point in the visual scene

The visual gains were affected differently by the degrees-of-freedom, being larger than one for surge and sway but not for yaw.

This overestimation of linear motion might again be related to the size and depth cues discussed before and their effect on the optic flow. Figure 4 shows an example of the optic flow produced by surge, sway, and yaw self-motion. For surge and sway, the optic flow depends on the distance between the observer and the objects shown in the visual scene. The optic flow is then a cue that subjects can use to interpret depth. For yaw, on the other hand, the optic flow is constant and independent of the distance between the observer and the objects shown in the visual scene (a small-angle sketch of this contrast is given after the list below). This means that for yaw motion, the self-motion information taken from a generic visual scene might be the same as the information taken from a realistic visual scene, because yaw optic flow presents no depth perception cues (Cornilleau-Pérès and Gielen 1996). Therefore, it is less probable that yaw self-motion information is interpreted incorrectly from the visual scene. This would explain why the yaw visual gains are approximately one and why Valente Pais et al. (2010b) found no differences between the coherence zones (i.e., zones where inertial and visual amplitudes are perceived as coherent although their values might be different) obtained for a star-field visual scene and the ones obtained for an airport visual scene. For surge and sway, on the other hand, depth perception is crucial for the perception of self-motion from visual information (Cornilleau-Pérès and Gielen 1996). A similar overestimation for surge was already reported when judging travelled distances from optic flow alone (Redlick et al. 2001), where the authors concluded that optic flow can only be used for navigation when "the visual cues are strong."

Depth perception depends on visual cues that can be grouped into three categories (Sweet and Kaiser 2011):

– Primary depth cues, like accommodation, convergence, and stereopsis.
– Pictorial depth cues, like perspective, texture gradient, relative sizes, occlusion, atmospheric perspective, lighting and shading, and blur.
– Motion-induced cues, like motion parallax and optic flow.
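As referenced above, the contrast in Fig. 4 can be sketched with the same small-angle relation used in the Introduction (again our illustration, not the paper's notation). For a linear self-velocity v, a scene point at distance d and eccentricity \theta moves across the image at roughly

\dot{\theta}_{trans} \approx (v/d)\,\sin\theta

whereas for a yaw rotation at rate \omega, every point, regardless of its distance, moves at

\dot{\theta}_{yaw} = \omega

Translational flow can therefore only be mapped onto a metric self-velocity if scene distances are interpreted correctly, while rotational flow specifies the rotation rate directly, which is consistent with the yaw visual gains staying close to one.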


In the real world, these depth perception cues exist naturally, and although some might be impoverished or absent, they are never in conflict (Sweet and Kaiser 2011). In a virtual world, on the other hand, these depth cues might conflict with each other due to the properties of synthetic displays (Sweet and Kaiser 2011). We already saw that image size, an important depth cue (MacNeilage et al. 2007; Sweet and Kaiser 2011), influenced the visual gains. However, other important depth cues were missing or provided incorrectly in this study, like accommodation and convergence, since far away objects were not focused at infinity. Examples of other missing depth cues in this study were stereopsis, which is sufficient to estimate relative depth (Medina Puerta 1989); shadow stereopsis, known to be important for depth perception (Medina Puerta 1989); shadows and shading, which help humans to recognize depth and geometrical relations (Bülthoff and Mallot 1988; Medina Puerta 1989; Sugano et al. 2003; Sweet and Kaiser 2011); and motion parallax, important to process depth when scaling information is available (Ono et al. 1986; Dokka et al. 2011). Therefore, a conflict in depth perception cues could create uncertainties in the self-motion information extracted from the displays, influencing the surge and sway visual gains, as suggested by the large confidence intervals found for these DoFs. Thus, reducing the conflicts in the depth perception cues is hypothesized to create visual gains of approximately one, similar to what we found in yaw, where there is no conflict. This also shows that the visual gain seems to be a reliable variable to measure the underestimation of visual velocity caused by different visual cues, like FoV and size and depth cues. These visual cues, all together, thus determine the quality of the imagery with respect to its capability of inducing a veridical percept of self-motion. It is this quality that hence seems to affect the visual gains studied here. With the visual gain, it is possible to mutually compare different visual aspects and assess their effect on simulation realism. This makes the visual gain an important tool for investments aiming at improving virtual reality, since it can be used as a decision factor when there is a trade-off between different visual aspects.

The results found for the different DoFs seem similar to what was reported in the literature. In a real environment, Wallach (1987) found that visual yaw could only exceed inertial yaw by 3 % before being detected by subjects, which is similar to our mean visual gain for yaw that exceeded the inertial motion by 4 %. However, our visual gains for surge motion were much higher than the 40 % excess of visual motion found by Wallach (1987). This discrepancy might be explained by the differences in depth perception discussed before, since Wallach (1987) used a real environment whereas we used a virtual environment.


Jaekl et al. (2002) also found smaller visual gains for rotational motion (1.26) than for translational motion (1.45), without specifying, however, whether the yaw visual gains were significantly different from 1. Contrary to our results, Jaekl et al. (2002) found that the surge visual gains were significantly lower than the sway visual gains. Differences between our results and theirs might be explained by differences in visual scene (a sphere versus a landscape), visual display (a head-mounted display versus a projection screen), task (active versus passive head movements), and frequency of the motion profile (0.5 versus 0.16 Hz (1 rad/s)). Other studies also found that the motion gain (which is the inverse of the visual gain used in this study) was underestimated for lateral (Groen et al. 2007; Pretto et al. 2009a; Feenstra et al. 2009; Correia Grácio et al. 2010) and longitudinal (Groen et al. 2001; Harris et al. 2000) linear motion but not for angular motion (Van der Steen 1998). By converting our mean visual gains to motion gains (e.g., 1/4.16 ≈ 0.24), we found motion gains of approximately 0.24, 0.39, and 0.96, respectively, for surge, sway, and yaw. The surge results are similar to what Harris et al. (2000) found when subjects matched visual motion to a target distance presented physically, whereas the yaw results are in line with what Van der Steen (1998) and Valente Pais et al. (2010a) found when measuring coherence zones. However, in these studies, the different degrees-of-freedom were studied in separate environments, impeding their comparison. In our study, the same subjects and apparatus were used, allowing for a valid comparison.

The sway motion gains found in this study are smaller than what we found previously (Correia Grácio et al. 2010, 2013). The differences in the motion gains were possibly caused by differences in the measuring task. In our previous studies, subjects tried to find an inertial amplitude that would match their visual perception. In this study, however, subjects tried to obtain a visual amplitude that matched their perception of inertial motion. Therefore, it seems that the perceptual information taken from reference visual cues is different from the perceptual information taken from reference inertial cues, even if both the inertial and visual cues have the same physical amplitude. This hypothesis needs, however, to be validated experimentally since it is based on results taken from different studies. Nevertheless, the study from Harris et al. (2000) seems to show that there is indeed a difference between using visual or inertial motion as a reference. Although using a measuring task different from the one used in our studies, Harris et al. (2000) found differences in distance estimation when, in one condition, subjects had to match physical motion to a visual target, whereas in the other condition, subjects had to match visual motion to a physical target.

In this study, there were also some subjects who could not finish the experiment due to motion sickness. Interestingly, these dropouts all occurred during the surge conditions.


According to Bos and Bles (1998), motion sickness is to be expected during motion conditions that affect the direction and magnitude of the subjective vertical, which happens during surge, sway, and heave, but not during yaw, especially for frequencies around 1 rad/s. However, their model predicts the same amount of motion sickness for surge and sway, while in our experiment the motion sickness issues occurred during surge. Even so, there could be ecological reasons making humans more prone to motion sickness induced by surge motion than by sway motion, since lateral falls, for example, can be prevented more easily than forward and especially backward falls.

Conclusion

In this study, we tested whether the FoV and size and depth cues influenced the perceptual scaling between visual and inertial surge, sway, and yaw motion. Results showed that a wider FoV of the artificial imagery led to visual amplitudes closer to the inertial amplitudes of the motion platform. We also found that the visual scene incorporating more size and depth cues yielded visual amplitudes closer to the inertial amplitude. When comparing degrees-of-freedom, yaw showed visual amplitudes closest to the real inertial amplitude, while sway and especially surge were largely overestimated, with surge also leading to simulator sickness. This difference between degrees-of-freedom was possibly caused by the different optic flows shown in each visual scene and their relation to depth perception. With this study, we can conclude that visual characteristics, like FoV and size and depth cues, affect perceived motion in a simulation environment and therefore affect the scaling between visual and inertial amplitude. This result is very important for motion simulation since it shows that poor visual characteristics may influence motion perception even when a high-fidelity motion platform is used. The visual gains (i.e., the ratio between the visual and inertial amplitudes) proved to be a good metric to take into account when measuring "image quality" in motion simulators.

Acknowledgments  This work was partly supported by the Dutch defence research program V937 "Improved Performance at Motion."

References

Alfano PL, Michel GF (1990) Restricting the field of view: perceptual and performance effects. Percept Mot Skills 70(1):35–45
Andre AD, Johnson WW (1992) Stereo effectiveness evaluation for precision hover tasks in a helmet-mounted display simulator. In: Systems, Man and Cybernetics, Chicago, IL
Arthur KW (1999) Effects of field of view on task performance with head-mounted displays. PhD thesis, University of North Carolina at Chapel Hill
Berger DR, Terzibas C, Beykirch KA, Bülthoff HH (2007) The role of visual cues and whole-body rotations in helicopter hovering control. In: AIAA modeling and simulation technologies conference and exhibit, Hilton Head, SC, AIAA 2007-6798
Berthoz A, Pavard B, Young LR (1975) Perception of linear horizontal self-motion induced by peripheral vision (linearvection): basic characteristics and visual-vestibular interactions. Exp Brain Res 23(5):471–489
Bos JE, Bles W (1998) Modelling motion sickness and subjective vertical mismatch detailed for vertical motions. Brain Res Bull 47(5):537–542
Bos JE, MacKinnon SN, Patterson A (2005) Motion sickness symptoms in a ship motion simulator: effects of inside, outside, and no view. Aviat Space Environ Med 76(12):1111–1118
Brandt T, Dichgans J, Koenig E (1973) Differential effects of central versus peripheral vision on egocentric and exocentric motion perception. Exp Brain Res 16(5):476–491
Bülthoff HH, Mallot HA (1988) Integration of depth modules: stereo and shading. J Opt Soc Am A 5(10):1749–1758
Chung WWY, Sweet BT, Kaiser MK, Lewis E (2003) Visual cueing effects investigation for a hover task. In: AIAA modeling and simulation technologies conference and exhibit, Austin, TX, AIAA 2003-5524
Cornilleau-Pérès V, Gielen CCAM (1996) Interactions between self-motion and depth perception in the processing of optic flow. Trends Neurosci 19(5):196–202
Correia Grácio BJ, Van Paassen MM, Mulder M, Wentink M (2010) Tuning of the lateral specific force gain based on human motion perception in the Desdemona simulator. In: AIAA modeling and simulation technologies conference and exhibit, Toronto, Ontario, Canada, AIAA 2010-8094
Correia Grácio BJ, Valente Pais AR, Van Paassen MM, Mulder M, Kelly LC, Houck JA (2013) Optimal and coherence zone comparison within and between flight simulators. J Aircr 50(2):493–507
Dichgans J, Brandt T (1978) Visual-vestibular interaction: effects on self-motion perception and postural control. In: Handbook of Sensory Physiology, vol 8. Springer, Heidelberg, pp 755–804
Dokka K, MacNeilage PR, DeAngelis GC, Angelaki DE (2011) Estimating distance during self-motion: a role for visual-vestibular interactions. J Vis 11(13):1–16
Duh HBL, Lin JJW, Kenyon RV, Parker DE, Furness TA (2001) Effects of field of view on balance in an immersive environment. In: Proceedings of IEEE Virtual Reality, Yokohama, Japan, pp 235–240
Feenstra P, Wentink M, Correia Grácio BJ, Bles W (2009) Effect of simulator motion space on realism in the Desdemona simulator. In: DSC 2009 Europe, Monaco
Groen EL, Valenti Clari MSV, Hosman RJAW (2001) Evaluation of perceived motion during a simulated takeoff run. J Aircr 38(4):600–606
Groen EL, Smaili MH, Hosman RJAW (2007) Perception model analysis of flight simulator motion for a decrab maneuver. J Aircr 44(2):427–435
Harris LR, Jenkin M, Zikovitz DC (2000) Visual and non-visual cues in the perception of linear self motion. Exp Brain Res 135(1):12–21
Howard IP, Heckmann T (1989) Circular vection as a function of the relative sizes, distances, and positions of two competing visual displays. Perception 18(5):657–665
Howard IP, Howard A (1994) Vection: the contributions of absolute and relative visual motion. Perception 23(7):745–751
Jaekl PM, Allison RS, Harris LR, Jasiobedzka UT, Jenkin HL, Jenkin MR, Zacher JE, Zikovitz DC (2002) Perceptual stability during head movement in virtual reality. In: Proceedings of IEEE Virtual Reality
Jaekl PM, Zikovitz DC, Jenkin MR, Jenkin HL, Zacher JE, Harris LR (2005) Gravity and perceptual stability during translational head movement on earth and in microgravity. Acta Astronaut 56(9–12):1033–1040
Keene ON (1995) The log transformation is special. Stat Med 14(8):811–819
Longuet-Higgins HC, Prazdny K (1980) The interpretation of a moving retinal image. Proc R Soc Lond B Biol Sci 208(1173):385–397
MacNeilage PR, Banks MS, Berger DR, Bülthoff HH (2007) A Bayesian model of the disambiguation of gravitoinertial force by visual cues. Exp Brain Res 179(2):263–290
Medina Puerta A (1989) The power of shadows: shadow stereopsis. J Opt Soc Am A 6(2):309–311
Monen J, Brenner E (1994) Detecting changes in one's own velocity from the optic flow. Perception 23:681–690
Nahon MA, Reid LD (1990) Simulator motion-drive algorithms: a designer's perspective. J Guid Control Dyn 13(2):356–362
Ono ME, Rivest J, Ono H (1986) Depth perception as a function of motion parallax and absolute-distance information. J Exp Psychol 12(3):331–337
Pretto P, Nusseck HG, Teufel HJ, Bülthoff HH (2009a) Effect of lateral motion on driver's performance in the MPI motion simulator. In: DSC 2009 Europe, Monaco
Pretto P, Ogier M, Bülthoff HH, Bresciani JP (2009b) Influence of the size of the field of view on motion perception. Comput Graph 33(2):139–146
Redlick FP, Jenkin M, Harris LR (2001) Humans can use optic flow to estimate distance of travel. Vision Res 41(2):213–219
Riecke BE, Cunningham DW, Bülthoff HH (2007) Spatial updating in virtual reality: the sufficiency of visual information. Psychol Res 71(3):298–313
Royden CS, Crowell JA, Banks MS (1994) Estimating heading during eye movements. Vision Res 34(23):3197–3214
Roza M, Wentink M, Feenstra P (2007) Performance testing of the Desdemona motion system. In: AIAA modeling and simulation technologies conference and exhibit
Schroeder JA, Grant PR (2010) Pilot behavioral observations in motion flight simulation. In: AIAA modeling and simulation technologies conference and exhibit, Toronto, Ontario, Canada, AIAA 2010-8353
Sinacori JB (1977) The determination of some requirements for a helicopter flight research simulation facility. Tech. Rep. CR-152066, NASA
Sugano N, Kato H, Tachibana K (2003) The effects of shadow representation of virtual objects in augmented reality. In: Proceedings of the 2nd IEEE and ACM international symposium on mixed and augmented reality, pp 76–83
Sweet BT, Kaiser MK (2011) Depth perception, cueing, and control. In: AIAA modeling and simulation technologies conference and exhibit, Portland, OR, AIAA 2011-6424
Valente Pais AR, Van Paassen MM, Mulder M, Wentink M (2010a) Perception coherence zones in flight simulation. J Aircr 47(6):2039–2048
Valente Pais AR, Van Paassen MM, Mulder M, Wentink M (2010b) Perception of combined visual and inertial low-frequency yaw motion. In: AIAA modeling and simulation technologies conference and exhibit, Toronto, Ontario, Canada, AIAA 2010-8093
Van der Steen H (1998) An earth-stationary perceived visual scene during roll and yaw motions in a flight simulator. J Vestib Res 8(6):411–425
Wallach H (1987) Perceiving a stable environment when one moves. Annu Rev Psychol 38:1–27
