
ORIGINAL ARTICLE

Barry Richardson · Mark Symmons · Dianne Wuillemin

The contribution of virtual reality to research on sensory feedback in remote control

Received: 29 July 2005 / Accepted: 13 February 2006 / Published online: 3 March 2006
© Springer-Verlag London Limited 2006

Abstract Here we consider research on the kinds of sensory information most effective as feedback during remote control of machines, and the role of virtual reality and telepresence in that research. We argue that full automation is a distant goal and that remote control deserves continued attention and improvement. Visual feedback to controllers has developed in various ways but autostereoscopic displays have yet to be proven. Haptic force feedback, in both real and virtual settings, has been demonstrated to offer much to the remote control environment and has led to a greater understanding of the kinesthetic and cutaneous components of haptics, and their role in multimodal processes, such as sensory capture and integration. We suggest that many displays using primarily visual feedback would benefit from the addition of haptic information but that much is yet to be learned about optimizing such displays.

Keywords Haptic · Feedback · Active · Passive · Kinesthetic · Cutaneous · Virtual reality · Perception

1 Automation or remote control?

Autonomous robots capture the imagination but not much else—yet. There is no doubt that machines are better than humans at certain complex tasks. Some controls of modern fighter aircraft, for example, require reaction times beyond human capability [1]. At the other extreme there are tasks so repetitive that humans become bored and error-prone as a consequence of reduced vigilance [2]. Between these two extremes there are myriad operations that can be performed either by humans using remotely controlled devices or autonomous machines, and though the choice is typically based on cost-effectiveness, it is not always easy to choose because technological advances in automation are often potentially useful in remote control, and vice versa. For example, better sensors on autonomous robots can be valuable in "master–slave" applications, particularly when haptic information is added to the visual information the master receives [3].

It could be argued that neural networks and other ways of making machines intelligent will speed up the automation process. While this may be true, we argue that, in the meantime, autonomous robots remain best suited to relatively predictable and/or repetitive tasks requiring minimal adaptation to changes in the environment. Remote control is a pragmatic alternative to automation because:

• It is invariably cheaper and less prone to technical failure than full automation

• It makes use of the flexibility and adaptability of humans (useful in unexpected situations or when there is a high degree of variation in the task environment)

• Experience gained in remote control can teach us relatively inexpensive lessons about the features that a fully automated system might eventually have.

Automation and remote control are not, of course, mutually exclusive. Remotely operated vehicles (ROVs) used in underwater or extraterrestrial settings, for example, may be required to travel through environments with obstacles so hard to predict that remote control (e.g., from a nearby base) is called for. Manipulation and analysis of samples collected by the same machine could then be done automatically so that in one machine there may be both autonomous and remote control. The manner in which unexpected challenges are successfully dealt with in remote control can be used to guide the design of an autonomous robot to do the same job in the future.

Whatever the decision about the relative merits of full automation or remote control, virtual reality is playing an increasingly important role in the evaluation process, as we seek to illustrate in this paper. In addition, we argue that the contribution of haptics to effective remote control of machines has yet to be fully appreciated.

B. Richardson (✉) · M. Symmons · D. Wuillemin
Bionics and Cognitive Science Centre, Monash University, Gippsland Campus, 3842, Churchill, Australia
E-mail: [email protected]
Tel.: +61-3-51226342
Fax: +61-3-51226590

Virtual Reality (2006) 9:234–242
DOI 10.1007/s10055-006-0020-z

2 Problems with remote control

It is not hard to find examples of skilled operation of machinery, but removed from the machine and given remote controls, an operator's efficiency can be markedly reduced [4–6]. This may be due to less than optimal vision [7], lack of haptic feedback [8], and/or reduced auditory information [4, 9, 10]. It has also been noted that impoverished feedback reduces the sense of "being there" or what is often called "presence". While presence is clearly critical in virtual reality applications (since it is defined in terms of subjective realism), it is not always associated with improved performance. In one study, for example, force feedback was judged by participants to be useful in a virtual assembly task and to improve realism, but it was also associated with an increase in collision errors [11]. However, it is not clear whether subjects in this study were told to avoid collisions, and they may therefore have sought them as sources of haptic confirmation of the objects' location, much as a blind person might use a cane to detect obstacles.

Even if presence is useful, it could be argued that it is not necessary for teleremote operation of machines, particularly when the critical information is largely visual. However, there is evidence that presence is positively correlated with performance in a variety of circumstances, including tasks involving assembly [10], collaboration [12], surgical teleoperation and other medical applications [13, 14], and clearing of buildings in virtual military and emergency settings [15]. Thus, how real a remote control situation is perceived to be may be one way of predicting the effectiveness of that system.

3 Presence and distal attribution

Any relationship between presence and performance depends on the sensory information available, and this in turn affects the process of distal attribution or "externalisation" of the percept. This is an aspect of realism and refers to our tendency to attend not to the receptors actually stimulated (the proximal stimuli) but to the objects "out there"—the distal stimuli—which we perceive as being responsible for our receptor states. It is a remarkable skill that allows us to perceive objects in three-dimensional space provided that certain sensory conditions are met.

Using a device called the Tactile Vision Substitution System (TVSS), Bach-y-Rita and colleagues [16] gave blind people tactile patterns whose regions of activity corresponded to light (or dark) areas of a visual field scanned by a TV camera. Behind the camera lens was an array of photoelectric cells that could activate a vibrator in contact with the blind subject's skin. At first, the tactile patterns were felt as just that, but when given control over the camera, so that movements made by the subjects caused correlated apparent movements across the tactor array, some subjects reported the percept to be "out there" rather than located on the skin.
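The mapping the TVSS performed can be sketched in a few lines: camera pixels are pooled into coarse "receptive fields", one per vibrator, and the subject's own camera movements keep tactile motion correlated with exploration. The sketch below is ours, in Python with assumed array sizes and function names; it is illustrative, not Bach-y-Rita's implementation.

```python
import numpy as np

def frame_to_tactors(frame: np.ndarray, rows: int = 20, cols: int = 20,
                     threshold: float = 0.5) -> np.ndarray:
    """Pool a grayscale camera frame (values in [0, 1]) into a coarse
    grid of tactor drive levels, one per vibrator (sizes assumed)."""
    h, w = frame.shape
    # Average the pixels that fall in each tactor's receptive field.
    grid = frame[: h - h % rows, : w - w % cols]
    grid = grid.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
    # The original array was effectively binary: vibrate or not.
    return (grid > threshold).astype(float)

def explore(camera_frames):
    # The sensorimotor loop behind distal attribution: the subject's own
    # camera movements update the frame, so motion on the skin stays
    # correlated with exploratory movement.
    for frame in camera_frames:
        yield frame_to_tactors(frame)  # send to the vibrator array
```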

It may in fact be misleading to classify touch as a proximal sense, in contrast to vision and hearing as distal senses. Although proximal tactile stimuli can be attended to (in a manner not easily mirrored in vision, or echoed in audition), we often pay more attention to the distal attributes of a tactile stimulus. For example, when riding a bicycle on a rough surface, contact with the saddle, pedals, and handlebars can be our attentional focus, or we can instead perceive the surface of the road itself as the distal stimulus, and in fact the one of greatest interest [17]. We may go even further and argue that the handlebars, though proximal in the sense that they touch the skin, are distal in the sense that they make up an object external to us, to which we pay more attention than the "state" of our skin receptors. Thus, most haptic percepts may be regarded as enjoying distal attribution similar to that characteristic of vision and hearing. Presence and distal attribution may be closely related perceptual phenomena because they both have to do with how we represent reality.

If this is correct, then haptic displays should, as far as possible, give rise to percepts (of objects, for example) that agree with information from the other senses, rather than conflict with them, or bear no relation to them. In this way, the advantages of multimodal processes can be realized. Such processes have been shown to enhance presence and performance in a variety of tasks relevant to remote control [18–20]. Miner et al. [19] propose that distal attribution involves the integration of multimodal sensory experience into "single sense-making occurrences in the external world" and suggest that realism might be achieved by interaction between modalities dealing with the same stimulus.

Does it follow from this that displays consisting of codes that do not allow multimodal cross-referencing, or that restrict stimuli to arrays that discourage interaction, may not be optimal? This question is central to designers of feedback in remote control.

4 Improving visual depth perception

Although depth perception relies on many sources of information, one of the most powerful is the "retinal disparity" that results from having two spatially separated eyes and therefore seeing slightly different aspects of an object, or objects [21]. The greater the disparity, the nearer the object, although we are not conscious of using this information in judging depth. Three-dimensional displays in movies deliver a different image to each eye of an observer. This difference mimics the retinal disparity present in normal binocular vision and can be achieved with special glasses worn by the observer [22]. Each lens of the spectacles has a filter of some kind (e.g., polarisation, colour, or shutter) to produce the disparity, and the brain fuses the images into one to yield a single percept in which the extent of disparity is inversely proportional to the perceived depth (or distance) of the viewed object. Systems using such spectacles and displays have been developed to improve motor performance in dynamic telepresence environments [23].
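The inverse relation between disparity and distance can be made concrete with the standard pinhole-stereo formula Z = fB/d, where f is the focal length, B the separation of the two viewpoints, and d the disparity. A minimal sketch, with assumed values for f and B:

```python
def depth_from_disparity(disparity_px: float, focal_px: float = 1000.0,
                         baseline_m: float = 0.065) -> float:
    """Standard pinhole-stereo relation: distance is inversely
    proportional to image (retinal) disparity.

    disparity_px -- horizontal offset between the two views, in pixels
    focal_px     -- focal length in pixels (assumed value)
    baseline_m   -- separation of the viewpoints; 6.5 cm is a typical
                    human interocular distance (assumed value)
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity corresponds to infinite distance")
    return focal_px * baseline_m / disparity_px

# Halving the disparity doubles the estimated distance:
print(depth_from_disparity(20.0))  # 3.25 m
print(depth_from_disparity(10.0))  # 6.5 m
```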

Recently there have been significant developments in autostereoscopic displays. These flat-screened LCD monitors yield stereoscopic images without the need for glasses [24]. Essentially, the observer's right and left eyes receive disparate views because the source image on the screen is covered with an invisible vertical grid that separates the two views. Alternate columns of pixels project either to the right or to the left eye only. The different views are recorded by up to nine cameras, the outputs of which are combined into "double image" displays by purpose-built software. Recent versions have overcome the restriction of having to view the monitor from directly in front only (the single "sweet" spot) and can now be viewed from several directions [24, 25], although there is still variation in opinions about how compelling the three-dimensionality is and debate about how much it affects performance [26]. As a form of feedback during teleoperation, autostereoscopic displays have the potential to improve depth perception while reducing the negative reactions to head-mounted displays and glasses [27]. Which kind of visual display is preferable, and indeed whether three-dimensional vision is better than two-dimensional plus haptics and/or audition, remains an empirical question. However, a device using three-dimensional technology has been used effectively for remote control in underwater exploration [28].
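The column-interleaving scheme just described is straightforward to express in code. The sketch below is ours and assumes two equal-sized views held as arrays; multi-view panels cycle up to nine views across pixel columns rather than two, and the optical grid, not the software, does the final routing of each column to the matching eye.

```python
import numpy as np

def interleave_views(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Build a two-view autostereoscopic frame: alternate pixel columns
    carry the left- and right-eye images, and the display's vertical
    grid (parallax barrier or lenticular sheet) steers each column to
    the corresponding eye."""
    assert left.shape == right.shape, "views must be the same size"
    combined = left.copy()
    combined[:, 1::2] = right[:, 1::2]  # odd columns -> right eye
    return combined
```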

Another consideration is general visibility. In well-lit areas without "clutter", stereoscopic vision may be an unnecessary luxury, but in a novel or unpredictable environment, or where safe travel between way-points is needed, three-dimensional vision may be crucial because other visual cues may be absent or impoverished. For example, one powerful monocular cue to depth—the "retinal size" of an object—relies on previous experience. A tennis ball, for instance, is of known size, and a novel object of a different size (e.g., a little green man, or a rock on Mars) can be judged in relation to the tennis ball if both are visible at the same time. But there is no "standard" size of a rock to remember, so its retinal size is not much help in unfamiliar settings, such as underground mining tunnels or other worlds. In contrast, cues that use retinal disparity do not require such experience, and their use is probably innate [21].

Despite improvements in stereoscopic feedback, there will be times when vision alone is insufficient, and haptic feedback is then likely to prove extremely useful. For instance, the throttles on patrol boats used by the Norwegian Navy became harder to use when they were digitised and analogue haptic information was lost [4]. The problem was exacerbated by the dark and noisy environment of a patrol boat—the lost haptic information could not readily be made visual or auditory. Similar problems may be encountered in a mining site or tunnel where dust is unavoidable. The utility of haptic feedback in mining-like environments was demonstrated some time ago [29], and the feasibility of teleoperation of mining machinery is now well established [7]. However, only recently has the mining industry shown interest in the use of haptic feedback in addition to visual feedback [27].

5 Haptic feedback

In relation to teleoperated robots, Elhajj et al. [8] state: "Many tasks done easily by humans turn out to be very difficult to accomplish with a teleoperated robot. The main reason for this is the lack of tactile sensing" (p. 295). These authors argue that with visual information alone, teleoperation resembles an open-loop system, but becomes closed-loop with the introduction of haptic feedback (although visual feedback may still be present). The loop is closed, they suggest, when the operator's manipulation of the controls not only causes corresponding movement on the machine, but the machine's sensors also feed back the consequences of the operator's commands. This often leads to improvements in performance and can be attributed to kinesthetic information (typically provided in the form of force feedback), or tactile information delivered to the skin as cutaneous stimulation.
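The open- versus closed-loop distinction can be illustrated with a toy one-dimensional master–slave step in which a sensed contact force either is, or is not, reflected back to the operator's hand. The names and gains below are ours, not Elhajj et al.'s implementation.

```python
def teleop_step(master_pos, slave_pos, wall_pos,
                kp=50.0, wall_stiffness=500.0, force_reflection=True):
    """Return (net force on the slave, force felt by the operator)."""
    # Forward path: the slave is servoed toward the operator's command.
    drive_force = kp * (master_pos - slave_pos)
    # Environment: a contact force appears once the slave penetrates the wall.
    contact_force = wall_stiffness * max(0.0, slave_pos - wall_pos)
    # Feedback path: with force reflection the operator feels the contact,
    # closing the loop; without it (vision only) the loop stays open.
    felt = contact_force if force_reflection else 0.0
    return drive_force - contact_force, felt

slave_net, operator_feels = teleop_step(0.12, 0.11, wall_pos=0.10)
# operator_feels == 5.0 N: the wall "pushes back" through the master.
```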

5.1 Force feedback

A significant component of haptic information can be conveyed with force feedback devices such as the Phantom. The user holds a probe connected to shafts linked through several joints giving a total of six degrees of freedom. Sensors indicate the position of the probe in relation to a virtual object, the surfaces of which are defined by spatial coordinates as part of a software collision algorithm. When the probe (seen on the PC screen as a cursor) "reaches" the virtual surface, motors at linkage points are activated and resistance is felt in a manner analogous to a blind person sensing a surface at the end of a cane. The motors can also "drive" the probe, and therefore the hand holding it, as would happen if the object being explored is elastic and bounces back, or is in motion so as to push the probe. Virtual objects of great variety, complexity, and texture can be computer-generated, explored, and moved in haptic space within limits set only by the software that defines the object's characteristics, and the capacity of the human kinesthetic system to detect and interpret the inputs. Two users in different countries can share the same virtual space and "shake hands" (or sticks) over the Internet by holding and moving their respective probes [30].
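The paper does not specify the Phantom's internal algorithm, but the behaviour described (no force in free space, resistance once the probe passes the surface) is commonly implemented as penalty-based rendering with a spring law, F = k·d along the surface normal, where d is penetration depth. A minimal sketch for a virtual sphere, with an illustrative stiffness:

```python
import numpy as np

def sphere_force(probe_pos, center, radius, stiffness=800.0):
    """Penalty-based haptic rendering for a virtual sphere: zero force
    while the probe is outside; inside, push back along the outward
    surface normal in proportion to penetration depth (F = k * d).
    The stiffness, in N/m, is an illustrative value."""
    offset = np.asarray(probe_pos, float) - np.asarray(center, float)
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)                    # free space: motors idle
    normal = offset / dist                    # outward surface normal
    return stiffness * penetration * normal   # motors resist the hand

# A probe 5 mm inside a 5 cm sphere feels a 4 N outward push.
f = sphere_force([0.045, 0.0, 0.0], [0.0, 0.0, 0.0], 0.05)
```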


This technology can be adapted for use with a remote machine that has sensors for movement and vibrations that are transmitted to controls felt by the operator. The haptic feedback may be the same as that available at the remote machine (the usual choice), or possibly a transformation or enhancement of it. For example, vibratory tactile feedback has been used successfully to navigate through a list of items on a hand-held computer, with movement through the list controlled by tilting the device [31]. Vibration has also been used to convey altitude to aircraft pilots [32], and tactile stimulation of the torso has been used to indicate attitude in zero gravity [33].
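The tilt-controlled list navigation of [31] can be sketched as a tilt angle driving scroll rate, with a vibrotactile pulse each time an item boundary is crossed so the user can count items without looking. Everything below (names, gains, and the pulse_vibrator() stand-in) is assumed for illustration, not Oakley and O'Modhrain's implementation.

```python
def tilt_scroll(position, tilt_deg, dt, gain=0.5, dead_zone=5.0):
    """Advance a continuous list position from the device's tilt angle,
    ignoring small tilts (dead zone), and report whether a new item
    boundary was crossed -- the moment to fire a tactile tick."""
    rate = gain * (abs(tilt_deg) - dead_zone) if abs(tilt_deg) > dead_zone else 0.0
    new_position = position + (rate if tilt_deg > 0 else -rate) * dt
    crossed = int(new_position) != int(position)
    return new_position, crossed

position = 0.0
for tilt in [12.0] * 10:                     # a steady forward tilt
    position, crossed = tilt_scroll(position, tilt, dt=0.1)
    if crossed:
        pass  # pulse_vibrator() would go here -- one tick per list item
```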

In visual displays, a virtual image may be superimposed on one delivered by the camera to create augmented reality—a mixture of real and virtual images—that conveys extra information such as temperature. Similarly, haptic feedback may be made better than that available in a real setting. As an example, hydraulic controls could be augmented with feedback indicating mass or resistance at the workface of a mining machine [27], and information that may be lost due to decoupling of the machine and operator, as described by Bjelland et al. [4], could be restored in either a faithful or an augmented form. Hybrids are possible too, such that a ROV may have controls that allow it to be driven by a person, or it may be remotely controlled for greater safety, duration, and distance. Alternatively, the machine may not be manually "drivable" at all. The controls could be present at the control site only, with commands and feedback signals received and transmitted by specialised systems on the machine.

Whatever the remote system envisaged, feedback is likely to enhance performance. The boom used on spacecraft for maintenance and repair of exterior parts is an example of a remote system likely to benefit from haptic feedback. For instance, a screw head in which a screwdriver has repeatedly slipped, or a rounded hexagonal bolt head, may become extremely difficult or impossible to remove. However, a skilled mechanic can sometimes feel (though not see) when there is risk of such damage and make adjustments requiring fine control over pressure and torque while testing for feedback indicating incipient slip. In general, the contribution of haptic information in such demanding environments has been neglected [34], partly because, as Bjelland et al. put it, "People's haptic interaction with the world is often subconscious, and many of the qualities of touch are easily overlooked, as they are subtle and difficult to verbalise" [4, p. 509].

5.2 Tactile information

Tactile vibratory feedback has been shown to improve performance and a sense of presence in a variety of real and virtual spatial tasks [35, 36], and in a mixture of the two in which a real object is touched to improve the sense of reality in a visually virtual environment [37]. To be useful as information, the tactile experience need not be a faithful copy of what would be felt by the skin in the real situation. For example, the previously mentioned Tactile Situation Awareness System (TSAS) [38] is a vest containing several tactual vibrators, activation of which helps aircraft pilots and astronauts overcome spatial disorientation, or incorrect perception of attitude, altitude and motion. In this example, touch is used as a conduit for information either unavailable or unreliable through other senses, but this application may have limited value because it requires conscious (cognitive) attention to the locus of stimulation [33], and this is sometimes undesirable if cognitive load is already high. Many sensory psychologists argue that haptic displays should be as intuitive as possible and that the more "natural" a perceptual process is, the less cognitive effort is required. According to this view, perceptual processes typically occur at the unconscious level where parallel processing is possible, in contrast to the more pedestrian sequential, conscious, cognitive processes. The TVSS system is an example of such an intuitive vibratory display.

Tactile information may also accompany kinesthetic feedback that is present when using exoskeletons such as the CyberGrasp [39] or Exograsp [40]. The motors in these devices, instead of applying a force to open the hand, tighten cables to prevent the wearer's fingers from closing to the point that the space occupied by a virtual object is invaded. These devices lack the degrees of freedom of the Phantom but do offer some advantages. For example, tactile information (e.g., temperature and pressure) can be delivered at the same time that "contact" with the virtual object is registered via kinesthesis [40, 41]. A single Phantom typically offers a single point of contact with the virtual object; however, two or more Phantoms can be attached to fingers to allow a pinching action similar to that possible with hand-worn exoskeletons [26].

These developments in visual and haptic virtual environments promise rich multimodal stimuli for many applications, including remote control. However, the fact that stimuli are virtual means that congruence or agreement between the inputs provided is not guaranteed, as it tends to be with real stimuli, and discordance may lead to conflict. Though this could result in nothing more serious than mild annoyance, such as that experienced by movie-goers when lip movements and speech sounds are desynchronised, it can have more serious consequences, such as disorientation and nausea: so-called "simulator sickness" [42, 43]. Disagreement between inputs can also result in a compromise percept, such as that observed when a TV shows a person's lips saying "ga", and delivers the sound "ba", but the final percept is "da" [44]. There can be competition within a sense such that, for example, a colour may change from red to blue with intermediate hues if the red and blue dots are presented in a way that engenders apparent movement between them [45]. Another possibility is that one sense may dominate or capture another—a phenomenon exploited by ventriloquists.


6 Sensory capture in real and virtual environments

When we "hear" words coming from the mouth of a ventriloquist's dummy, or when we hear the sound coming from the direction of a screen in front of us when it is in fact coming from the projector behind us, we are experiencing visual capture [46].

In most situations involving sensory conflict, vision is reported to "capture" or "dominate" haptics if information from these two modalities is discrepant [47]. Touch has rarely been reported to capture vision, although the sharpness of a knife is best tested with the skin, counterfeit money can be detected because its texture is "wrong", and the haptic sense can dominate when visual acuity is reduced [46, 48]. In an experimental situation that took full advantage of what virtual reality can offer, Miner et al. [19] reported that haptics "captured" vision and audition separately, and in combination, when a virtual surface looked and sounded soft, but felt hard.

In current theories, intersensory conflicts have been reinterpreted in terms of models of multimodal processing that place less emphasis on competition and more on integration of information [49]. Thus, one sense may appear to dominate another when a more accurate analysis is that more attention is directed to one modality's input than another, according to the salience of the input of that modality in that particular circumstance. This has become known as the "modality appropriateness" interpretation of sensory conflict [50]. Looked at this way, there is not so much a conflict between vision and touch when the sharpness of a knife is being tested, but rather a choice of which sense is best suited for the particular job. More attention is directed to the relevant channel.

Attentional preference may appear intramodally too. We asked what would happen when attention could be directed to either the kinesthetic or cutaneous aspect of a single raised line drawing (RLD).

We used a device called the Tactile Display System (TDS; see [51] for full details) to separate cutaneous and kinaesthetic aspects of the same stimulus presented in two orientations, each 180° rotations of the other (e.g., p and d, 6 and 9). While exploring a 6 in the lower tray of the TDS with one finger, the subject's other finger felt a 9 move under it (see Fig. 1). If the RLD of the 6 was removed from the lower tray, but the moving finger continued to control what the upper (stationary) finger felt, subjects indicated that they were attending to the cutaneous information provided at the stationary fingertip (the 9) in preference to the kinesthetic information simultaneously present at the other finger (a 6) [52]. This result was surprising because previous research suggested that kinesthesis is a more powerful component of haptics than touch and that the primary role of touch during RLD explorations is to signal "you are on or off the line" while the kinesthetic system bears the responsibility of registering the shape depicted [53]. This finding was, however, consistent with the results of an earlier study in which we found cutaneous information to offer as much as kinaesthetic information about raised line drawings [54]. If information delivered via dynamic displays to a stationary fingertip offers as much as movement (kinesthesis), then "passive" touch may offer more as a channel of communication than hitherto supposed.

The TDS is capable of recording the movements of a finger, gripped lightly on the sides by a cradle attached to the machine, while a RLD is being explored. These movements (in the x–y plane) can be played back to guide another person's finger over the same drawing, with speed and direction of movements precisely matched.
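A record-and-replay scheme of this kind reduces to logging timestamped (x, y) samples and stepping an actuator through them on the same clock, so speed and direction are reproduced. The sketch below is ours; sample_position() and move_cradle() are hypothetical stand-ins for the TDS sensors and actuators.

```python
import time

def record_trajectory(sample_position, duration_s, rate_hz=100):
    """Log timestamped (x, y) finger positions during active exploration.
    sample_position() stands in for the TDS position sensors (assumed)."""
    samples, dt = [], 1.0 / rate_hz
    for i in range(int(duration_s * rate_hz)):
        x, y = sample_position()
        samples.append((i * dt, x, y))
    return samples

def replay_trajectory(samples, move_cradle):
    """Drive the finger cradle through the recorded path on the original
    clock, so the guided finger matches the recorded speed and direction.
    move_cradle(x, y) stands in for the TDS actuators (assumed)."""
    start = time.monotonic()
    for t, x, y in samples:
        while time.monotonic() - start < t:
            time.sleep(0.001)          # wait for this sample's timestamp
        move_cradle(x, y)
```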

Using the TDS in this way we have found passive-guided performance to be better than active for outline shapes from an infinitely large set—a task with significant cognitive demands [55]. This finding agreed with earlier research reported in Nature [53]. However, when cognitive load was reduced by requiring only discrimination among nonsense shapes, the passive-guided superiority disappeared [55]. We also found that drawings were identified just as well when passed under a stationary fingertip (cutaneous information only), as they were with only kinesthetic information present (as was the case when the subject was guided along a pathway corresponding to the outline shape, but with nothing under the fingertip) [54]. This finding challenged those of the earlier researchers [53].

Fig. 1 The Tactile Display System (TDS) is shown here being used to allow a subject to explore a raised line drawing (RLD) of a 6 on the lower tray and have these movements cause a RLD of a 9 to be felt under the stationary fingertip above. Subjects were asked to report under varying conditions. If the raised line 6 is removed from the lower tray, leaving kinesthetic information as the major cue for "6", and cutaneous RLD information as the major cue for "9", most subjects reported a 9, indicating attention to cutaneous rather than kinesthetic information


We are currently investigating the extent to which discrepancies in the visually and haptically perceived size of virtual objects might affect capture, and have found that sizes of real spheres were judged with small errors that did not differ significantly for either the visual or haptic modalities. However, virtual haptic spheres were perceived as larger than their virtual visual counterparts [56]. Thus, it cannot be assumed that sizes of objects as depicted by designers of virtual environments will be perceived in the same way across modalities. However, by manipulating and measuring the extent to which visual and haptic images agree, we can determine tolerance for a degraded image in one modality given higher fidelity in another. Virtual environments can, for instance, be used to assess the value of haptic feedback when vision is poor, such as in dusty conditions or low light.

Some of this research would not have been possible without the availability of a virtual environment. For example, it is hard to imagine how a real surface could be made to look and sound soft but feel hard [19], or how spheres could be made to change haptically while always looking the same. Interestingly, research on sensory capture/dominance seemed to languish after the 1980s, but with the coming of virtual environments, interest has returned and fresh perspectives, such as those based on multisensory integration and modality appropriateness, have emerged. Other research has not directly depended on virtual reality but has been stimulated by it. Some of these topics have theoretical as well as practical implications.

7 Theoretical issues

An implicit argument in the above is that multisensory displays are better than unimodal displays [57]. However, to take advantage of the benefits of multimodal signals, it is not enough to simply "throw in" other stimuli or dimensions and hope for the best. The relationship between the state of the machine, or robot, and how that state is conveyed to the remote operator is important, particularly when autonomy is shared between controller and robot, as is increasingly common. For example, in one study, a mobile robot was programmed to avoid close-range obstacles, and relayed visual information to the controller from a front-facing camera. In addition, distance from obstacles could be registered as force feedback. Presence and performance (e.g., fewer collisions) were better with the haptic feedback than without it [58].
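The distance-to-force mapping in such a scheme can be as simple as a ramp that is zero beyond a safety threshold and grows as the robot closes on an obstacle. The exact mapping used in [58] is not given here, so the linear ramp and constants below are assumptions.

```python
def obstacle_force(range_m, threshold_m=1.0, max_force_n=3.0):
    """Map a range-sensor reading to a resistive force on the operator's
    control: zero beyond the threshold, ramping linearly to max_force_n
    as the robot approaches contact (linear ramp and constants assumed,
    not taken from [58])."""
    if range_m >= threshold_m:
        return 0.0
    return max_force_n * (1.0 - range_m / threshold_m)

# 0 N at 1.2 m, 1.5 N at 0.5 m, nearly 3 N just before contact.
for r in (1.2, 0.5, 0.05):
    print(r, obstacle_force(r))
```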

In this kind of interaction, the feedback is intuitive and informs the controller, in different ways, about the same environment. In general, congruent or redundant inputs can be expected to complement each other and may substitute for one another to improve perception when fidelity is poor. However, optimal human–machine interfaces remain elusive. For example, with respect to the feedback described above in the remote control scenario, could modulation of tactile vibration or sound frequency, or locus of tactile pulses, have served just as well as the haptic force feedback that was actually used? If a tactile display is chosen instead of force feedback, should elements of the display be mentally separable and distinct, or should they be spatio-temporally close enough to allow interaction and integration?

There seem to be two fundamentally different approaches to this question, and they cannot always be explained in terms of different purposes or applications.

7.1 "Make it complex" versus "keep it simple"

Some researchers include as much sensory information as possible in order to approximate normal conditions, in which attention is directed to selected features of complex arrays containing, as they typically do, many redundancies and potential distractors. The brain does the selection and is suited to this task, which it performs at an unconscious level. Examples of these displays include the TVSS system [16], the Optacon [59], which delivers a tactile image of print and pictures to a blind person's fingertip, and the Queen's Vocoder, which transduces speech sounds onto an array of vibrotactors [60].

This approach contrasts with attempts to simplify stimulus arrays with the intention of removing redundancy, complexity, and distraction, so as to leave (ideally) only the essential elements that can be dealt with as distinct entities for combination into meaningful sequences. This approach aims to relieve selection and attention mechanisms of their responsibility so that confusion among pattern elements is minimized.

Proponents of the complex approach argue that the simple approach risks overly impoverished displays that deny the haptic system the chance to do what all senses do well—selecting what is important from a complex array. White et al. said, in the context of the TVSS system:

We would never have been able to say that it was possible to determine the identity and layout in 3 dimensions of a group of familiar objects if we had designed our system to deliver 400 maximally discriminable sensations to the skin. The perceptual systems of living organisms are the most remarkable information reduction mechanisms known. They are not seriously embarrassed in situations where an enormous proportion of the input must be filtered out or ignored, but they are invariably handicapped when the input is drastically curtailed or artificially encoded. (pp 57–58)

The argument has a long history and is well illustrated by two very different views about how to use the skin to convey linguistic or symbolic information. Designers of the Optohapt [61] argued for the need to keep all points of tactile information distinct and separable. To achieve this, fewer than ten points throughout the whole body were found, and none could be contralateral sites if their perceptual distinctiveness was to be assured. Stimuli that were too close in time or space suffered masking effects and localization errors. To the opposing camp, the extreme difficulty in assuring this distinctiveness was a hint that an inappropriate method was being used and that one encouraging interaction among the sites of tactile stimulation was preferable [62]. Paradoxically, the latter argument was ultimately supported by the finding that the optimal rate of stimulation for elements of the Optohapt was a rate that promoted temporal integration of tactile stimuli, producing apparent movement, not their separation or individual distinctiveness [62].

In a more contemporary study, vibratory patterns of 200 ms duration were presented at up to three (at a time) of seven widely separated body points, and it was observed that subjects had difficulty in detecting changes if the interval between successive patterns was 800 ms or less. The authors suggested that these limitations may be even more severe in real-life situations where perceptual load is likely to be greater [63]. This prediction seems to be borne out by the finding that localization of tactors on the torso was worse under higher cognitive load [33].

However, it might be argued that the limitations are more in the type of display chosen than in the tactile sense. The Optohapt involves cognitive effort and is not intuitive. It seems designed to thwart any tendency towards integration among elements, and might be likened to asking someone palpating an object to report not what the object is, but differences in the sequence of sensations at each part of the fingers and hands. This might not be impossible to do, but it would hardly be taking advantage of the haptic system's special abilities.

Equally contemporary is fresh evidence for improved tactile acuity on the torso, even for closely spaced sites, provided that the temporal parameters are within a range that elicits apparent movement [64]. The researcher concludes, "even the most advanced displays have not reached the borders of the processing capacity of the torso" (p. 80).

The choice of display may be important for realism or presence, and it would appear that those preserving some kind of isomorphism with the real world and avoiding artificial codes are more likely to promote presence than are displays that require some kind of cognitive translation or conscious attention. Unfortunately, we know less about haptic perception than we do about vision and hearing, though that is changing rapidly, partly thanks to fresh approaches made possible with virtual environments.

8 Conclusions

Virtual reality technology has made it possible to test the effectiveness of visual and tactile displays as modality-specific perceptual processes, and as contributors in multimodal settings. Much can now be achieved with laboratory simulations, or virtual environments, in place of more expensive real-world trials. In particular, reports concerning haptic devices offering force feedback have stimulated long overdue attention to the benefits of such feedback intrinsically (e.g., to improve presence or externalisation) and in conjunction with other senses for multimodal inputs that improve performance in remote control. This focus has, in turn, prompted a new look at multimodal processes (capture and integration) and a closer examination of haptic components (kinesthesis and touch), in active and passive conditions. The increase in knowledge has broad implications and applications in remote control of machinery, teleremote surgery, simulators for training, and devices to alleviate sensory handicap.

Acknowledgements The research conducted at the Bionics and Cognitive Science Centre of Monash University was supported by a grant from the Australian Commonwealth Government's Sustainable Regions Programme.

References

1. Schmitt VR, Morris JW, Jenney GD (1998) Fly-by-wire. Society of Automotive Engineers, Philadelphia

2. Wallace JC, Vodanovich SJ, Restino R (2003) Predicting cognitive failures from boredom proneness and daytime sleepiness scores: an investigation within military and undergraduate samples. Pers Individ Dif 34:635–644

3. Diolaiti N, Melchiorri C (2002) Tele-operation of a mobile robot through haptic feedback. In: HAVE, IEEE international workshop on haptic virtual environments and their applications

4. Bjelland HV, Roed BK, Hoff T (2005) Studies on throttle sticks in high speed crafts—haptics in mechanical, electronic and haptic feedback interfaces. In: Proceedings of world haptics conference. IEEE Computer Society, Los Alamitos, pp 509–510

5. Rastogi A, Milgram P, Drascic D (1996) Telerobotic control with stereoscopic augmented reality. In: Bolas M, Fisher S, Merritt J (eds) Stereoscopic displays and virtual reality systems, vol III. Proc SPIE 2635:115–122

6. Gupta R, Sheridan T, Whitney D (1997) Experiments using multimodal virtual environments in design for assembly analysis. Presence Teleop Virtual Environ 6(3):318–338

7. Hainsworth DW (2001) Teleoperation user interfaces for mining robotics. Auton Robots 11(1):19–28

8. Elhajj I, Xi N, Fung WK, Liu YH, Li WJ, Kaga T, Fukuda T (2001) Haptic information in internet-based teleoperation. IEEE/ASME Trans Mechatron 6(3):295–304

9. Roberts JW, Slattery OT, Swope B, Volker M, Comstock T (2002) Small-scale tactile graphics for virtual reality systems. In: Woods AJ, Merrit JO, Benton SA, Bolas MT (eds) Stereoscopic displays and virtual reality systems, IX. Proc SPIE 4660:422–429

10. Grohn M (2002) Is audio useful in immersive visualization? In: Woods AJ, Merrit JO, Benton SA, Bolas MT (eds) Stereoscopic displays and virtual reality systems, IX. Proc SPIE 4660:411–421

11. Edwards GW, Barfield W, Nussbaum MA (2004) The use of force feedback and auditory cues for performance of an assembly task in an immersive virtual environment. Virtual Real 7:112–119

12. Sallnas EL, Rassmus-Grohn K, Sjostrom C (2001) Supporting presence in collaborative environments by haptic feedback. ACM Trans Comput–Hum Interact 7(4):461–476


13. Kazi A (2001) Operator performance in surgical telemanipulation. Presence 10:495–510

14. Raspolli M, Avizzano CA, Facenza G, Bergamasco M (2005) HERMES: an angioplasty surgery simulator. In: Proceedings of world haptics conference. IEEE Computer Society, Los Alamitos, pp 148–156

15. Jiang L, Girotra R, Cutkosky MR, Ullrich C (2005) Reducing error rates with low-cost haptic feedback in virtual reality-based training applications. In: Proceedings of world haptics conference. IEEE Computer Society, Los Alamitos, pp 420–425

16. White BW, Saunders FA, Scadden L, Bach-y-Rita P, Collins CC (1970) Seeing with the skin. Percept Psychophys 7:23–27

17. Kennedy JM, Richardson BL, Magee LE (1980) The nature of haptics. In: Hagen M (ed) The perception of pictures. Academic, New York

18. Baier H, Buss M, Freyberger F, Hoogen J, Kammermeier P, Schmidt G (1999) Distributed PC-based haptic, visual and acoustic telepresence system experiments in virtual and remote environments. In: Proceedings of IEEE virtual reality conference, p 118

19. Miner N, Gillespie B, Caudell T (1996) Examining the influence of audio and visual stimuli on a haptic interface. In: Proceedings IMAGE conference, pp 23–35

20. Grane C, Bengtsson P (2005) Menu selection with a rotary device founded on haptic and/or graphic information. In: Proceedings of world haptics conference. IEEE Computer Society, Los Alamitos, pp 475–476

21. Palmer S (2002) Vision sciences. Bradford Books, Cambridge

22. Goldstein BE (1999) Sensation and perception, 5th edn. Brooks Cole, Pacific Grove

23. Bradshaw MF, Elliot KM, Watt SJ, Davies IR (2002) Do observers exploit binocular disparity information in motor tasks within dynamic telepresence environments? In: Woods AJ, Merrit JO, Benton SA, Bolas MT (eds) Stereoscopic displays and virtual reality systems, IX. Proc SPIE 4660:331–342

24. Schmit A, Grasnik A (2002) Multiviewpoint autostereoscopic displays from 4D-vision. In: Woods AJ, Merrit JO, Benton SA, Bolas MT (eds) Stereoscopic displays and virtual reality systems, IX. Proc SPIE 4660:212–221

25. Perlin K, Paxia S, Kollin J (2000) An autostereoscopic display. In: Proceedings of SIGGRAPH, ACM conference on computer graphics and interactive techniques, pp 319–326

26. McKnight S, Melder N, Barrow AL, Harwin WS, Wann JP (2005) Perceptual cues for orientation in a two finger haptic grasp task. In: Proceedings of world haptics conference. IEEE Computer Society, Los Alamitos, pp 549–550

27. Richardson BL, Wuillemin DB, Symmons MA (2004) Sensory feedback and remote control of machines in mining and extra-terrestrial environments. J Aust Inst Mining Metall 2:53–56

28. Woods A (2003) Seeing in depth at depth. Newsletter of the Centre for Marine Science & Technology, Sept. Available at: http://www.cmst.curtin.edu.au/brochures/cmstnewsletter4.pdf

29. Kugah DA (1972) Experiments evaluating compliance and force feedback effect on manipulator performance. General Electric Corp, NASA CR-128605, Philadelphia

30. Gunn C, Hutchins M, Adcock M, Hawkins R (2003) Trans-world haptic collaboration. In: Proceedings of the SIGGRAPH conference, sketches and applications, p 1

31. Oakley I, O'Modhrain S (2005) Tilt to scroll: evaluating a motion based vibrotactile mobile interface. In: Proceedings of world haptics conference. IEEE Computer Society, Los Alamitos, pp 40–49

32. Nojima T, Funabiki K (2005) Cockpit display using tactile sensations. In: Proceedings of world haptics conference. IEEE Computer Society, Los Alamitos, pp 501–502

33. Bhargava A, Scott M, Traylor R, Chung R, Mrozek K, Wolter J, Tan HZ (2005) Effect of cognitive load on tactor location identification in zero-g. In: Proceedings of world haptics conference. IEEE Computer Society, Los Alamitos, pp 56–62

34. Rauterberg M (1999) New directions in user-system interaction: augmented reality, ubiquitous and mobile computing. In: Proceedings of IEEE symposium on human interfacing, pp 105–133

35. Lindeman RW, Sibert JL, Mendez-Mendez E, Patil S, Phifer D (2005) Effectiveness of directional vibrotactile cuing on a building-clearing task. In: Proceedings of ACM CHI, pp 271–280

36. Richardson BL, Wuillemin DB, Saunders F (1978) Tactile discrimination of competing sounds. Percept Psychophys 24:546–550

37. Hoffman H, Groen J, Rousseau S, Hollander A, Winn W, Wells M, Furness T (1996) Tactile augmentation: enhancing presence in virtual reality with tactile feedback from real objects. In: Meeting of the American Psychological Society, San Francisco. Available at: http://www.hitl.washington.edu/publications/p-96-1/

38. McGrath BJ, Estrada A, Braithwaite MG, Raj AK, Rupert AH (2004) Tactile situation awareness system flight demonstration final report. USAARL Report 2004-10, March

39. See details of CyberGrasp at: http://www.immersion.com/3d/products/cyber_grasp.php

40. Richardson BL, Wuillemin DB, Symmons MA, Accardi R (2005) The Exograsp delivers tactile and kinaesthetic information about virtual objects. In: IEEE TENCON conference, November, Melbourne

41. Kammermeier P, Kron A, Hoogen J, Schmit G (2004) Display of holistic sensation by combined tactile and kinaesthetic feedback. Presence 13(1):1–15

42. Seay AF, Krum DM, Hodges L, Ribarsky W (2001) Simulator sickness and presence in a high FOV virtual environment. In: Proceedings of the virtual reality 2001 conference, pp 299–300

43. Stott R (2002) Interaction between the senses: vision and the vestibular system. In: Roberts D (ed) Signals and perception. Palgrave Macmillan, New York

44. McGurk H, MacDonald J (1976) Hearing lips and seeing voices. Nature 264:746–748

45. Kolers P, von Grunau M (1976) Shape and color in apparent motion. Vis Res 16:329–335

46. Soto-Faraco S, Spence C, Kingstone A (2004) Cross-modal dynamic capture: congruency effects in the perception of motion across sensory modalities. J Exp Psychol Hum Percept Perform 30(2):330–345

47. Schiffman HR (1996) Sensation and perception, 4th edn. Wiley, New York

48. Heller MA (1983) Haptic dominance in form perception with blurred vision. Perception 12:607–613

49. Soto-Faraco S, Kingstone A (2004) Multisensory integration of dynamic information. In: Calvert G, Spence C, Stein B (eds) Handbook of multisensory processes. MIT Press, Cambridge

50. Caclin A, Soto-Faraco S, Kingstone A, Spence C (2002) Tactile "capture" of audition. Percept Psychophys 18:55–60

51. Richardson BL, Symmons MA, Accardi R (2000) The TDS: a new device for comparing active and passive-guided touch. IEEE Trans Rehabil Eng 8:414–417

52. Symmons MA, Richardson BL, Wuillemin DB, VanDoorn GH (2005) Kinaesthetic and cutaneous contributions to raised-line stimulus interpretation. In: World haptics conference, 18–20 March, Pisa. Video clip at http://www-personal.monash.edu.au/~msymmons/images/6_9_qt.mov

53. Magee LE, Kennedy JM (1980) Exploring pictures tactually. Nature (London) 283:287

54. Richardson BL, Symmons M, Wuillemin DB (2004) The relative importance of cutaneous and kinesthetic cues in raised line drawing exploration. In: Ballesteros S, Heller MA (eds) Touch, blindness, and neuroscience. Universidad Nacional de Educacion a Distancia, Madrid, pp 247–250

55. Symmons M, Richardson BL, Wuillemin DB (2004) Active versus passive touch: superiority depends more on the task than the mode. In: Ballesteros S, Heller MA (eds) Touch, blindness, and neuroscience. Universidad Nacional de Educacion a Distancia, Madrid, pp 179–185

56. Wuillemin DB, VanDoorn GH, Richardson BL, Symmons MA (2005) Haptic and visual size judgements in virtual and real environments. In: Proceedings of world haptics conference. IEEE Computer Society, Los Alamitos, pp 86–89


57. Sarter N (1998) Turning automation into a team player: the development of multisensory and graded feedback for highly automated (flight deck) systems. Willard Airport Aviation Research Lab, http://www.nsfworkshop.engr.ucf.edu/papers/Sarter.asp/

58. Lee S, Sukhatme GS, Kim GJ, Park C (2005) Haptic teleoperation of a mobile robot: a user study. Presence 14(3):345–365

59. Bliss JC, Katcher MH, Rogers CH, Shepard R (1970) Optical-to-tactile image conversion for the blind. IEEE Trans Man–Mach Syst MMS-11:58–64

60. Brooks PL, Frost BJ, Mason JL, Gibson DM (1987) Word and feature identification by profoundly deaf teenagers using the Queen's University Tactile Vocoder. J Speech Hear Res 30:137–141

61. Geldard FA (1966) Cutaneous coding of optical signals: the Optohapt. Percept Psychophys 1:377–381

62. Richardson BL, Frost BJ (1977) Sensory substitution and the design of an artificial ear. J Psychol 96:259–285

63. Gallace A, Tan HZ, Spence C (2005) Tactile change detection. In: Proceedings of world haptics conference. IEEE Computer Society, Los Alamitos, pp 12–16

64. van Erp JBF (2005) Vibrotactile spatial acuity on the torso: effects of location and timing parameters. In: Proceedings of world haptics conference. IEEE Computer Society, Los Alamitos, pp 80–85
