Proceedings of the 1st UK VR-SIG Conference

Held on the 14th March 1994 at Nottingham University

Organised by Robin Hollands, Rob Ingram, Sean Clark and Chris Hand.


Copyright © 1994 The United Kingdom Virtual Reality Special Interest Group and Contributors. All rights reserved.


Contents

Virtual Reality as a Tool for Language Teaching
N.Williams & J.Grove, Sheffield Hallam University
Medical Applications of Virtual Environments
N.J.Avis, University of Hull
The Use of VR for Enhanced Visualisation in Systems Simulation
R.Hollands & N.Mort, University of Sheffield
From Dreams to Reality
C.Hand, De Montfort University
The Teaching of Virtual Reality in the United Kingdom
N.Avis, R.Macredie & D.Wills, University of Hull
Virtual World Design: Interactive Creation and Manipulation of Virtual Objects and Worlds in 3D Space
R.Abbott, K.Chisholm, P.Johnson & R.Pacy, Napier University
A Review of VR Resources on the Internet
S.Clark, Loughborough University
Presence in Immersive Virtual Environments
A.Steed, M.Slater & M.Usoh, Queen Mary and Westfield College
Virtual Reality and the Missing Senses
H.Durrant, BT Labs
Guided Tours Using REND386
W.Mitchell & D.Uhrmasher, Manchester Metropolitan University

Two of the papers presented at the conference are not in these proceedings. These are:

A Multi-User Virtual Reality System
L.Fahlen, Swedish Institute of Computer Science
Managing Mutual Awareness in Collaborative Virtual Environments
S.Benford, J.Bowers & L.Fahlen, Nottingham University

Virtual Reality as a Tool for Language Teaching

Dr Noel Williams and Jonathan Grove

CIRG (Communication & Information Research Group), Communication Studies, Mundella House, Collegiate Campus, Sheffield Hallam University, S10 2BP.

VR as an Educational Tool

New technology often forms the basis for major innovations in education. For example, the early 1980s saw the development of the affordable microcomputer, and today the micro is a widely used and often indispensable aid to teaching at all levels. And just as the micro has had a major effect on education, many academics are predicting that a new technological advance, Virtual Reality (VR), will have an equal or even bigger impact.

Initially VR's usefulness as an educational tool may seem limited. However, according to a number of researchers (see Rheingold, 1991, p379; Johnstone, 1990, p2; McCluskey, 1991, p5; and Bylinsky, 1991, p106 among others) this technology has clear educational potential. Meredith Bricken (p178) suggests that VR has five characteristics which particularly suit it to educational applications:

- VR is experiential. Many educational theorists including Piaget, Bruner and Papert have stressed the importance of experiential learning. VR may add a new experiential element to educational computing.

- VR allows natural interaction with information. It allows learners to manipulate and examine computer generated objects physically, as if they were real objects in the real world.

- VR allows the manipulation of context. The flexibility of VR allows the computer user to control many aspects of the learning environment, from the size of the virtual space to its amount of gravity, thus allowing the creation of worlds that are not available in reality.

- VR can be tailored to the demands of the individual. Teachers could present material that is compatible with the requirements of the task, the student's learning style and, if required, with their physical limitations.

- VR can be a shared experience. Several people can simultaneously enter a virtual environment, thus making learning a shared experience, the educational advantages of which are noted by Belkin, Vygotsky (in Bricken, p179) and others.

It seems then that VR has a number of exciting implications for education, implications that are being explored at various institutions world-wide.

For example, in the U.K. the Nottingham University-based VIRART group has developed a number of VR environments for use with special needs children, including a virtual kitchen which helps them develop the practical skills necessary to function in the outside world. This approach, i.e. using the technology to teach the computer user practical skills for use in 'concrete' reality, seems, as far as we can tell, to be the most popular area of exploration for those interested in the educational applications of the technology. However, as suggested earlier, it is well documented that VR also has the potential to be a powerful tool for the teaching of more abstract concepts. Bricken illustrates the educational possibilities of VR with a quote from Papert:

"If you can be a gear, you can understand how it turns by projecting yourself into its place and turning with it....As well as connecting with the formal knowledge of mathematics, it also connects with the body knowledge, the sensory-motor schemata of the child. It is this double relationship - both abstract and sensory - that gives a transitional object the power to carry mathematics into the mind" (in M. Bricken, p178)

The notion that experiential learning can facilitate the understanding of mathematics has been explored in direct relation to VR by William Bricken (1993), who has developed virtual environments for use in teaching algebra. However we believe that VR can also be used to teach another fundamental abstract skill, language. Language skills have become the focus of some attention in the light of recent reports which suggest that increasing numbers of children are failing to reach an adequate standard in fundamental academic skills (i.e. the '3R's'). This failure, we believe, could be addressed not by a return to the 'chalk & talk' pedagogic style but by the development of more imaginative and interactive approaches to teaching, approaches which could be realised through the use of Virtual Reality.

Superficially, VR technology seems unsuited to the teaching of language skills; after all, the only visual representation of language is the written word, and there seems little point in filling a virtual environment with words when the 'real' world is full of books. However, in the virtual world words and other linguistic units can be approached from a new perspective. Through VR the relationship between words, phrases and their meanings can become more amorphous, as the limitations of pen and paper are replaced by the limitless possibilities of the virtual environment.

It is the aim of our research team at Sheffield Hallam University to develop a PC based, non-immersive virtual environment for language teaching which explores the exciting possibilities offered by the 'Virtual Word Space' concept.

The Environment: The Virtual 'Word Space'

In the virtual word space words can be treated like objects, objects which can be manipulated and shaped in a number of ways, as in the following scenario:

Imagine an empty room containing a collection of words, one of which is the word 'science'. You pick up the word and hear a voice pronounce it for you. You then grab it at both ends and stretch the word until it becomes 'scientific', and keep stretching it until it becomes 'scientifically'. You then turn the word around to see if it is a palindrome (it isn't), and twist it so that it changes into a synonym (if it has one) or a related word such as 'laboratory'.
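As a minimal sketch of how such a word object might be represented, the fragment below (in Python, with a hand-built derivation table; all names are our own, purely for illustration) models 'stretching' as stepping along a chain of derived forms:

```python
# A word object whose 'stretch' gesture steps along a chain of derived
# forms. The derivation table is hand-built and purely illustrative.

DERIVATIONS = {
    "science": ["scientific", "scientifically"],
}

class WordObject:
    def __init__(self, form):
        self.form = form
        self.chain = [form] + DERIVATIONS.get(form, [])
        self.index = 0

    def stretch(self):
        """Move one step along the derivation chain, if possible."""
        if self.index < len(self.chain) - 1:
            self.index += 1
            self.form = self.chain[self.index]
        return self.form

    def is_palindrome(self):
        """The 'turn the word around' test from the scenario."""
        return self.form == self.form[::-1]

word = WordObject("science")
print(word.stretch())        # scientific
print(word.stretch())        # scientifically
print(word.is_palindrome())  # False
```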

In this brief example words are lifted from the page and given life; learning language becomes not simply a case of grasping a set of transient symbols, but shifts from the abstract to the concrete. Furthermore, the virtual representation of words offers other possibilities; for example, in the virtual world words can 'look' how they 'mean'. To demonstrate, let us continue the above scenario:

You hang your new word, 'laboratory', on the wall and take a different word, let's say 'small'. This word is printed in a type style which relates to its meaning, like this:

small {text shown in small font}

You then grab the word at the top and bottom and stretch it upwards until it changes to:

BIG {text shown in large font}

Thus the key to the meaning of the word becomes not only the combination of letters used, but the visual attributes 'owned' by the word itself. Such potential can be developed further by giving words 'behaviours'. For anyone who has seen the US television programme 'Sesame Street' this approach to language teaching should be familiar. For example, the word 'worm' could be made to behave in a 'worm like' way, reinforcing the relationship between the word sign and its meaning.

One stage further from this is the rebus potential of the Virtual Word Space. A rebus is a visual representation of a word or its meaning, as a form of visual pun, e.g:

STAND

______ = misunderstand

MIS

Like the attributes described earlier, the VR rebus can also be dynamic, as words transform themselves to correspond to their meaning, in a process of three dimensional semantic modelling. Naturally we are still faced with the problem that some words are easier to represent in this way than others (e.g. 'grow' and 'spiral' are easier than 'truth' or 'idea'), largely as a function of the 'concreteness' of their meaning. Nevertheless it should be possible to create a language environment rich in words that have 'attitude'.

Furthermore, the manipulation of language in the ways described earlier is not restricted to words; it can also be applied to smaller and larger linguistic units. Let us again continue the above scenario:

You then put the word BIG next to the word 'laboratory' to create the phrase 'big laboratory'. As 'big' approaches 'laboratory' the latter gets bigger until you have BIG LABORATORY. Next you reach into your virtual 'toolbox' and take out a pair of virtual scissors which allow you to snip a word into smaller portions. You cut the word 'laboratory' into two sections, take the word 'oratory' and twist it into the related word 'chapel'. You then attempt to attach your new word 'chapel' to the word segment 'lab'. As you bring the word segments together, invisible forces, like those between the like poles of two magnets, attempt to push the segments apart.

We have already mentioned how in VR words can be given behaviours which relate to their semantic properties. However, as the above example highlights, it is also possible to envisage the word objects as having another form of behaviour which helps the user establish the compatibility of linguistic elements. In the above example the 'emotive' behaviour of 'chapel' and 'lab' highlights the incompatibility of these word segments, and it is also conceivable that compatible elements could be given the reverse effect so that they attract each other in some way (applying a magnetic metaphor).

In fact words could be made to interact in a number of ways. Take the spiralling 'spiral', place it next to the stepped 'staircase', and watch its transformation into a spiral staircase. Linguistically this single association (whereby a noun acquires the properties of its modifying adjective) can lead to a wealth of investigative possibilities for the learner, who might then test constructions like:

big spiral staircase

little spiral staircase

big little

big little staircase

spiral big staircase

etc.

to discover the effect.

Because this approach is effectively object-oriented semantics, in which words (as objects) are designed with their own internal coherence and individual behaviours that may affect other words, an environment containing any non-trivial number of words would yield more experimental richness for language play than any amount of structured language education.
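A minimal sketch of this object-oriented view (our own toy model, with invented category and property names, not a proposed design): each word is an object, and an adjective's behaviour is to pass its visual properties to the noun it modifies, as with 'big laboratory' and 'spiral staircase'.

```python
# Words as objects with behaviours that can affect neighbouring words.
# Categories and properties here are illustrative.

class Word:
    def __init__(self, form, category, properties=None):
        self.form = form
        self.category = category            # 'noun' or 'adjective'
        self.properties = dict(properties or {})

    def modify(self, other):
        """An adjective transfers its visual properties to a noun."""
        if self.category == "adjective" and other.category == "noun":
            other.properties.update(self.properties)
            return True
        return False                        # incompatible combination

big = Word("big", "adjective", {"scale": 3.0})
spiral = Word("spiral", "adjective", {"shape": "helix"})
staircase = Word("staircase", "noun", {"shape": "steps"})

spiral.modify(staircase)
big.modify(staircase)
print(staircase.properties)    # {'shape': 'helix', 'scale': 3.0}
print(staircase.modify(big))   # False: a noun cannot modify an adjective
```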

Furthermore, the system could also be given a degree of 'intelligence'. For example, if the learner exhibits regular problems with manipulating a particular linguistic unit or combination of units, the computer could be programmed, using a relatively simple algorithm, to supply linguistic material which focuses on that particular problem area. If we then link this in with the magnetic metaphor mentioned above, we can conceive of a situation in which the degree of repulsion or attraction exhibited by particular linguistic elements is related to the aptitude of the learner, thus supplying them with an indicator of the compatibility of linguistic elements. This notion is one we have not yet explored in detail. However, given that, within the total semantic space of the language, all words collocate with others with different frequencies (i.e. co-occurrence of words is not randomly distributed across usage), the notion of 'linguistic' distance might be given 'physical' (VR) form. For example, you might put a random collection of words into a sack, shake them up and tip them out, then watch them sort themselves into 'families' or 'relationships'. 'Rancid' will want to be very close to 'butter'. 'House' might settle under 'domicile'. 'Easy' might move as far as possible from 'difficult' (these represent high frequency collocation, subordination and antonymy respectively). A three dimensional space would be needed to carry out such modelling, and the VR user could then wander about within this space, exploring the network of links from within.
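A toy sketch of this 'sack of words' idea follows (one spatial dimension for brevity; the collocational strengths are invented, where a real system would estimate them from a corpus): collocates are pulled toward a resting distance from one another, while the antonym pair is simply pushed apart.

```python
import random

# Illustrative collocational strengths: positive pairs attract,
# the antonym pair repels.
COLLOCATION = {("rancid", "butter"): 0.9,
               ("house", "domicile"): 0.7,
               ("easy", "difficult"): -0.8}

def force(a, b):
    return COLLOCATION.get((a, b)) or COLLOCATION.get((b, a)) or 0.0

def settle(words, steps=300, k=0.02, rest=1.0):
    """Let words drift on a line until collocates sit close together."""
    pos = {w: random.uniform(-5.0, 5.0) for w in words}
    for _ in range(steps):
        for a in words:
            for b in words:
                if a == b:
                    continue
                d = pos[b] - pos[a]
                sign = 1.0 if d >= 0 else -1.0
                f = force(a, b)
                if f > 0:    # spring toward a unit resting distance
                    pos[a] += k * f * (d - sign * rest)
                elif f < 0:  # push directly away from the antonym
                    pos[a] += k * f * sign
    return pos

pos = settle(["rancid", "butter", "house", "domicile", "easy", "difficult"])
for word in sorted(pos, key=pos.get):
    print(f"{pos[word]:7.2f}  {word}")
```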

It is also conceivable that users might 'define' their own words by assigning properties to word forms they have built. Assigning meanings would not be problematic, as both 'meanings' and 'properties' are themselves words, so the user could construct elaborate meanings by the synthesis of semantic primitives. So, given the primitives 'big', 'female' and 'machine' we might coin a new word, for example 'femach', defined as 'meaning' these three primitives. An example of a phrase which uses this new construction would be:
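At the data level this coinage is straightforward, as the sketch below suggests (primitives and names are the paper's example, the representation our own): the new word's 'meaning' is simply a bundle of existing words.

```python
# Coining a word from semantic primitives, as in the 'femach' example.
# A meaning is itself a set of existing words.

LEXICON = {"big": {"big"}, "female": {"female"}, "machine": {"machine"}}

def coin(new_form, *primitives):
    """Define a new word as the union of its primitives' meanings."""
    LEXICON[new_form] = set().union(*(LEXICON[p] for p in primitives))
    return LEXICON[new_form]

print(coin("femach", "big", "female", "machine"))
# e.g. {'big', 'female', 'machine'} (set ordering varies)
```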

My IBM behaves like a femach!

A second user could now 'enter' the Virtual Word Space and discover the new word. She wants to know what it means, how it works within language. She can do this experimentally by associating it with existing words and seeing how it behaves, not unlike the way in which we learn new terms - by attempting to fit them into known linguistic contexts and seeing what the results are. She may extend the term (e.g. turn it into an adjective by adding a syllable: 'femachish') or use it as a starting point for her own invention ('machomach', the opposite of 'femach'), introducing another creative element into the language learning process.

With some development it may also be possible to allow more than one user to work simultaneously in the same virtual space. Thus language learning would become a shared experience as the users jointly 'work' on language in a way that is impossible in the conventional classroom environment. The ideas discussed here are just a few of the many possible ways of treating language in the virtual world. One of the principal benefits of this approach is that language learning becomes 'cross-modal', as Meredith Bricken remarks:

"the VR learning environment is a context that includes the multiple nature of human intelligence: verbal/linguistic, logical/mathematical, auditory, spatial, kinaesthetic, interpersonal and intrapersonal." (p179).

For the teacher there are also potential secondary payoffs to an environment of this kind. For example, it would be a relatively simple matter for the computer to report on the child's motor performance as well as linguistic performance, helping the teacher isolate those pupils with motor as well as linguistic problems.

And this is the VR environment we want to develop here at SHU, a learning space which allows one or more computer users to manipulate words and other linguistic units as if they were amorphous objects.

The Practicalities of the Project

Of course the above scenario is imaginary; VR is still underdeveloped, and it is questionable whether even the most advanced (and most expensive) technology is capable of producing all the features we describe here. However, we believe that it is possible, with today's powerful Personal Computers, to develop an educationally useful virtual environment of this kind, a notion confirmed by a recent conversation with a member of staff at one of the UK's leading VR companies, who maintains that the development of a virtual word space is within the capabilities offered by their virtual world creation software.

The Challenges

Even given appropriate technology, the development of this virtual environment is likely to present a number of challenges to the team at SHU. Issues are likely to include:

Interface Design - Clearly, given the characteristics of the Virtual Word Space, a glove input device would be ideal. However, such devices are generally tailored to fit adult hands, are notoriously unreliable and are prohibitively expensive. Consequently the research group at SHU will have to develop an effective interface solution using 2D peripherals such as a joystick and/or mouse, and comparatively limited 3D devices such as the Spaceball.

How, then, using this technology, can we develop an interface which allows the language learner to manipulate on-screen objects intuitively, and which can also be used by individuals with limited manual dexterity (e.g. young children and people with special needs)?

Furthermore, the effectiveness of the interface will critically depend on the appropriateness of the metaphors employed for manipulating language. We have talked glibly above of 'associating' and 'twisting' words, but these metaphors may well be counter-intuitive. For example, if the user truly feels she is twisting the language in order to understand it, she may feel that her learning is being distorted.

Linguistic/Programming issues - We envisage that developing ways of organising word elements and ascribing 'behavioural' attributes to linguistic units in a coherent and logical pattern will be a major challenge for our team. For example, the semantic model which underlies many of the ideas above, though a traditional one, is restrictive, with a number of flaws (e.g. there are no true semantic primitives). The linguistics of the system must be structural, to facilitate design, but must not, in consequence, violate the users' intuitions about their own language.

Environment Design - The word space environment should be both visually interesting and easy to work with. At the present stage in the project's development we have very few ideas about how the environment should actually 'look'. However, whatever the word space's final design, it should be flexible enough to allow the educationalist to configure the system to the demands of the individual learner or groups of learners if necessary.

Evaluation - There is also the problem of evaluating the environment as a learning tool. Clearly the fundamental question is: "does the environment offer a real advance on established paper-based language teaching methods?" How do we go about answering this question?

Conclusion

The idea behind the Virtual Word Space is that, by removing through VR technology some of the limitations on language, the user (no longer really a reader) will interact with text in a novel way and thereby acquire a new experience of language. The motivation behind the project is essentially an educative one (aiming to motivate learners who find language an uninteresting topic, and to provide insights into the workings of words with which some learners have difficulty). However, it could equally well be a 'creative' one, as the ideas which we are seeking to develop could equally well create a 'literary' experience (in a very loose sense, of course) which cannot be had in any other way.

Virtual Word Space © 1994 Sheffield Hallam University

Bibliography and References

Bricken, M. (1991) "Virtual Reality Learning Environments: Potential and Challenges" in Computer Graphics Vol:25 Issue:3.

Bricken, W. (1993) "Spatial Representation of Elementary Algebra" in Proceedings, 1992 IEEE Workshop on Visual Languages. IEEE Computer Society Press.

Bylinsky, G. (1991) "The Marvels of Virtual Reality" in Fortune Magazine Vol:123 Issue:11.

Close, A.M. (1991) "Engineering Visualisation: A New Approach to Teaching Concepts" in IEE Colloquium on Real World Visualisation, Digest No:197.

Jacobson, L. (1991) "Virtual Reality: A Status Report" in AI Expert, August 1991.

Johnstone, B. (1990) "Through the Looking Glass" in The Far Eastern Economic Review Vol:150 Issue:44.

McCluskey, J. (1991) "Virtual Reality: The "Fifth?" Dimension" in Multimedia Review Vol:2 Issue:1.

Nugent, W.R. (1991) "Virtual Reality: Advanced Imaging Special Effects Let You Roam in Cyberspace" in Journal of the American Society for Information Science Vol:42, September 1991.

Rheingold, H. (1991) Virtual Reality, Secker & Warburg.

Romkey, J. (1991) "Wither Cyberspace?" in Journal of the American Society for Information Science Vol:42, September 1991.

Setzer, V. (1989) Computers in Education, Floris Books.

Stampe, D., Roehl, B. & Eagan, J. (1993) Virtual Reality Creations: Explore, Manipulate & Create Virtual Worlds on Your PC, Waite Group Press.

Stuart, R. & Thomas, J.C. (1991) "The Implications of Education in Cyberspace" in Multimedia Review Vol:2 Issue:2.


Medical Applications of Virtual Environments

N J Avis

Department of Computer Science School of Engineering and Computing University of Hull Hull HU6 7RX UK.

Abstract

The combination of Virtual Reality, or more appropriately Virtual Environments, and medicine appears to be a compelling mix which has attracted much media interest recently. A brief review of the applications of Virtual Environments in medicine is presented. Due to the limitations of space it is assumed that the reader is familiar with the capabilities of present day Virtual Environment systems. Furthermore, this paper is not intended to be a comprehensive, formal review of all the applications and literature pertaining to the use of Virtual Environments in medicine, but aims to convey a flavour of the potential of Virtual Environments in medicine, together with the challenges that must be surmounted to ensure future Virtual Environments can fulfil their promise in a wide range of medical applications.

1. Introduction

Virtual Environments have the potential to revolutionise many scientific and engineering disciplines by providing novel methods of visualising and interacting with complex data sets. The emergence of Virtual Environments and their use in medical applications is especially exciting as this follows the huge advances afforded by the development of various medical imaging modalities, volume visualisation and the recent transformation in some surgical procedures by the adoption of minimally invasive, or key-hole surgery.

Several Virtual Environment systems have already been constructed to assist the acquisition of the skills necessary for minimally invasive surgery using laparoscopes and endoscopes (Satava, 1992). Such activities have led to considerable media interest regarding the future uses of Virtual Environments in medicine, and speculation about the next generation of 'cyber' or 'Nintendo' surgeons. Whilst there is considerable scope for the application of Virtual Environments in minimally invasive surgery, such activities are only one example of how Virtual Environments can be utilised in medicine.

The following sections present a brief review of some of the current medical applications of Virtual Environments. Deficiencies of some of these systems are highlighted and the paper concludes with some comments and suggestions as to how these deficiencies may be addressed to provide more effective Virtual Environments for medicine.

2. Education and Training

New surgical procedures are emerging every year and it is difficult for clinicians to keep up to date with new techniques. Perhaps the most immediate potential benefit of Virtual Environment systems will be in improving teaching and training procedures for doctors and surgeons.

2.1 Teaching of anatomy

Historically medical students have acquired their practical experience of gross human anatomy by the dissection of cadavers, and have gained their theoretical background from reference to anatomy textbooks and atlases. A single cadaver is often shared by a group of medical students, which means that the students are unlikely to be exposed to the full range of biological variability and/or pathology. Furthermore, once a dissection has been performed by one student, the same procedure cannot be satisfactorily performed again by another student in the group. New multimedia teaching packages are now appearing to assist student learning of anatomy. The ultimate form of these would be a Virtual Environment in which students could repeatedly dissect a computational or numerical cadaver.

ADAM (Animated Dissection of Anatomy for Medicine) is an example of a numerical cadaver. ADAM allows the user to strip away the layers of the body to reveal muscles, blood vessels, bones and other anatomical structures. The system allows the user to select a sagittal, coronal or axial cross-section through the body. Users can also call up supporting material in the form of histology or radiology images of specified anatomic sites. Although only presenting 2D images to the user, the effect of stripping away various layers effectively results in a 2.5D representation. Ideally ADAM should be extended to provide a full 3D numerical cadaver, and other groups, including the Human Interface Technology Laboratory at the University of Washington and the National Library of Medicine in the USA, are working on the construction of 3D numerical cadavers (Weghorst, 1992). However, the construction of such numerical cadavers is a non-trivial task. First the image data must be obtained in the form of a stack of 2D medical tomographs. These tomographs may come from a range of imaging modalities such as X-ray Computed Tomography (X-ray CT) and Magnetic Resonance Imaging (MRI) to reveal the hard and soft tissues respectively. Next the various anatomical features within a single tomograph need to be identified and segmented so the various structures can be extracted, tagged and stored. These structures need to be compared with those extracted from adjacent tomographs to ensure interslice coherence, so that a consistent representation of the extracted structure is developed. Finally a database and rendering system must be constructed to allow the visualisation of the anatomical structures which preserves their spatial orientation and relationships with other structures. With all this in mind it is not surprising that the construction of the ADAM system, with its 40 layers of anatomy and 18,000 separate structures which can be viewed from a number of cross-sections, proved a very time consuming task, taking a team of anatomists and medical illustrators roughly the same number of man-months as it takes to make a feature-length Disney animation (Todd, 1992). Nevertheless, the availability of these numerical cadavers is central to the future development of Virtual Environments in a wide range of medical applications.
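Much of the bookkeeping lies in the interslice-coherence step. The toy sketch below (Python, with invented data; in a real system the segmentation of each tomograph is the hard, largely manual step) matches structures in adjacent slices by overlap so that each anatomical structure keeps a single tag through the stack:

```python
# Link segmented structures across adjacent tomographs by overlap, so
# each anatomical structure receives one consistent identifier.

def overlap(a, b):
    """Fraction of pixel-set a that also lies in pixel-set b."""
    return len(a & b) / len(a) if a else 0.0

def link_stack(slices, threshold=0.5):
    """slices: one {label: set_of_pixels} dictionary per tomograph."""
    tags = {}        # (slice_index, label) -> structure id
    next_id = 0
    for i, structures in enumerate(slices):
        for label, pixels in structures.items():
            matched = None
            if i > 0:
                for prev_label, prev_pixels in slices[i - 1].items():
                    if overlap(pixels, prev_pixels) >= threshold:
                        matched = tags[(i - 1, prev_label)]
                        break
            if matched is None:
                matched, next_id = next_id, next_id + 1
            tags[(i, label)] = matched
    return tags

# Two slices: region 'a' continues as 'x', while 'y' is a new structure.
slices = [{"a": {(0, 0), (0, 1)}},
          {"x": {(0, 0), (0, 1)}, "y": {(5, 5)}}]
print(link_stack(slices))   # {(0,'a'): 0, (1,'x'): 0, (1,'y'): 1}
```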

2.2 Minimally invasive surgery training

Minimally invasive surgical techniques appear to be very much in vogue. Instead of making large incisions to gain access to the worksite, minimally invasive surgery involves making several small incisions into which specially designed instruments are introduced. Surgeons operate without directly seeing or touching the patient's body. Such operations represent one end of the spectrum of telesurgery, a form of telepresence, since the surgeon watches on a monitor the video images conveyed from the end of the endoscope or laparoscope, giving a close-up view of the patient's internals, while manipulating other remotely guided instruments from outside the patient's body.

Proponents of minimally invasive surgery highlight the potential advantages of the technique by pointing to a reduction in trauma to the patient and the decreased post-operative recovery time, together with the associated reductions in cost. Expectations are that within ten years some 70% of surgical operations will be conducted using these techniques (Cuschieri, 1993). However, such operations require new skills to be learnt by the surgeon. Recently there have been concerns over the outcomes of some minimally invasive procedures performed by inexperienced surgeons.

This has led to the formation of regional training centres in minimally invasive surgery throughout the UK. Virtual Environments promise to assist the widespread introduction of minimally invasive surgery techniques by:

- providing training systems to allow surgeons to acquire the skills necessary for the adept manipulation of laparoscopes and endoscopes

- extending the capabilities of such systems by providing improved visualisation systems and more natural ways of manipulating surgical instruments such as endoscopes

Virtual Environment systems have been constructed and marketed by a variety of firms including Ciné-Med (Sims, 1993) and Ixion (Hon, 1992). These systems involve the use of physical manikins into which spatially tracked minimally invasive surgical instruments are introduced and manipulated. The spatial positions of these instruments are conveyed to a powerful graphics workstation (in the case of the Ciné-Med system, a Silicon Graphics Crimson Reality Engine) and the appropriate view of the numerical body parts in the field of view of the instruments is rendered. Force or kinaesthetic feedback is conveyed to the user by the instruments straining or pushing against structures within the manikin, providing non-visual cues for navigating through the internal structures of the body and reactions to the simulated procedures performed.
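The basic frame loop of such a trainer is easy to outline. The sketch below uses stand-in tracker and renderer interfaces of our own invention (not the Ciné-Med or Ixion APIs): each frame, the tracked pose of the instrument inside the manikin is read and the numerical anatomy is drawn from the scope's tip.

```python
# Skeleton of a minimally-invasive-surgery trainer: poll the instrument
# tracker, then render the numerical body parts from the scope's pose.

class FakeTracker:
    """Stands in for the spatial tracker on the laparoscope."""
    def read(self):
        position = (0.0, 0.0, 0.1)     # scope tip inside the manikin
        direction = (0.0, 0.0, 1.0)    # looking along the scope axis
        return position, direction

def render_from(position, direction, anatomy):
    # A real system would build a view transform here and draw the
    # organ models; we just report what would be drawn.
    print(f"render {anatomy} from {position} looking along {direction}")

def trainer_loop(tracker, anatomy, frames=3):
    for _ in range(frames):
        position, direction = tracker.read()
        render_from(position, direction, anatomy)

trainer_loop(FakeTracker(), ["liver", "gall bladder"])
```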

3. Diagnosis

3.1 Visualisation and navigation of medical image data sets

A physician is typically presented with a wide and diverse range of diagnostic information on which to base a diagnosis. The emergence of a number of medical imaging modalities in the last 20 or so years has provided an effective diagnostic tool for a variety of conditions. However, the information contained within these images is usually presented as a succession of 2D tomographs. Physicians and radiologists become very adept at determining the 3D relationships of complex structures given these 2D views. Conveying these complex relationships to colleagues or to the patient, who do not possess such skills, requires a more direct means of displaying 3D relationships. Ideally such data could be viewed and explored within an immersive Virtual Environment.

The rapid development of data visualisation systems, arising from the falling cost and increased performance of graphics workstations, provides a means of displaying these 3D data sets by the use of volume or surface rendering techniques. However, the large amount of data to be displayed typically leads to poor interaction times on all but the very fastest computers. Special purpose hardware platforms have been proposed, usually employing parallel processing techniques, to provide systems capable of rendering the data in response times that allow effective interaction with the data set. If such systems become widely available they could be directly linked to the imaging system to provide real-time imaging of the patient. This, combined with the emergence of open-magnet MRI systems, presents the possibility of real-time interventional MRI systems, whereby an image of the data acquired from the MRI system is rendered in real-time and superimposed on the patient, above whom the surgeon is poised ready to conduct an operation (Anzai et al, 1993). Given access to the correct Virtual Environment navigational tools it is possible, for instance, that a surgeon could fly up through the patient's aorta to look at the operation of a faulty heart valve, slow down the motion to assess the cause and consider the best course of treatment. Whilst such systems may appear somewhat far fetched, a similar type of system, using ultrasound imaging to visualise the unborn fetus in pregnant women, is presently being researched by the group at the University of North Carolina, Chapel Hill (Bajura et al, 1992).
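A toy sketch of why volume rendering is so costly (our own illustration of front-to-back compositing, with invented sample values): the accumulation below must be repeated for every pixel of every frame, which is why interactive rates demanded special purpose hardware.

```python
# Front-to-back compositing of voxel samples along a single viewing ray.

def composite_ray(samples):
    """samples: (colour, opacity) pairs ordered front to back."""
    colour, transparency = 0.0, 1.0
    for sample_colour, opacity in samples:
        colour += transparency * opacity * sample_colour
        transparency *= (1.0 - opacity)
        if transparency < 0.01:     # early termination: ray is opaque
            break
    return colour

# Soft tissue (dim, translucent) in front of bone (bright, opaque).
ray = [(0.3, 0.2)] * 4 + [(1.0, 0.9)]
print(round(composite_ray(ray), 3))
```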

3.2 Assessment of balance disorders

Patients suffering from dizziness commonly present to Ear, Nose and Throat (ENT) surgeons. Assessment of such patients is difficult, with the main worry, both for the doctor and the patient, being the failure to establish the presence of central neurological disease. Careful clinical observation of eye movements in dizzy patients can provide valuable information about the presence of central neurological disease. The examination includes observation of spontaneous eye movements, movements on command and those precipitated by optokinetic, position and caloric stimuli.

Research at Hull has begun to investigate ways in which quantifiable tests can be constructed by detecting movements of the eyes whilst visual stimuli are presented to the patient using a Virtual Environment system. In addition to this work, Tesio (1992) is investigating patients with neuro-motional problems, especially those affected by ataxia. Tesio is studying patients by immersing them in a virtual environment whose objects conform to a different set of physical laws from those of the real world. In this way patients can be subjected to situations which result in large sensory conflicts, and by studying the reactions of the patients useful diagnostic information regarding the patient's condition may be obtained.

4. Treatment

4.1 Reconstructive surgery

Traditionally surgeons have used a variety of simple yet effective tools to help plan surgical operations. Such aids include the use of material patches to predict the amount of skin needed to cover a wound, and clay to model the effect of reconstructing a nose. More complex treatment planning tools are required to assist the successful completion of more sophisticated operations, and today CAD-like systems are already in common use for orthopaedic and reconstructive surgery.

Whilst such systems are beneficial to the planning of certain operations, in many cases they lack the ability to model functional as well as spatial relationships. Researchers at the Dartmouth-Hitchcock Medical Centre and MIT have been working on the construction of a virtual leg to model its biomechanical behaviour (Chen et al, 1992). In this model the joints, muscles and tendons are modelled to represent the overall behaviour of the leg. In this way the effects of dysfunction or the effect of surgery on the performance of the leg can be simulated and used to help plan surgical operations. In the longer term the researchers intend to extend this model to other parts of the human body and investigate ways in which the model can be customised to produce a computational model of a particular patient suffering from a given condition.

4.2 Radiation treatment planning

Many techniques are used for the curative and palliative treatment of cancer. One such technique is the irradiation of the tumour by ionising radiation sources. The radiation can either be delivered by sources placed surgically within the body to irradiate the affected area, or by targeting an external radiation beam. In the case of treatment by an external radiation source, a treatment plan is constructed from multiple applications of the external radiation source aligned to intersect at the area of interest. The aim is to find an arrangement of beams intersecting at the area of interest such that the tumour receives sufficient dose whilst the dose received by the surrounding healthy tissue is minimised.

Presently such treatment plans are determined by the radiologist with the aid of 2D tomographs of the patient and a computer workstation to calculate the resulting iso-dosage graphs. Since the problem is truly three dimensional, researchers at the University of North Carolina, Chapel Hill have developed a Virtual Environment to assist the radiologist by providing a three dimensional model of the patient and a graphical representation of the radiation beam (Aukstakalnis and Blatner, 1992). The radiation beam may be positioned by use of a virtual hand controller and the radiologist can visualise the effect from any position, including the beam's source. In this way it is hoped that the placement of multiple beams and the visualisation of their net effect will be more intuitive. Research is now proceeding to give the user a display of the inside of a virtual thorax to visualise the resulting dosages received by the internal organs of the patient from a particular arrangement of radiation beams.
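The arithmetic behind the beam arrangement is simple to illustrate. In the toy sketch below (geometry and dose values are invented), each beam deposits dose along its path through a 2D grid and doses sum, so only the cell where the beams intersect reaches the full level:

```python
# Two orthogonal beams intersecting at a 'tumour' cell: dose accumulates
# along each path, and only the intersection receives the combined dose.

GRID = 7
dose = [[0.0] * GRID for _ in range(GRID)]

def fire_row_beam(row, strength=0.5):
    for col in range(GRID):
        dose[row][col] += strength

def fire_col_beam(col, strength=0.5):
    for row in range(GRID):
        dose[row][col] += strength

fire_row_beam(3)   # both beams aligned to intersect at the tumour (3, 3)
fire_col_beam(3)

for row in dose:
    print(" ".join(f"{d:.1f}" for d in row))
# Healthy tissue on each path receives 0.5; only cell (3, 3) receives 1.0.
```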

5. Rehabilitation

Virtual Environments are being used as a means for patients to confront and control their fears and phobias (Whalley, 1993). Patients suffering from arachnophobia can be presented with, and invited to interact with, virtual spiders whose attributes can be changed to identify what it is that the patient finds so repellent. As well as their therapeutic uses, such systems would appear to raise important issues regarding the potential misuses of Virtual Environments.

Virtual Environments also have a role to play in empowering differently abled people in the community since they can be used to help assess the design of buildings for wheelchair access and the use of facilities before they are built, helping to ensure easy access and compliance with government directives.

In addition to these uses, various medically related projects have also been proposed which involve the use of Virtual Environment technology rather than the Virtual Environments themselves.

5.1 Use of virtual environment technology to assess movement

Rehabilitation science is concerned with assessing how a disability alters specific physiological functions and anatomical structures, and the principles by which residual function or capacity can be measured and used to restore function. Various rehabilitation workstations have been proposed which use Virtual Environment technology to assess the degree of movement (both voluntary and involuntary) patients have in their hands. The same workstations are used to track the effectiveness of physiotherapy regimes or the degeneration of function.

For instance, researchers at Rutgers University have constructed a diagnostic-rehabilitation system integrating the Teletact Glove, developed by the Advanced Robotics Research Centre at Salford, with the Rutgers Portable Force Feedback Hand Master (Burdea et al, 1992). The Teletact glove is used to record the range of patient movements and forces as a means of diagnosis. The researchers report increased accuracy of this technique over more traditional methods using goniometers and strain gauges gripped by the patient, together with a reduction in the data acquisition time. A rehabilitation plan is then designed, involving the patient exercising using the Force Feedback Hand Master, whose forces are generated by the system in response to the patient grasping a virtual object. Physicians at Loma Linda University have also been working with patients with spinal-cord injury, stroke and traumatic brain injury, using Virtual Environments to manipulate virtual objects and practice specific skilled motor tasks (Warner and Jacobson, 1992).

5.2 The prosthetic use of advanced computer interfaces.

Another use of Virtual Environment technology is as a means of providing novel computer interaction devices and the assessment of interface design for the differently abled.

The Glove Talker system developed by Greenleaf Medical Systems allows a person with speech difficulties to communicate by means of hand gestures whilst wearing a DataGlove virtual hand controller (Greenleaf, 1992). Gesture recognition software determines the positions of the fingers whilst gesturing, as measured by the virtual hand controller, and converts them into text that is presented on a screen and into speech via a voice synthesiser. The gesture vocabulary is customised for each patient, so maximising the use of existing abilities. The Gesture Control System, also developed by Greenleaf Medical Systems, allows severely disabled individuals to control suitably adapted devices via gestures. The Biomuse system developed by BioControl Systems represents the next possible step in the development of such systems (Knapp and Lusted, 1992). The Biomuse is a general purpose bioelectric signal processor which gives the user volitional control over external electronic devices directly from nervous system signals. In this way eye movements recorded from the electrooculogram (EOG) can be processed in real-time to control a cursor on a screen. Such systems have obvious utility for the severely disabled and may eventually change the way we all interact with computers.
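A sketch of the gesture-recognition step, assuming a per-user vocabulary of finger-flexion readings matched by nearest neighbour (the sensor format and vocabulary are our own illustration, not Greenleaf's actual software):

```python
# Classify a glove reading (five finger-flexion values in [0, 1])
# against a customised gesture vocabulary by nearest neighbour.

VOCABULARY = {
    (0.0, 0.0, 0.0, 0.0, 0.0): "hello",   # open hand
    (1.0, 1.0, 1.0, 1.0, 0.0): "yes",     # fist with thumb out
    (0.0, 1.0, 1.0, 1.0, 1.0): "help",    # index finger extended
}

def classify(reading):
    """Return the word for the stored gesture nearest the reading."""
    def distance(gesture):
        return sum((a - b) ** 2 for a, b in zip(gesture, reading))
    return VOCABULARY[min(VOCABULARY, key=distance)]

# A noisy fist with the thumb out still maps to 'yes'; a real system
# would pass the result to a voice synthesiser rather than printing it.
print(classify((0.9, 1.0, 0.8, 1.0, 0.1)))
```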

6. Discussion

Whilst the potential of Virtual Environments in the medical domain is enormous, as indicated by the range of systems reported above, careful consideration must be paid to the needs of surgeons and of other medical personnel in order to develop effective Virtual Environments. Limitations in current technology and techniques need to be identified and used to direct future research. Over and above the advances in technology that will increase the utility of Virtual Environments for all applications, such as enhanced processing/graphics capabilities and increased resolution displays, more domain specific issues need to be addressed to help promote the future uses of Virtual Environments in medicine. A brief overview of some of these issues is given below.

Numerical Cadavers - The software tools associated with the construction of numerical cadavers require further development, and it is anticipated that the issues and techniques involved in the construction of numerical cadavers will emerge as a critical aspect of Virtual Environment research in the medical domain in the short term. Without the widespread availability of these numerical cadavers at reasonable cost, research in many medical applications of Virtual Environments will be severely hampered. In the longer term these tools will need to be extended to allow the rapid development of patient-specific body parts, as required by surgical simulators used to practice an operation on a particular patient prior to surgery. To be of maximum utility to medicine such systems will also have to address the considerable challenges associated with the integration of physiological as well as anatomical information into these systems.

Surgical Simulators - Whilst systems have been constructed to provide training environments for surgeons learning minimally invasive surgical techniques, the provision of surgical simulators for conventional open surgery is likely to prove very much more challenging. This is because such systems will need to provide very precise and high resolution haptic display systems. This in turn will require the computer modelling of non-rigid structures within the body such as the liver, bowels and stomach, together with realistic reactions to procedures conducted on them, i.e. deforming when probed, bleeding or bursting when cut etc. The construction of high resolution haptic displays and the representational issues associated with the accurate modelling of biological systems present many additional challenges to researchers of such Virtual Environments.

Visualisation of medical data sets - It is very unlikely that surgeons will accept the use of immersive Virtual Environments where the scene within their whole field of view is synthetically generated. When dealing with actual patients, they may insist on see-through or augmented display systems whereby the Virtual Environment information is overlaid on the surgeon's view of the patient. Issues associated with data fusion and the stability of such displays need to be investigated.

Telesurgery - This paper has not touched on the potential uses of teleoperation systems in medicine, although such systems have been demonstrated. Since these systems involve the remote control of robotic systems performing procedures on the patient, they present many health and safety concerns. Such systems must be safety critical and able to cope with the additional problems associated with time lags in the transmission of movements over the communications channel and unreliable communication links. Given these additional concerns, it may be more prudent first to fully understand the problems associated with robot assisted surgery and develop solutions to these problems before further extending these systems to allow teleoperation.

7. Conclusions

The medical community is curious about the possible benefits that Virtual Environments may bring. In many respects the expectations of the medical community have been raised beyond the capabilities of present day systems by some of the more speculative reporting of the media. It is now time to assess the achievements to date and the capabilities of present day Virtual Environments so that the fundamental issues concerning the delivery of effective systems for a wide range of medical applications can be identified and addressed.

References

Anzai Y, Desalles A A F, Black K L, Sinha S, Farahani K, Behnke E A, Castro D J and Lufkin R B, Interventional MR Imaging, RadioGraphics, 13:4, July 1993.

Aukstakalnis S and Blatner D, Silicon Mirage: The art and science of virtual reality, Peachpit Press, 1992, pp 210-212.

Bajura M, Fuchs H and Ohbuchi R, Merging virtual reality with the real world: Seeing ultrasound imagery within the patient, Computer Graphics Proceedings of SIGGRAPH '92, 26:4, 1992.

Burdea G, Langrana N, Silver D, Stone R and Dipaolo D M, Diagnostic/Rehabilitation system using force measuring and force feedback dextrous masters. Proceedings of Medicine Meets Virtual Reality, San Diego, June 4-7, 1992.

Chen D T, Rosen J and Zeltzer D, Surgical Simulation Models: From body parts to Artificial Person. Proceedings of Medicine Meets Virtual Reality, San Diego, June 4-7, 1992.

Cuschieri A, Minimal Access Surgery: Implications for the NHS. HMSO Publication, 1993.

Greenleaf W J, DataGlove, DataSuit and Virtual Reality: Advanced technology for people with disabilities, Proceedings of Virtual Reality and Persons with Disabilities, Los Angeles, March 18-21, 1992.

Hon D, Tactile and Visual Simulation: A Realistic Endoscopy Experience. Proceedings of Medicine Meets Virtual Reality, San Diego, June 4-7, 1992.

Knapp R B and Lusted H S, Biocontrollers for the Physically Disabled: A Direct Link from the Nervous System to Computer. Proceedings of Medicine Meets Virtual Reality, San Diego, June 4-7, 1992.

Satava R M, Virtual Reality Surgical Simulator: The First Step. Proceedings of Medicine Meets Virtual Reality, San Diego, June 4-7, 1992.

Sims D, The point where lines converge, IEEE Computer Graphics and Applications, July 1993.

Todd D, Making Anatomy Come Alive, New Media, July 1992.

Warner D and Jacobson L, Medical Rehabilitation, Cyber Style. Virtual Reality Special Report, AI Expert, July 1992.

Weghorst S, Inclusive Computing in Medicine, Proceedings of Medicine Meets Virtual Reality, San Diego, June 4-7, 1992.

Whalley L J, Ethical Issues in the application of virtual reality to the treatment of mental disorders. In Virtual Reality Systems, Eds R A Earnshaw, M Gigante and H Jones, Academic Press, 1993. pp 273-287.

Contact Details:

Dr N J Avis Department of Computer Science VERC - School of Engineering and Computing University of Hull Hull HU6 7RX UK

Direct Line: +44 (0)482 465247

General Office: +44 (0)482 465951

Facsimile: +44 (0)482 466666

email: N.J.Avis@dcs.hull.ac.uk


The Use of VR for Enhanced Visualisation in Systems Simulation

Robin Hollands and Neil Mort

Dept. of Automatic Control & Systems Engineering, University of Sheffield. e-mail: r.hollands@sheffield.ac.uk

Abstract: The requirements for a mixed-mode simulation language for simulating real systems, and the use of virtual reality graphical techniques to enhance simulation output, are presented. The benefits of using advanced graphical output techniques are discussed. The SWOOP simulation package and VROOM graphics package are presented, and finally the incorporation of virtual reality into SWOOPVR is demonstrated with an example simulation of a model toy car factory.

1. The Need For System Simulation

A system can be thought of as a collection of elements that are interdependent and which interact with each other. Simulation is a key part of understanding and successfully manipulating any system. Although it is possible to gain an understanding of the system, and to try out different control methodologies, by experimenting on the real system itself, system simulation is often desirable for a number of reasons:

- Experimentation may be too expensive, either in lost production or because of the equipment and personnel requirements for setting up and execution.

- Experimentation may be too dangerous if, for example, the system involves hazardous material, or puts operators at risk. It may also be that the system is an essential part of some safety process, or emergency response.

- The system may be too slow. Because simulated time need bear no resemblance to real time, hours, weeks or even months can be simulated in seconds.

- The system may not exist.

Experimentation with a simulated system may mean a number of things: training, design, calibration etc.

2. Types Of Simulation

Computer simulations are not the only kind of system simulator available, and are actually a relatively new development. Simulations can be divided into a number of categories:

- Cognitive simulations: most humans have a mental model of the objects and processes they work with every day. They can guess what the outcome is likely to be for a given action, because they are effectively running a simulation of that system within their own heads.

- Desktop/hand simulation: a useful, if somewhat slow, tool for gaining a good understanding of a familiar system is to physically act out the process interactions by hand.

- Physical scale models: rather than committing to a full system, a fully functional, but less expensive replica is manipulated instead.

- Mathematical models: if the process can be represented adequately by mathematical equations, then the system can be examined and experimented on analytically.

- Analogue computers: although seen nowadays as rather crude, analogue computers were ideal for speeding up the analysis of the sort of systems that could be represented using mathematical techniques.

- Digital computers: originally only good at simulating the type of systems that consisted of logical flows and connections, digital computing has become sufficiently good at numerically approximating calculus that it has effectively made analogue computers redundant.

3. Types Of System

Systems can generally be divided into two types: continuous and discrete. Continuous systems are those where system variables change continuously over time, e.g. chemical reactions, Newtonian motion, analogue electronics. Discrete systems are those where changes occur instantaneously, at specific, but not necessarily fixed, points in time, e.g. parts moving through a factory, customers queuing at a bank, batch processing.

However, if these supposedly single-mode systems are examined more closely, elements of the other mode can usually be found, e.g. a mechanical switch in analogue electronics, or equations of motion for factory parts. The extent to which these opposing characteristics can be neglected determines the category in which a system can be successfully modelled [1]. In fact, it is now generally acknowledged that many practical/industrial systems contain elements of both these modes, and so a combined continuous/discrete simulation language is required to model them. The journal 'Simulation' stated in an editorial that: "... consider all simulands as potentially both continuous and discrete, thus amenable to modelling with a combined simulation language and those with no 'events' (continuous systems) and those with no 'state variables' (discrete systems) as special cases but ones requiring no special language" [2].

The idea of combined continuous/discrete systems is by no means a new concept, and was originally proposed by Fahrland [3] in the late 1960s. However, discrete and continuous systems require different mathematical modelling and simulation techniques, and so most commercial simulation languages are either single mode, or excel in one mode and provide fairly crude facilities for the other. This leads to a situation whereby the opposing mode is approximated using various techniques, e.g. discrete characteristics modelled continuously using non-linearities, continuous characteristics represented in discrete models using time-delays, or the continuous model being discretised [4].
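To make the combined approach concrete, the sketch below (our own minimal illustration, not SWOOP's implementation) interleaves the two modes: the continuous state is integrated in small steps, but integration always breaks at the next scheduled discrete event so that events fire at exact times.

```python
import heapq

def hybrid_simulate(state, derivative, events, t_end, dt=0.01):
    """events: heap of (time, handler); a handler may change the state."""
    t = 0.0
    while t < t_end:
        t_next = min(events[0][0], t_end) if events else t_end
        while t < t_next:                    # continuous phase
            step = min(dt, t_next - t)
            state += derivative(t, state) * step   # Euler integration
            t += step
        while events and events[0][0] <= t:  # discrete phase
            _, handler = heapq.heappop(events)
            state = handler(state)
    return state

# A tank fills at 1.0 unit/s; a discrete event drains 5 units at t = 6.
events = [(6.0, lambda level: level - 5.0)]
heapq.heapify(events)
print(hybrid_simulate(0.0, lambda t, level: 1.0, events, t_end=10.0))
# approximately 5.0: six units in, five drained, four more in
```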

4. Visualising Simulation Output

No matter how accurate the model is, a simulation is only as good as its output. As well as providing comprehensive data for later processing, the simulation output must also be capable of indicating how the simulation is progressing. Continuous system outputs can be represented fairly well by means of time-based graphs; however, these are not suitable for displaying the complex interactions of most discrete systems. Even in some continuous systems, the link between a graphical trace and the physical characteristic it represents is not always clear.

Manufacturing system simulators are widely used throughout industry to simulate the complex discrete processes found in today's factories. Simulation in this case may well be used for a number of things: staff training, testing new plant layouts, trying new management strategies etc., and in all these cases the need to easily see what is happening within the simulation is paramount. To this end, most manufacturing-based system simulation languages provide the facility to have a 2D animated representation of the process being simulated. The animation is usually representative of the factory environment, and parts can be seen flowing around the shop floor, machines being used etc.

5. Advantages Of Animation

There are many advantages to using animation instead of the more traditional statistical summaries, graphs and tables of data:

- The human mind is more tuned to processing images than numerical data. By presenting the information as pictures, the analyst's cognitive load is reduced, and he can therefore concentrate more on the task in hand than on trying to interpret the data.

- The workings of the system simulation can be easily verified. Firstly, the simulation engineer can check that the simulation is executing in the way he intended, and secondly personnel more familiar with the system can check that the simulation adequately represents the real world.

- Complex interactions of subprocesses can be more easily observed, which leads to a greater understanding of the system being modelled.

- Because pictures are a natural and neutral way of communicating, there are fewer problems due to differences in terminology and language between the different parties involved in the simulation exercise.

- Pictures are more 'friendly' than numbers and hence lead to a greater sense of confidence in simulation accuracy.

6. Using Virtual Reality As An Output Tool

The use of virtual reality takes all the arguments for animation a step further. Whilst the use of animation provides a more natural interface than numerical data, it still has the disadvantage of restricting the user's view and interaction. Standard 2D animation means that the user's view of the system is restricted to one or a finite number of predetermined viewpoints. Interaction with the process while it is being simulated is either absent, or confined to a series of menus and hotkeys.

It is the objective of virtual reality technology to remove, or at least minimise, the barrier between the simulation and the user caused by the mechanics of operating the computer hardware. In an ideal immersive VR simulation set-up the simulation user would feel that he is in the real process. The user would be free to move around the factory, look at any angle and from any position, and interact with the process without disturbing the continuation of the process simulation around him.

Even in the early days of the technology, visualisation like this had already been identified as desirable by major industrial users [5]. Initial work has been carried out at the Human Interface Technology Lab, University of Washington, attempting to interface a standard manufacturing simulation language with a virtual reality visualisation tool [6]. This system requires 4 networked graphics workstations, and replays the output of a precalculated simulation. The user experiences the virtual environment using a VPL stereoscopic display. The HIT Lab model was of a simple process in a bicycle factory; however, due to the extra computation, they were only able to generate around 1 frame per second, and had a tracking lag time worse than 0.3 seconds.

Work has also been carried out using a desktop VR approach to a simple robotic cell [7]. This is a much simpler process which avoids the complexities of an immersive set-up, and it runs at a much higher frame rate than the HIT Lab system whilst maintaining very high image quality.

7. SWOOPVR

SWOOPVR (Simulation With Object Oriented Programming and Virtual Reality) is an integrated simulation and virtual reality system being developed at the University of Sheffield, Dept of AC&SE. It is a combination of two proprietary pieces of software: SWOOP and VROOM.

SWOOP is a combined discrete/continuous system simulation toolbox that runs under a high level language (Turbo Pascal v5.5). It provides a large number of procedures, functions and data structures to facilitate the simulation of discrete systems using a process interaction or event-based approach, of continuous systems using difference or differential equations, or of a mixture of the two. SWOOP provides a large number of process blocks, special purpose continuous systems, a range of integration algorithms and a selection of random distributions.
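
Although SWOOP itself is written in Turbo Pascal, the heart of any event-based simulator of this kind is a time-ordered agenda of pending events. The following minimal sketch (in C++, with illustrative names rather than SWOOP's actual procedures) shows the idea:

    // Minimal sketch of an event-based simulation core: events are held in a
    // time-ordered agenda and executed in turn, advancing the simulation clock.
    #include <functional>
    #include <queue>
    #include <vector>

    struct Event {
        double time;                          // simulated time at which to fire
        std::function<void()> action;
        bool operator>(const Event& e) const { return time > e.time; }
    };

    class Simulator {
        std::priority_queue<Event, std::vector<Event>, std::greater<Event>> agenda;
        double clock = 0.0;
    public:
        void schedule(double delay, std::function<void()> a) {
            agenda.push({clock + delay, std::move(a)});
        }
        void run(double endTime) {
            while (!agenda.empty() && agenda.top().time <= endTime) {
                Event e = agenda.top(); agenda.pop();
                clock = e.time;               // jump straight to the next event
                e.action();                   // may schedule further events
            }
        }
    };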

As a simulation package on its own, SWOOP allows simulation output using statistical summaries, time based graphs, or high quality 2D animation. The user can interact with the simulation using a collection of mouse driven sliders, buttons and icons. An example of a typical display can be seen in figure 1. In order to allow SWOOP to work within a VR environment, a separate VR package, VROOM was written.

VROOM (Virtual Reality using Object-Oriented Methods) is a graphics package that allows the creation and manipulation of virtual environments. VROOM also runs under Turbo Pascal, on any 386 or 486 PC with a VGA card. Virtual objects can be created within the code, or loaded from file, and can then be positioned at any location or orientation within the virtual world. Objects can be transformed instantaneously or can be given a script to follow. The user can view the virtual world from any position or orientation and can navigate around the environment using keyboard, mouse, joystick, or Logitech Cyberman. Objects are displayed on a 320 by 240 resolution screen, and are flat shaded in 256 colours with ambient light and a single directional light source.
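
The shading model described is simple enough to sketch directly. Assuming unit-length facet normals and light direction (a C++ sketch for illustration only, not VROOM's actual Pascal code):

    // Flat shading with an ambient term plus a single directional light.
    struct Vec3 { float x, y, z; };

    float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Returns an intensity in [0,1], which would then select one of the
    // 256 palette colours for the whole facet.
    float facetIntensity(Vec3 normal, Vec3 lightDir, float ambient, float diffuse) {
        float d = dot(normal, lightDir);
        if (d < 0.0f) d = 0.0f;               // facet turned away from the light
        float i = ambient + diffuse * d;
        return i > 1.0f ? 1.0f : i;           // clamp to the displayable range
    }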

SWOOPVR brings together these two pieces of software in order to create a virtual simulation environment. The simulation automatically handles the creation and manipulation of the graphical objects representing the entities and resources within the system. Due to the high degree of computation required, the virtual environments use simple iconic representations of the real-life objects. The primary objective of the simulation is to show how these objects behave and interact, so there is no requirement to produce photorealistic images. Instead we trade off image authenticity for high frame rates, resulting in images which are simplistic, but move smoothly.

8. Immersive Hardware

So far, the discussion of SWOOPVR has considered purely a desktop, or through-the-window, approach to VR. However, once a desktop simulator has been achieved, it is only a small step to an immersive system. At Sheffield, simple, cheap immersive hardware has been constructed, based around a monoscopic head mounted display built from a pocket LCD TV set and a mechanical boom-type head-tracker [8]. Using this, the user can effectively put his head inside the simulated process. As he looks around, the head tracker senses the direction of his gaze and alters the displayed image accordingly. Also under experimentation is the incorporation of other low cost VR hardware including speech recognition/playback, 6D joysticks and 3D sound.
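
The coupling between tracker and display is straightforward in principle: each frame, the boom reports the gaze direction and the renderer recomputes the view from it. A minimal sketch (the tracker interface and names are assumptions for illustration):

    // Convert tracker-reported yaw and pitch (radians) into a view direction.
    #include <cmath>

    struct Gaze { float yaw, pitch; };         // read from the boom each frame

    void gazeVector(const Gaze& g, float& x, float& y, float& z) {
        x = std::cos(g.pitch) * std::sin(g.yaw);
        y = std::sin(g.pitch);
        z = std::cos(g.pitch) * std::cos(g.yaw);
    }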

9. Example: Model Car Factory

To illustrate the capabilities of SWOOPVR, a hypothetical model car factory has been simulated. The factory builds toy cars by manufacturing the car bodies and the car wheels separately out of plastic sheets which are stamped into a three dimensional shape. The car bodies are then spray painted and mounted on the wheels by a robot arm, before being crated and carried out of the process by the worker. A paint store is located by the side of the spraying machine, which empties continuously while the machine is spraying. If the paint level falls below a minimum, no more car bodies are processed until the store is refilled. Workers arrive at the process store at random intervals, carrying random amounts of raw material. The height of the stores is proportional to their content, and varies during the simulation run. The worker carrying the final crate out of the process also takes a random amount of time to return.
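
The paint store illustrates the mixed discrete/continuous character of the model: its level drains continuously while spraying, and a discrete rule gates the flow of car bodies. A minimal sketch of that logic (parameter names are illustrative, not SWOOP's):

    // Paint store: continuous drain gated by a discrete minimum-level rule.
    struct PaintStore {
        double level, minLevel, drainRate;       // e.g. litres and litres/sec
        bool canSpray() const { return level >= minLevel; }
        void update(double dt, bool spraying) {  // called every time step
            if (spraying) level -= drainRate * dt;
            if (level < 0.0) level = 0.0;
        }
        void refill(double amount) { level += amount; }
    };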

Fig 2. shows a series of screendumps from the simulation. While they cannot hope to capture the effect of moving through the factory in real time, they do show the capabilities of SWOOPVR rendering the simulated environment from different viewpoints and orientations.

10. Conclusions

SWOOP has been designed to be a flexible and easy-to-use mixed-mode simulation tool, and has proven successful in a number of case study simulations. The inclusion of virtual reality as an output device in SWOOPVR has shown that, whilst still limited, the results are visually impressive. Modelling a variety of real industrial systems with SWOOPVR, and gathering feedback from the users of those plants, will be essential in finally deciding whether the inclusion of virtual reality is of practical value rather than purely a novelty.

REFERENCES

[1] Cellier, F.E. (1979), "Combined continuous discrete simulation by use of digital computers, techniques and tools", PhD Thesis, Swiss Federal Institute of Technology, Zurich, Switzerland

[2] McLeod, J. & McLeod, S. (1988), "Simulation in the Service of Society", Simulation, Vol 51, No 4, pp 174-175, October

[3] Fahrland, D.A. (1970), "Combined discrete event continuous systems simulation", Simulation, Vol 14, No 2, pp 61-72, February

[4] Edwards, J.B., Mort, N., & Hollands, R.J. (1992), "The use of animation in the simulation of coalmine production systems", Proc. 3rd IEE Conference Factory 2000, York, 27-29 July, pp 278-284

[5] Rheingold, H. (1992), Virtual Reality, Mandarin Paperbacks, London, p366

[6] Jones, K.C. (1992), "Manufacturing simulation using virtual reality", MSc Thesis, University of Washington, Seattle, USA

[7] Neugebauer, J. & Flaig, T. (1994), "Virtual reality for the pharmacy industry", Proc. 4th London Virtual Reality Expo, London, 31 January - 2 February, pp 51-54

[8] Hollands, R. (1994), "Getting started in virtual reality : how to set up a VR lab - the garage VR solution", Proc. 4th London Virtual Reality Expo, London, 31 January - 2 February, pp 35-40

Fig 1. Example of SWOOP 2D Animation

Fig 2. Model Car Factory

From Dreams to Reality

Chris Hand

Department of Computing Science, De Montfort University, The Gateway, Leicester. LE1 9BH e-mail: cph@dmu.ac.uk

Abstract

Virtual Reality systems allow us to interact with objects that don't really exist. Although the virtual objects are not real, we interact with them as though they were real, in the same way that we react to an image in a mirror. Despite the fact that we may not realise (or even acknowledge) it, in everyday life we become accustomed to treating some things as real although they are not. For example, characters in books and films, and the events in a dream may take on many of the vivid aspects of reality.

This paper deals with the broader implications of virtual reality and considers whether the quest for photo-realism is a valid one. Trends in the underlying technology are considered and a survey is made of some "fringe" aspects of VR, along with the systems and devices that go with them.

1. Introduction

"Dream not of other worlds, what creatures there Live, in what state, condition, or degree; Contented that thus far hath been revealed Not of Earth only, but of highest Heaven."

- John Milton, Paradise Lost.

Dreams (in the literal sense) are a common, yet paradoxical experience. Paradoxical because they are patently not real, since the events within them take place while we are asleep. Dreams are our oldest experience of a virtual reality. Since early times, they have been considered a powerful source of inspiration or premonition, while in the metaphorical sense our dreams are our hopes and aspirations.

This paper considers some of the dreams and some of the realities associated with VR, discussing how the experience of VR may be improved. A brief survey of the use of additional modalities is followed by a look at trends in the technology of input devices. This includes some aspects of work that might be considered to be at the "fringe", such as using computers while we sleep.

1.1 Dreams as Virtual Reality

In VR we present an artificial stimulus to the user, which causes in them a reaction similar to that caused by a real stimulus. Note that stimulus here does not necessarily mean goggles and gloves. Stimuli may be presented at many levels, not just using sensory channels such as sight and hearing.

For example cinema, books, theatre and dreams are just a few of the day-to-day stimuli which engage us. The people, places and events represented by these stimuli are not real. We know they are not real. However, many people react to them as though they were real. The key is that we willingly suspend our disbelief, or to put it another way we forgo reality-testing.

An interesting question for researchers in VR, as well as psychologists and philosophers, is why dreams seem real. However, the crucial point for VR is this: a dream can seem vividly real despite the lack of sensory input. So if creators of virtual realities aim to increase the quality of their synthetic experiences purely by concentrating on fooling the senses better, then they must be misguided.

2. The Quest for Photo-realism

"Geometry is not reality. Interactivity is reality."

- Myron Krueger. [Gara91]

Recent developments in computer graphics technology have meant that artificial images of high quality may now be generated at very high speed. When these pictures are virtually indistinguishable from photographic images we call them photo-realistic. Photo-realism has been one of the dreams of computer graphics almost since its inception.

The latest VR systems are able to render millions of texture-mapped, shaded polygons in a second. Is there any reason to believe that an increase in the fidelity of the graphical image brings about a proportionate increase in the realism of the experience? It seems unlikely. In fact, experience with relatively crude, low-resolution VR systems has shown that even these can provide a reasonably believable experience, especially when the visual display is moving.

There are problems associated with the assumption that better visuals make for more realism. Firstly, creators of artificial realities are designing an experience, not just a display. Secondly, sensory substitution alone does not create a new reality - the experience must be interactive. The ways in which the user interacts with the system, both physically and mentally, need to be considered. Current VR systems are mainly confined to using graphics and perhaps sound (three-dimensional if we're lucky). The rest of the human experience is largely ignored. Hence, the experience of the cyberspaces which these systems create is extremely isolating. John Perry Barlow, a user of VR systems since the early days of VPL, describes it as "like having had your everything amputated" [Barl93]. Why should so much emphasis be placed on graphics? Certainly, vision makes a large contribution towards understanding the world for most of us. However, this does not mean that a blind person has a lesser experience of reality.

3. Enriching the Experience

"Designing human-computer experience isn't about building a better desktop. It's about creating imaginary worlds that have a special relationship to reality - worlds in which we can extend, amplify and enrich our own capacities to think, feel and act."

- Brenda Laurel. [Laur91, p32].

"The awareness of imaginary entities and events might be ascribed to the operation of the perceptual system with a suspension of reality-testing."

- James J Gibson [Gibs86, p263].

3.1 Reality is in the Mind of the Beholder

Computer-based virtual worlds are experiences, not just displays. To perform useful work (or to be entertained) users must in effect "suspend reality-testing". That is to say that they should be strongly involved with the experience (or "engaged" to use Laurel's term) rather than observing the user interface or its artefacts. Sensory substitution is not enough; the experience needs to amount to more than sensory input.

Those who create these experiences must do more than design screens. The whole experience needs to be stage-managed; the scenery is only part of the play. Designers of multimedia software have recently begun to accept this greater responsibility, while the software houses which produce games have been employing script-writers, graphic artists, scenery designers and musicians for some time.

In considering classical drama, Aristotle formulated a six-layer model, known as the six qualitative elements of structure. The lowest of these levels is "enactment", which corresponds to the sensory aspects of the representation when the model is applied to the human-computer interface [Hand94]. Each of the other levels ("pattern", "language", "thought", "character" and "action") builds on the one below it. Even Aristotle knew that designing the whole experience means not just making use of sensory stimulation, but also considering the experience at higher cognitive levels. In the long term greater attention will need to be paid to the psychological mechanisms used if we are to design the experience as a whole. It may also be helpful to use ideas from theatre, film and other established media.

In the short term we can make the experience richer by paying attention to other modalities rather than placing most of the emphasis on visual representation.

3.2 Making use of Other Modalities

In the current generation of VR systems a great deal of significance has been attached to the presentation of visual images to the user. Since an experience of reality does not depend entirely on visual perception, it follows that any graphical or video displays used should also be supplemented with the use of other modalities. The idea is to display information to the other human senses such as hearing and touch.

What is equally important is that many users (or potential users) of virtual worlds technology are unable to make use of some modalities due to physical disability. For these users the use of extra channels determines not whether the experience seems real, but whether they can use the system at all. VR is an enabling technology which allows disabled users to communicate and interact with other users on equal terms, both locally [Paus92] and across computer networks [Smyt93].

A further aspect of using other modes of sensory communication is in feedback. In any interactive computer system, users need feedback in order to tell whether they are performing a task successfully. Complex 3-D virtual worlds need to provide more feedback to users if they are to perform complex tasks. Tasks may be classified as visual, motor, locomotive or auditory, and each of these effector channels has a related feedback channel: visual, kinaesthetic/tactile, vestibular and auditory. Only the auditory (sound), tactile (touch) and vestibular (balance) channels are considered below.

3.2.1 Sound
Although human-computer interfaces have been using sound for years, it has mostly taken the form of "beeps"; nothing more than bells and whistles added to the interface as an afterthought. The sound display technology used in virtual worlds is much more advanced. The intention is to present audio signals to the user which create a virtual sound-image in 3-D space.

A technique known as spatial convolution may be used to transform a number of monophonic sound sources, creating a true 3-D sound field which may be presented to the user by way of standard headphones [Wenz92]. At present the equipment required to perform this task in real time is expensive, since the convolution requires a great deal of calculation. However, the increasing demand for high-quality digital audio products (especially for personal computers) means that these devices are likely to become much cheaper - especially as video and computer games start to include 3-D sound as standard.
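
In outline, spatial convolution filters a monophonic signal with a pair of measured head-related impulse responses, one per ear. A minimal (and deliberately naive, non-real-time) sketch:

    // Direct-form convolution of a mono source with one ear's impulse response.
    #include <cstddef>
    #include <vector>

    std::vector<float> convolve(const std::vector<float>& x,
                                const std::vector<float>& h) {
        std::vector<float> y(x.size() + h.size() - 1, 0.0f);
        for (std::size_t i = 0; i < x.size(); ++i)
            for (std::size_t j = 0; j < h.size(); ++j)
                y[i + j] += x[i] * h[j];
        return y;
    }
    // leftOut  = convolve(source, hrirLeft);   // presented over headphones
    // rightOut = convolve(source, hrirRight);

The cost of running this at audio rates for several moving sources is exactly why the dedicated hardware mentioned above is currently so expensive.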

3.2.2 Touch
Displaying information in a tactile form is much more difficult than sound. Early tactile feedback mechanisms in VR systems used crude piezoceramic devices built into a glove [Zimm87], which produced a "tingling" sensation in the finger. One approach that has been adopted recently [John92] is to present small tactile display devices ("tactors") to the user's finger tips, for example using a glove device. The tactors have a matrix array of small pins which may be raised or lowered under computer control, creating a small amount of pressure on the user's skin. This technology is still very new and there are many alternatives which may be more successful. For example, it is possible to evoke touch sensations within the skin using electrical stimulation [Kacz91].

3.2.3 Balance
Although users of VR systems are often free to walk, run or fly around their virtual worlds, they are very rarely given any vestibular feedback. In other words they don't experience any apparent movement of their own bodies. However, techniques are available for producing vestibular sensations in the user. For example, a motion platform may be used which physically moves the user around in a small area to create the impression of tilting (in pitch, roll and yaw) and acceleration or deceleration in the x, y and z directions. A classic "trick" used in flight simulators is to accelerate the user forwards and then gradually decrease the acceleration to zero; the user interprets this as continual forward movement.
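
That trick can be stated almost as a one-liner: command an initial surge of platform acceleration and let it decay smoothly to zero. A sketch (the exponential decay law is an assumption; real washout filters are more elaborate):

    // Platform acceleration t seconds after onset: a brief surge that decays
    // towards zero, which riders tend to perceive as sustained forward motion.
    #include <cmath>

    double platformAccel(double t, double peak, double tau) {
        return peak * std::exp(-t / tau);
    }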

Although at present vestibular feedback is a very expensive technology (motion platforms don't come cheap), it is economically justifiable in some areas such as pilot training and entertainment. It may be possible in the future to evoke vestibular responses artificially, electrically or otherwise; however, great care must be taken to ensure that vestibular signals match the other cues and displays presented to the user, since sensory conflict [Oman91] is considered a cause of nausea ("simulator sickness").

4. Technological Trends and the VR Fringe

"A trend [...] is emerging. Its effect on the field of input devices is specifically to move from providing objects for the user to actuate through specific commands to simply sensing the user's body."

- Robert Jacob et al. [Jaco93, p70]

4.1 Alternative Input Devices

There are demonstrably useful ways of interacting with computer systems which have been largely overlooked.

Two-handed input [Buxt86] is easily achievable, either with custom-built input devices or using the more common ones such as mice and joysticks, and yet it is hardly ever considered during the design of the human-computer interface.

As VR moves from the research lab to the office, non-intrusive video techniques for anatomical tracking will become more prevalent. This will allow the computer to calculate the position and orientation of the hands [Well91], the eyes [Ware87], the head and even the whole body [Krue91] without forcing the user to wear instrumented clothing.

Surprisingly, speech recognition is one area which has been neglected. Even though today we can command hands-free, in-car telephones to dial numbers for us simply by speaking to them, we are still not regularly interacting with computers in the same way. Combining speech with gestures [Bolt80] is an ideal way for users to work in virtual, keyboard-free environments, and it is one which they prefer over gestures or speech alone [Haup93].

Despite the good sense inherent in many of the systems described above, commercially-available VR systems still show a distinct lack of imagination in their use of input controllers, mostly concentrating on electromagnetic trackers, instrumented gloves and force-balls.

4.2 Biocontrollers

There is no reason why we shouldn't look to our other biological functions as ways of interacting with virtual environments. An emerging trend in computer input devices is to move towards passive sensing of the user's body. A good example of this is the BioMuse biosignal processor [Lust92], which is designed to collect and interpret electrical activity from the brain (electroencephalogram, EEG), muscles (electromyogram, EMG) and eye movements (electro-oculogram, EOG), so that these may be used for controlling computer systems. Although we are a long way from being able to communicate with computers by thoughts alone, there are researchers working on systems to interface computers directly to the human nervous system [Scie90][Agne90].

4.3 Dreams of the Future

As mentioned above, dreams often seem to be very real. Using a virtual reality we hope to make a task easier to perform. Could we do this with dreams? If the average adult spends 8 hours a day sleeping then they spend a third of their lives asleep. Perhaps we could put this time to some use.

Certain types of dreams - so-called "lucid" dreams - are commonly experienced, during which it is possible to take charge of the dream events [LaBe80] and literally control what happens in the dream. During this time of free thought it may be possible to try out new ideas or scenarios, build new designs and generally think creatively. The difficulty is in recognising that lucid dreaming is taking place: after all, the person is asleep.

Imagine now an electronic device that signals to the sleeper, without waking them, that dreaming has begun. This might be performed by shining a gentle light onto the eyelid, or by generating some quiet electronic sound. The detection of the onset of dreaming is fairly straightforward, since it is usually accompanied by bursts of rapid eye movements (REM). If an REM detector were constructed, for example by detecting electrical activity in the eye muscles or optically detecting movement of the eyelids, then this could be used to trigger the light or sound generator. The sleeper would then be notified that they were dreaming and they could, if they wished, take control.
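
A detector of this kind reduces to thresholding smoothed eye-movement activity. A purely illustrative sketch (the smoothing scheme and parameter values are assumptions, not taken from any real device):

    // Flag probable REM when smoothed EOG energy crosses a threshold.
    class RemDetector {
        double activity = 0.0;              // running estimate of EOG energy
        double alpha;                       // smoothing factor, e.g. 0.1
        double threshold;                   // tuned per sleeper
    public:
        RemDetector(double a, double th) : alpha(a), threshold(th) {}
        bool sample(double eog) {           // called at the sensor rate
            activity = alpha * eog * eog + (1.0 - alpha) * activity;
            return activity > threshold;    // true => trigger the cue light
        }
    };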

Although this might itself sound like a futuristic dream, it is in fact already a reality. The Lucidity Institute, set up by a group of researchers based at the Stanford University Sleep Research Centre (USA), is dedicated to harnessing the power of lucid dreaming. The DreamLight device sold by the institute [Luci93] works exactly as described above, shining a flashing LED onto the eyelid. Work performed by La Berge and his team also indicates that it is possible for lucid dreamers to communicate with their external environment while sleeping [LaBe81]. In common with the "biocontrollers" mentioned above, EEG, EOG and EMG techniques are all frequently used in experimental measurement of the state of sleeping subjects.

Although this could be said to be an area of "fringe" science, there seems to be a certain amount of interest in making this technology both available and commercially viable (several US patents were granted for dream-detecting apparatus in the late 1980's). The question is whether the virtual reality of dreams could ever be used in a positive, constructive way.

5. Conclusions

This paper has described how dreams are similar in some ways to virtual reality. Once we have awoken from the dream of photo-realism, we may start to concentrate on designing the whole experience for the user, making use of expertise from other media such as film and theatre.

Enriching the experience means not only considering the higher level cognitive aspects of using the system, but also providing better feedback. Making use of other modalities such as sound and touch makes complex tasks easier to perform, as well as breaking down the barriers which prevent access by some disabled users. This is another way of designing the whole experience.

Trends in computer input devices are towards passive sensing of the user's body, both in terms of anatomical tracking and in using bio-signals to control the computer. These techniques help to increase the bandwidth of human-computer communication without encumbering the user.

Dreams (in the literal sense) may one day be used for communicating or working. At any rate, we can surely profit from taking notice of our dreams (in the metaphorical sense), since what is only a dream today may be a reality tomorrow.

6. References

[Agne90] Agnew, W F. and McCreery, D B (Eds). Neural Prostheses: Fundamental Studies. Prentice-Hall, 1990.

[Barl93] Mondo 2000, A User's Guide to the New Edge. Thames and Hudson (London), 1993. p272.

[Bolt80] Bolt, R A. "Put-That-There": Voice and Gesture at the Graphics Interface. Computer Graphics, Vol 14 No 3, July 1980 (Proceedings of ACM SIGGRAPH '80). pp262-270.

[Buxt86] Buxton, W and Myers, B A. A Study of Two-Handed Input. in Proceedings of CHI'86. pp321-326. 1986.

[Gara91] Garassini, S. The Ultimate High of Myron W. Krueger, father of artificial reality. in Tech Images/Paris-Cite No 18, Oct/Nov 1991. pp41-42.

[Gibs86] Gibson, J J. The Ecological Approach to Visual Perception. Lawrence Erlbaum Associates, 1986.

[Hand94] Hand, C. Other Faces of Virtual Reality. De Montfort University Department of Computing Science, Technical Report TR94/3. Leicester, UK. 1994.

[Haup93] Hauptman, A G and McAvinney, P. Gestures with Speech for Graphic Manipulation. International Journal of Man-Machine Studies, Vol 38, 1993. pp231-249.

[Jaco93] Jacob, R J K, Leggett, J L, Myers, B A and Pausch, R. Interaction Styles and input/output devices. Behaviour & Information Technology Vol 12 No 2, 1993. pp69-79.

[John92] Johnson, A D. (TiNi Alloy Company, Oakland California. USA). Programmable Tactile Stimulator Array System and Method of Operation. United States Patent 5,165,897. November 24th 1992.

[Kacz91] Kaczmarek, K A, Kramer, K M, Webster, J G, Radwin, R G. A 16-Channel 8-Parameter Waveform Electrotactile Stimulation System. IEEE Transactions on Biomedical Engineering. Vol. 38 No. 10. October 1991. pp933-943.

[Krue91] Krueger, M. Artificial Reality II. Addison Wesley, 1991.

[LaBe80] La Berge, S P. Lucid Dreaming as a Learnable Skill: A Case Study. Perception and Motor Skills, Vol 51 (1980). pp1039-1042.

[LaBe81] La Berge, S P, Nagel, L E, Dement, W C, Zarcone, V P. Lucid Dreaming verified by Volitional Communication during REM Sleep. Perception and Motor Skills, Vol 52 (1981). pp727-732.

[Laur91] Laurel, B. Computers as Theatre. Addison-Wesley, 1991.

[Luci93] The Lucidity Institute, Inc. 2555 Park Boulevard, Suite 2, Palo Alto, California 94306, USA. Lucid Dreaming Catalog, Winter 1993.

[Lust92] Lusted, H S and Knapp, R B. Biocontrollers: A Direct Link from the Nervous System to Computer. Proceedings of the Medicine meets Virtual Reality Conference, San Diego, California (USA). June 4-7, 1992.

[Oman91] Oman, C M. Sensory conflict in motion sickness: an Observer Theory approach. in Pictorial Communication in Virtual and Real Environments, Stephen R Ellis (Ed.). Taylor and Francis (London), 1991.

[Scie90] Tapping into Nerve Conversations (Research News), Science No 248, p. 555. 4th May 1990.

[Smyt93] Smythe, P. "The Use of Networked VR to Assist People with Special Needs". Virtual Reality and Persons with Disabilities conference, San Francisco, USA. June 17-18, 1993.

[Ware87] Ware, C and Mikaelian, H H. An Evaluation of an Eye Tracker as a Device for Computer Input. Proceedings of CHI+GI'87, pp183-188.

[Well91] Wellner, P. "The DigitalDesk Calculator: Tactile Manipulation on a Desk Top Display". Proceedings of ACM Symposium on User Interface Software and Technology (UIST '91). Nov 11-13 1991. pp27-33

[Wenz92] Wenzel, E.M. (1992). Three-Dimensional Virtual Acoustic Displays. In T. Middleton (Ed.) Proceedings of Virtual Worlds: Real Challenges -Papers from SRI's 1991 Conference on Virtual Reality. Westport, CT: Meckler Publishing. pp.83-88.

[Zimm87] Zimmerman, T G, Lanier, J, Blanchard, C, Bryson, S and Harvill, Y. A Hand Gesture Interface Device. Proceedings of CHI+GI'87. pp189-192.

The Teaching of Virtual Reality in the United Kingdom

Nick Avis, Rob Macredie and Derek Wills

Virtual Environments Research Centre (VERC) School of Engineering and Computing The University of Hull

Abstract

The aim of this paper is to stimulate discussion between parties interested in the teaching of Virtual Reality (VR). The paper will present some practical observations gained from teaching VR at The University of Hull. Problem areas will be highlighted and potential solutions based on co-operation and collaboration between academic sites will be proposed.

Introduction

Virtual Reality (VR) is a developing multidisciplinary field which is attracting widespread interest from industry and academia. The public perception of VR is influenced by exaggerated, sometimes inaccurate, claims about the potential it offers. Whilst this can help to promote the field, academic modules are needed to provide rigorous education to those with a serious interest in the subject. Such modules can help to provide a more balanced understanding of the underlying issues in VR, and can equip people with relevant skills for research and development in the area. Some VR modules already exist, and many more are likely to be developed in the coming years.

The relative youth of the field provides the opportunity for the UK VR community to influence the development of coherent and balanced module components which will further the academic standing of the area. This paper will report our experiences in teaching VR, highlighting the difficulties that we feel may be common to other institutions. We will go on to suggest ways in which these problems may be addressed by a concerted and co-ordinated effort on the part of the academic VR community.

Current situation

In 1992 the University of Hull offered one of the first taught undergraduate modules in the area of VR. Since its inception the module has proved a popular final-year option in the departments of Computer Science and Electronic Engineering.

As with any new course, initial difficulties were encountered but these were quickly addressed and resolved. This has allowed us to focus on identifying the underlying difficulties associated with teaching a truly multidisciplinary subject. These include, but are not limited to, the following:

* the need for instructional expertise in potentially diverse areas: VR draws on skills which may not be fully represented in a single department - for example, human factors and hardware development;

* the ability to orient the course to the background of the student: VR will be of interest to a wide range of students in different departments. The background skills of these students will vary, and the emphasis of the VR module should accommodate this;

* access to core information: the textbooks that exist tend to achieve their aims, but few address all issues of VR. This makes it difficult to gain an overview of the field. More specialist research information is also difficult to gather since it is widely dispersed across a range of specialist journals;

* practical activities and resource implications: ideally students should be given the opportunity for hands-on experience in the development, integration, use and assessment of VR systems. There are obvious difficulties in providing adequate resources to support this. Resources include not only the equipment for VR but also the overheads associated with the construction of robust practical exercises.

We feel that others must be experiencing similar problems, and that these issues must be addressed if the teaching of VR is to develop successfully.

The way forward

In this section we will propose several mechanisms to alleviate the problems that we have identified and to support the effective teaching of VR in the UK. These mechanisms are obviously not a definitive list, but may serve to promote discussion between interested parties. The overall aim of this is to develop the maturity of the field and to ensure a quality learning experience.

* formation of consortia for curriculum development: forming consortia of interested academic sites will aid the development and exchange of course material and avoid duplication of effort. This will support the channelling of energies, allowing sites to focus on specific areas of interest;

* development of core elements: the consortia identified above can support development of a suite of core elements of VR with explicit aims. These elements will provide a coherent series of modules suitable for a wide range of academic and industrial courses;

* supporting dissemination of information: consortia may also be useful in reviewing the current state of VR. For example, the relation of new books or emerging research to course curricula may be assessed by one member of the consortium and the information shared. This is likely to provide better coverage of the area, and to reduce duplication of effort;

* development of practical course components: consortia can draw on their collective experience to define an affordable core of practical exercises and demonstrations for different types of platform.

These suggestions highlight our belief that the development of effective teaching materials depends critically upon co-operation and collaboration between interested academic sites in the UK. This will relieve pressures on staff time and avoid duplication of effort. Institutions will be able to focus on providing in-depth components in areas of particular expertise, supported by a core curriculum addressing VR as a whole. We also feel that co-operation and collaboration will help to bring the UK VR community together, and establish an effective skill base to feed through into research and development in academia and industry.

Conclusion

In this paper we have identified some of the problems with teaching VR based on our experiences, and have suggested ways of alleviating the difficulties. We hope that our observations and suggestions will provide a basis for discussion. The relative youth of this field provides a unique opportunity for these discussions to influence the development of VR teaching and to further the maturity of the discipline.

Contact Details

Please address all correspondence to:

Rob Macredie VERC: Department of Computer Science School of Engineering and Computing The University of Hull HULL HU6 7RX

Direct Line: +44 482 465910 Office: +44 482 465951 Fax: +44 482 466666 Email: R.D.Macredie@dcs.hull.ac.uk

Virtual World Design: Interactive Creation and Manipulation of Virtual Objects and Worlds in 3D Space

Richard Abbott, Ken Chisholm, Peter Johnson, Russell Pacy Napier University, Edinburgh

Abstract

This paper describes a 3D "world designer" and "object manipulator" currently under development by the authors at Napier University.

At present the majority of virtual reality research concentrates on high performance systems. It is felt that there is a niche for an inexpensive but highly functional system with an intuitive user interface for the creation of both objects and worlds.

The 'Virtual World Design' system comprises two components, the 'Object Designer' and the 'World Designer/Navigator', both of which have been implemented using X Windows and C / C++ on a Sun SPARCstation.

The 'Object Designer' enables the user to create an object from a set of geometric primitives. Traditionally this would have been done using a Tri-View editor, which can present difficulties for people from a non-CAD or technical drawing background. Instead a solid object system has been developed in which the vertices can be elastically deformed in three dimensions, giving the metaphor of a shape being 'moulded' much as a sculptor would mould clay.

Objects which have been created within this editor can then be stored and positioned within the application scene using the 'World Designer/Navigator'. This is accomplished by the user 'flying' around their virtual world and placing objects in their preferred locations.

The benefits of 3D scenic navigation and object design are discussed.

Napier University, Edinburgh Department of Computer Studies Craiglockhart Campus 219 Colinton Road Edinburgh EH14 1DJ E-mail: vrprj@uk.ac.napier.dcs

1. Introduction.

The Virtual World Design (VWD) project was initiated in late 1991, as a result of the authors' fascination with the production of three dimensional images. At that time, the Virtual World design system was running in wire-frame, on a black and white Sun SLC SPARCstation.

Research in Virtual Reality (VR) seems to have become dominated by the technology around the actual production of these images, rather than by the mechanics behind the display of the images and their subsequent manipulation.

Whilst we recognise that there is a place for the use of data-gloves, head-mounted displays et al., there is also a place for a simpler system which, whilst conforming to the ideas and principles behind Virtual Reality, allows users to create and interact with their own and other people's worlds.

As stated by Roy S. Kalawsky [Kalawsky 1993]: "Current users of cyberspace are generally legally blind, headed for a stereoscopic headache, about to be motion sick, and soon to be struck by a pain in the neck due to the helmet weight".

The proposed research is directed towards an academically oriented package which supports the teaching of the methods behind the construction, design and implementation of virtual world environments.

Because the system does not depend on complex (and expensive) peripherals to display the resultant images and to interact with the virtual worlds which have been created, it is especially attractive to academic institutions which cannot afford some of the more esoteric equipment used with virtual reality systems.

The VWD system comprises two components: the 'Object Designer' and the 'World Designer/Navigator'.

2. The Object Designer Interface.

The purpose of the Object Designer is to provide an intuitive utility to create or modify objects used within the World Designer/Navigator. The Object Designer allows the user to directly manipulate objects in three dimensions, in a manner akin to a sculptor shaping clay. By allowing the user to shape the objects in three dimensions no prior experience of mechanical drawing or Tri-View theory is required to use the Object Designer. This reduces the cognitive processes involved in the creation of objects and thus makes the Object Designer ideal as an introduction to three dimensional graphics, for example, in educational institutions.

A user evaluation study is about to start, comparing the relative merits of the Tri-View interface approach with the WYSIWYG approach of the Object Designer.

The Object Designer is split into two main on-screen sections: the drawing canvas and the control panel (see figure 1).

Figure. 1 - Object Designer and drawing canvas.

The drawing canvas is dedicated to displaying an object. The user can interact with the object on the drawing canvas by selecting parts of the object (for example vertices or sub-objects). The 'Control Panel' provides a means to perform operations on the object and is sub-divided into sections concerned with viewing, forming and file operations.

The viewing position within the drawing canvas may be altered by selecting the appropriate directional button with the mouse. To change the granularity of movement about the object, stepping units may be altered by adjusting the 'Movement' slider bars.

The 'Forming' menu allows operations such as shearing, scaling, rotation, colour selection and deletion to be selected. Forming operations are performed on vertices selected by the user. Degrees of rotation and steps of movement depend on the values shown on the 'Effects' slider bars. Normally the choice of axis for rotations, scaling, etc. depends on the current 'Axis Selection'. Colour selection is provided by a colour palette which allows the user to select a drawing colour, thus altering the colour of a facet on the drawing canvas.

File operations allow the user to create a new object, store the current object, open and import previously created objects.

2.1 Creation and Manipulation of Objects.

The user creates objects by importing pre-defined primitives, or by importing previously defined objects. The primitives include basic shapes such as lines, cubes, spheres and panes. Importing objects provides a very intuitive facility to aid object creation: it should be readily apparent to the novice user that this facility allows the creation of objects encapsulating a set of sub-objects.

The forming operations allow the user to alter the shapes of selected objects. The user must initially select the vertices which require forming. The 'Selection On' button provides a facility for the user to switch between selections on individual vertices, polygons within the object, or the entire object. The user may choose between these three selections at any time. Figure 2 shows the effects the forming operations have on a cube.

Figure. 2 - An example of clay moulding as applied to a cube.
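
The moulding operation itself can be sketched simply: dragging a selected vertex displaces its neighbours by an amount that falls off with distance, giving the elastic effect shown in figure 2. An illustrative C++ sketch (the linear falloff law is an assumption; the actual deformation used is not specified here):

    // Elastic "clay moulding": pull the grabbed vertex and its neighbours.
    #include <cmath>
    #include <vector>

    struct V3 { float x, y, z; };

    void mould(std::vector<V3>& verts, int grabbed, V3 drag, float radius) {
        const V3 g = verts[grabbed];
        for (V3& v : verts) {
            float dx = v.x - g.x, dy = v.y - g.y, dz = v.z - g.z;
            float d = std::sqrt(dx*dx + dy*dy + dz*dz);
            if (d >= radius) continue;       // outside the elastic region
            float w = 1.0f - d / radius;     // 1 at the grab point, 0 at the edge
            v.x += drag.x * w; v.y += drag.y * w; v.z += drag.z * w;
        }
    }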

An example Tri-View system under development at Napier University is shown in figure 3.

It is felt by the authors that this manipulation and forming method readily enables the creation and moulding of objects. Although it is perhaps more difficult for this WYSIWYG approach to be as precise as the Tri-View method, the simplicity of the interface makes visualisation and usage clearer. To achieve more precision with this WYSIWYG approach it is proposed to introduce the concept of 'virtual rulers' and 'virtual protractors' in a future implementation.

3. The World Navigator/Designer.

The World Navigator/Designer application encapsulates two key aspects of virtual reality [Benedikt 1992]: firstly the creation of such worlds and secondly the notion of being able to "fly" or navigate around such worlds.

3.1 World Design.

In its "designer mode", the World Navigator/Designer enables the creation of virtual worlds. A virtual world consists of any number of objects previously created using the Object Designer and placed within a three dimensional space.

Figure. 3 - PRISM Tri-View system.

Objects may be selected by the use of the mouse, and manipulated using the Control Panel. The selected object becomes highlighted and the Control Panel may subsequently be used to manoeuvre the object to its desired location using the cursor buttons.

As in the Object Designer, forming operations to scale, rotate and shear the selected object may be carried out. This permits the user to customise the imported objects to fit the desired location within the virtual world.

Once a virtual world has been designed the user may store it for use at a later date. The Control Panel provides the functionality necessary to import objects and worlds from files.

3.2 Navigation.

Virtual world navigation may be thought of as analogous to walking through the real world. This is achieved by the user traversing around the world and subsequently interacting with the 3D rendering software.

Navigation and world design are accomplished with the aid of an X Windows interface which is consistent with the user interface employed by the Object Designer (see figures 1 and 4). Here, buttons and sliders are used to map movements in the 3D environment within the confines of a 2D display.

There are essentially two parts to the user interface or Control Panel: movement and scaling. Movement is accomplished through the use of arrow buttons which correspond to a movement of the camera location within the virtual world. Thus, zooming in/out, panning left/right, and up/down movements are simulated. The camera can also be rotated about all three axes, providing yaw, pitch and roll effects.

Figure. 4 - The World Designer/Navigator in use. (Objects inspired by 'Dactyl Nightmare' from Virtuality)

4. Implementation.

The 3D rendering software is implemented as a C++ class library. Using language features such as classes and templates, the necessary data structures can be constructed. The use of C++ tends to produce an elegant and portable solution.

A number of standard rendering algorithms are employed. The algorithmic concepts adapted to render 3D scenes are based upon the matrix mathematics presented in [Angell 1991] and [Rogers, Adams 1990]. Rendering pipeline algorithms covering the storage of objects, clipping and hidden surface removal are presented in [Barclay, Gordon 1993], [Blinn 1991], [Foley 1990] and [Rogers 1985]. The hidden surface removal algorithm used is a variation on depth-sorting and back-face culling as described by Tyler [Tyler 1992].
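
The two visibility steps can be sketched briefly: discard facets whose normals face away from the viewer, then paint the remainder from back to front. A simplified C++ sketch (not the project's actual class library; sorting whole facets by a single depth value is the crudest variant):

    #include <algorithm>
    #include <vector>

    struct Facet { float nx, ny, nz;   // outward-pointing facet normal
                   float depth; };     // representative distance from the eye

    void cullAndSort(std::vector<Facet>& facets, float vx, float vy, float vz) {
        // Back-face culling: (vx,vy,vz) points from the eye into the scene, so
        // a non-negative dot product means the facet faces away from us.
        facets.erase(std::remove_if(facets.begin(), facets.end(),
            [&](const Facet& f) { return f.nx*vx + f.ny*vy + f.nz*vz >= 0.0f; }),
            facets.end());
        // Painter's algorithm: draw the farthest facets first.
        std::sort(facets.begin(), facets.end(),
            [](const Facet& a, const Facet& b) { return a.depth > b.depth; });
    }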

The chosen data structure is represented by an Entity Relationship (ER) diagram of a virtual world as shown in figure 5.

Figure. 5 - ER Diagram depicting the implemented data structure.

5. Conclusion.

Although necessity has been the "mother of invention" to some extent, the feasibility of implementing an interactive VR system using non-specialist, inexpensive hardware has been clearly demonstrated.

The WYSIWYG method of object manipulation in the Object Designer is more user friendly than the Tri-View approach, particularly for non-engineers without technical drawing experience. This 3D "hands-on" interface is also consistent with the typical navigational interface employed in VR systems.

Future Enhancements.

As the Virtual World Design project has developed throughout the academic year, the authors have periodically added features to their "Wouldn't It Be Nice If..." list. This currently consists of the following:

1. Object Manipulation Language.

This would be a small interpreted language which could be attached to objects. When the user activates an object, the Object Manipulation Language would be executed. Such a language would allow the object or classes of similar objects to be transformed in some manner.

2. Texture Mapping.

The authors are currently considering the mechanics of implementing real-time texture mapping.

3. ICCCM-compliant communication between disparate processes.

The authors hope to develop the software further so as to create a system whereby many users could interact within a common virtual world. This may be achievable using the X Window System across a network.

4. Lighting

The authors are considering the addition of real-time lighting algorithms to the Virtual World Design package.

5. Applications

It is proposed that possible application areas of the VWD system could include an interior decoration preview system and a kitchen design tool.

References.

Angell, Ian O. (1991). High Resolution Computer Graphics Using C. Macmillan, London, England.

Barclay, Ken., Gordon, B. (1993). C++ Problem Solving and Programming. Department of Computer Studies, Napier University, Edinburgh.

Benedikt, Michael. (1992). Cyberspace: First Steps. MIT Press, Cambridge, MA.

Blinn, Jim. (1991). A Trip Down the Graphics Pipeline: Line Clipping. IEEE CG & A, Vol. 11, No. 1, Jan. 1991, pp. 98-105

Foley, James D., van Dam, Andries. (1990). Computer Graphics: Principles and Practice - 2nd Ed. Addison-Wesley, Reading, Massachusetts.

Kalawsky, Roy S. (1993). The Science of Virtual Reality and Virtual Environments. Addison-Wesley, Wokingham, England.

Rogers, David F., Adams, Alan J. (1990). Mathematical Elements for Computer Graphics - 2nd Ed. McGraw-Hill, New York.

Rogers, David F. (1985). Procedural Elements for Computer Graphics. McGraw-Hill, New York.

Tyler, Andrew. (1992). Amiga Real-Time 3D Graphics. Sigma Press, Wilmslow, England.

Acknowledgements.

The authors would like to take this opportunity to thank some people who have helped contribute to the production of the Virtual World Design system.

Thanks to:

Pat Hawkshaw & Grant Allwell of the Mathematics Department in Napier for their time, patience and resources.

Andrew Swan, a final year undergraduate at Napier, for discussions and a screen shot of his Tri-View editor, 'PRISM'.

The staff and students of the Computer Studies Department of Napier University who have been contributors to both this paper and the Virtual World Design project.

Rob Kemmer for his support and proof reading of this paper.

An Overview of Virtual Reality Resources on the Internet

Sean Clark

LUTCHI Research Centre, Loughborough University, Leicestershire, LE11 3TU, UK E-mail: S.M.Clark@lut.ac.uk

This paper presents an overview of the many Virtual Reality related resources that are available on the Internet. Virtual Reality discussion groups are listed, as are anonymous FTP and Gopher archives, e-mailing lists and World Wide Web servers. Additionally, information is given on how to use the Internet to obtain software which will enable the construction of a "home-brew" VR system.

Introduction

The Internet is a "network of computer networks" spanning the globe. It currently links thousands of computers and literally millions of people (recent estimates suggest that there are at least 15 million users). The Net can be used to discuss topics of interest with like-minded individuals, to access remote databases (containing information on almost any topic) and to obtain free or low-cost software.

Many people with an interest in Virtual Reality find the Internet an invaluable source of information. This paper presents an overview of the resources available. It is intended primarily as an introduction for people who have yet to discover the Net. However, it is hoped that it also includes a few new sources of Virtual Reality information that will be unfamiliar to the more experienced Internet user.

For those planning to construct their own Virtual Reality system details of "home-brew" Virtual Reality materials available via the Internet are provided. As will be seen, it is possible to construct a very reasonable VR system for a surprisingly low cost.

Basic Internet Facilities

The most popular Internet communication medium is electronic mail. Most networked computers provide an e-mail service of some sort, and you may find that even though your company or University does not have full Internet access it does have external e-mail access.

The Internet's main discussion system is called Usenet News. In order to read discussion groups (or "newsgroups") you will need news "client" software and access to a news service. Many Internet providers (be it your University or a third party company) provide a news service and will be able to recommend client software for your preferred computer platform.

Information and software can be downloaded from Internet "sites" via the anonymous FTP and Gopher services. Anonymous FTP is a convention which allows the File Transfer Protocol (FTP) to be used to copy files between distributed computers. When you wish to obtain files from a remote "archive" you connect to its Internet address (either a text name in the form a.b.c.d or a numerical address) and log in using the user name "anonymous" and your e-mail address as a password. Once connected, simple Unix-like commands can be used to navigate the filing system. Files can then be downloaded using a "get" command. Gopher is a simple menu-driven system for accessing remote archives. Both anonymous FTP and Gopher client software is available for most computer platforms.
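
For readers who have not used it before, a typical anonymous FTP session might look something like this (the site and file are ones mentioned later in this paper; exact prompts vary between clients):

    $ ftp ftp.u.washington.edu
    Name: anonymous
    Password: your-email@your.site
    ftp> cd /public/virtual-worlds/papers
    ftp> get whatisvr.txt
    ftp> quit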

The World Wide Web is a new Internet standard in which all existing Internet services (including newsgroups, FTP and Gopher archives), as well as distributed "hypermedia" documents, are referred to using a standard address format or Uniform Resource Locator (URL). World Wide Web clients (such as the Mosaic system for the Macintosh, Windows PC and X terminals) allow users to access these services via simple point-and-click interfaces.

Virtual Reality Resources on the Internet

The VR Usenet Newsgroups

Information on the latest developments in the field of Virtual Reality can be found in the two moderated VR Usenet newsgroups - sci.virtual-worlds and sci.virtual-worlds.apps. Sci.virtual-worlds has a fairly broad subject range. Philosophical discussions on the potential impact of VR on society are mixed with details of new VR products, VR conference announcements, and requests by VR researchers for information and advice. Sci.virtual-worlds.apps is more focused, reporting primarily on "real world" applications of VR technology.

More information on these newsgroups, including the addresses of the current moderators, can be found by reading the "Sci.virtual-worlds Meta-FAQ" document. This is posted monthly to sci.virtual-worlds or can be obtained via anonymous FTP from ftp.u.washington.edu in file /public/virtual-worlds/Meta-FAQ.

For those not able to access Usenet News directly, postings to both newsgroups can be received via e-mail by subscribing to the virtu-l and vrapp-l e-mailing lists. To subscribe to virtu-l (which mirrors sci.virtual-worlds) send an e-mail message to the automated list server at listserv@vmd.cso.uiuc.edu with "subscribe virtu-l " in the body of your message. To subscribe to vrapp-l (sci.virtual-worlds.apps) send "subscribe vrapp-l " to the same address. If you experience any problems in subscribing then check the Sci.virtual-worlds Meta-FAQ for the name of the current list administrator and e-mail them directly.

Anonymous FTP and Gopher Archives

Numerous VR-related anonymous FTP and Gopher archives exist on the Internet. Typically, these hold materials such as VR software, papers and reports on VR research, and images of VR systems. A comprehensive list of archive sites can be found in the regular "vr-sites" posting to sci.virtual-worlds (an up-to-date version of this list can also be obtained from ftp.apple.com in file /pub/VR/vr_sites). Some of the most useful VR archives are described below.

The HIT Lab's anonymous FTP archive (ftp.u.washington.edu) is probably the largest single source of VR materials available on the Internet. The archive has two public hierarchies - /public/VirtualReality/ and /public/virtual-worlds. Useful reading materials such as trip reports, transcripts of speeches by VR luminaries, and the excellent "whatisvr.txt" introduction to the field can be found in the directory /public/virtual-worlds/papers. A particularly useful report in the HIT Lab's archive is the Virtual Reality bibliography in file /public/VirtualReality/HITL/Bibliographies/emerson-B-93-2.txt (Postscript and RTF versions of the file are also available).

The sunee.uwaterloo.ca anonymous FTP site holds, amongst other things, the latest release of the PC-based freeware rendering system REND386. This system is a must for any PC owner who is interested in building their own low-cost Virtual Reality system. Read the file /pub/WHATS_WHAT.README for a complete index of the site. REND386 is in directory /pub/rend386.

Macintosh users have their own anonymous FTP VR archive at ftp.apple.com in directory /pub/VR. The archive contains some demonstration "virtual worlds" created using Virtus Walkthrough, a freeware rendering system called Gossamer (in directory /pub/VR/graphic.systems), and Dr. StrangeGlove, the software component of a PowerGlove-to-Mac interface.

The UK Virtual Reality Special Interest Group's anonymous FTP and Gopher server is currently at eta.lut.ac.uk. Information stored at this site includes VR product specifications (in directory /Public/product-specs), Macintosh software (/Public/mac), PC software (/Public/pc), and GIF photographs taken at a recent VR exhibition (in directory /Public/hci93vr).

The WELL Gopher server at nkosi.well.sf.ca.us contains information on fringe Virtual Reality topics such as "cyberpunk" culture. The WELL is a commercial on-line conferencing system which hosts many discussion groups, including one on VR.

US magazine Wired regularly deals with the subject of Virtual Reality. Back issues are available free of charge from the Gopher server at wired.com. Alternatively, the "Wired Infobot" responds to e-mail requests for articles. For details send an e-mail message to infobot@wired.com with "get index" in the body of the message. A recent addition to Wired's Internet services is the World Wide Web server at address http://www.ncb.gov.sg/wired/WoWWW.html.

VR E-mailing Lists

In addition to virtu-l and vrapp-l, there are many other e-mailing lists dealing with Virtual Reality topics. The VR-SIG runs a general discussion and announcement list for all people in the UK with an active interest in VR. Send a message to Robin Hollands at R.Hollands@sheffield.ac.uk if you would like to join. A similar list for UK VR researchers only is run from Edinburgh University. E-mail vrlist-request@edinburgh.ac.uk for details. Local UK VR-SIG discussion lists also exist; e-mail Robin Hollands at the above address to find out if there is one for your area.

People interested in adapting a Nintendo PowerGlove for Virtual Reality use should join the "glove-list". To subscribe send an e-mail message to the automated list server at listserv@boxer.nas.nasa.gov with "subscribe glove-list " in the body of your message. If you have problems in subscribing then send a message to the glove-list administrator J. Eric Townsend (jet@nas.nasa.gov).

PC users who are wishing to find out more about the freeware rendering package REND386 can join two dedicated e-mail lists: one for announcements and one for general discussion. To join "rend386-announce" send an e-mail message to Majordomo@sunee.uwaterloo.ca with "subscribe rend386-announce" in the body of the message. To join "rend386-discuss" send "subscribe rend386-discuss" to the same address. Contact rend386-owner@sunee.uwaterloo.ca if you encounter any subscription problems.

Finally, the newly established European Virtual Reality Society is planning to set up a number of e-mailing lists for European VR users. Details will be posted to sci.virtual-worlds during the spring of 1994.

World Wide Web Servers

The World Wide Web integrates all existing Internet services and allows the remote viewing of hypertext documents. These documents can contain text, pictures, sounds, and even video clips, together with "links" to other documents on the Internet. Virtual Reality researchers are starting to use the Web to make yet more information available to the international VR community. For example, the following VR Web servers are amongst those currently available.

The NCSA's Virtual Reality Lab has a World Wide Web server which contains an overview of the hardware and software used in their research. The server also includes a hypertext formatted version of the "vr-sites" list of anonymous FTP archives. The address of the server's home-page is http://www.ncsa.uiuc.edu/Viz/VR/VRHomePage.html. The "vr-sites" listing is at address http://www.ncsa.uiuc.edu/Viz/VR/vr_other_ftp.html.

To access the HIT Lab's anonymous FTP site via the World Wide Web you need to connect to addresses ftp://ftp.u.washington.edu/public/VirtualReality and file://ftp.u.washington.edu/public/virtual-worlds. At the time of writing there is no Web-specific information in the archive (this is likely to change in the near future). However, there is a hypertext formatted version of the Sci.virtual-worlds Meta FAQ at address http://www.cis.ohio-state.edu/hypertext/faq/usenet/virtual-worlds/meta-faq/faq.html.

An interesting Web VR resource is the Virtual Environments Research Web server at address http://dutiws.twi.tudelft.nl/TWI/IS/vroverview.html. It includes some MPEG animations taken from a research project which was using Virtual Reality technology to help people overcome acrophobia. These can be seen at address http://dutiws.twi.tudelft.nl/TWI/IS/Afstudeerders/Kooper/Arophobia.html0.

The UK VR-SIG World Wide Web Server

All of the VR resources mentioned in this article, and many more, can be accessed directly or indirectly via the UK VR-SIG World Wide Web Server home-page. The SIG's Web server also has a number of unique information services including a calendar of forthcoming VR events in the UK and a hypermedia "exhibition" of research VR systems. The address of the server is http://pipkin.lut.ac.uk/WWWdocs/LUTCHI/people/sean/vr-sig/vr-sig.html.

Building a Home-brew VR System

Commercial Virtual Reality systems are extremely expensive, normally costing tens of thousands of pounds. This pricing is understandable: the high performance "graphics engines" and the specialist input and output devices required are currently not cheap to develop or to manufacture. However, almost as soon as the term "Virtual Reality" was coined, enthusiasts began to explore ways of constructing their own VR systems at a much lower cost. In order to make them affordable they used standard computer hardware, wrote their own rendering software, and developed home-made input and output devices.

The "home-brew" VR community now comprises of thousands of people from around the world. The community relies on Internet services, such as e-mailing lists and anonymous FTP archives, to exchange ideas and software. It also has its own paper-based magazine called PCVR (e-mail pcvr@fullfeed.com for more information).

REND386 for the PC

The flagship of the home-brew VR community is Bernie Roehl and Dave Stampe's REND386 software for the PC. REND386 is a freeware polygon rendering package which will run on an unmodified '386 or '486 PC. It comes complete with a number of example "virtual worlds" which can be explored by controlling a "through the window" view of the world. Users can construct new worlds by producing their own "world description" files. The file formats are defined in the documentation bundled with REND386. Additionally, the book "Virtual Reality Creations" by Dave Stampe, Bernie Roehl and John Eagan (Waite Group Press) describes the process of REND386 world building in great detail.
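To give a flavour of the world building process, the sketch below shows the general shape of a PLG-style object file as recalled from the REND386 documentation; treat it as indicative rather than authoritative. The first line gives the object's name and its vertex and polygon counts, followed by one "x y z" line per vertex and then one line per polygon giving a colour number, a vertex count and the vertex indices:

    CUBE 8 6
    -50 -50 -50
     50 -50 -50
     50  50 -50
    -50  50 -50
    -50 -50  50
     50 -50  50
     50  50  50
    -50  50  50
    16 4 0 3 2 1
    16 4 4 5 6 7
    16 4 0 1 5 4
    16 4 1 2 6 5
    16 4 2 3 7 6
    16 4 3 0 4 7

The bundled documentation and the book mentioned above remain the definitive references for the exact syntax, including the colour and surface codes.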

One of the most impressive aspects of REND386 is its support for a range of unusual computer peripherals, such as video game add-ons, 6D joysticks, and home-made head mounted displays. With the right combination of these peripherals REND386 allows the home-brew VR enthusiast to construct a complete VR system at very low cost.

The "classic" REND386-based home-brew VR system consists of a '486 PC running REND386 software, a Nintendo PowerGlove connected to the PC via the parallel printer port, a pair of SEGA Shutter Glasses with a home-made PC interface box, and a standard joystick or mouse. The Shutter Glasses allow the world to be viewed in three-dimensions. Virtual objects can be then "grabbed" and moved or rotated via the PowerGlove. The joystick or mouse is used for navigating the virtual world. Such a system can be constructed for roughly £150 excluding the cost of the PC.

The latest release of REND386, together with source code, sample worlds, and information on how to adapt a variety of video game accessories for VR use, can be downloaded via anonymous FTP from sunee.uwaterloo.ca. Look in directory /pub/rend386.

Gossamer for the Macintosh

Until recently Macintosh users did not have access to a freeware rendering package to rival REND386. This situation changed with the release of Version 2 of Jon Blossom's (blossom@cs.yale.edu) Gossamer software in January 1994. Gossamer is partially compatible with REND386 world description files and comes complete with a number of REND386 virtual worlds. However, unlike REND386, Gossamer does not currently support any peripherals other than those which are standard on the Macintosh.

Gossamer requires a 68020 or higher processor Macintosh running System 7 or System 6.x with 32-bit QuickDraw installed. The window on the virtual world can be resized to suit the Macintosh's speed - on a slower Macintosh the frame rate can be improved by simply making the view of the world smaller.

Gossamer 2.0 can be obtained via anonymous FTP from ftp.apple.com. It has its own sub-directory in /pub/VR/graphics.systems. Jon Blossom notes in the documentation that he may make the source code for Gossamer publicly available in the future.

Other Approaches to Home-brew VR

The more adventurous Virtual Reality home-brewer may decide to build a complete Virtual Reality system from scratch. For example, Robin Hollands of Sheffield University (R.Hollands@sheffield.ac.uk) has built "SWOOPVR". This PC-based system features a monoscopic head mounted display (a modified Casio LCD TV), a mechanical head tracker (which uses potentiometers fixed to a wooden boom), a standard computer joystick, and his own polygon rendering software. The SWOOPVR rendering software is freely available on the Internet and can be downloaded via anonymous FTP from eta.lut.ac.uk, directory /Public/pc. Despite the complexity (and good performance) of SWOOPVR, the whole system cost well under £1000 to construct, excluding the cost of the PC.

Finding Out More About the Internet

People wanting to find out more about the Internet would be wise to obtain a copy of Ed Krol's book "The Whole Internet" (O'Reilly Press) - this is an excellent introduction to the subject. Other books on the Internet are available from most large bookstores. If you need to set up your own Internet connection then look in the Appendix of this paper for a list of companies in the UK who can provide you with a dial-in service.

Acknowledgements

A version of this paper is due to appear in the World Wide Web Newsletter (contact Ivan Pope, e-mail 3w@ukartnet.demon.co.uk for information about the Newsletter).

Appendix - Internet Providers in the UK

This information was taken from a regular listing of Internet providers in the UK which is posted every month to a range of UK Usenet newsgroups by Paola Kathuria of ArcGlade Services Ltd (paola@arcglade.demon.co.uk).

Fees are for dial-up accounts with outgoing Internet access or for a full IP connection. The rates quoted are usually for standard services and you should contact the provider for details on additional services and rates.

Name: Demon Internet Systems
Fees: £2.50 registration, £10/month, no time charges
E-mail: internet@demon.net
Voice: 081 349 0063
Fax: 081 349 0309

Name: The Direct Connection
Fees: From £10/month, unlimited use, no time charges
E-mail: helpdesk@dircon.co.uk
Voice: 081 317 0100

Name: EUnet GB Ltd.
Fees: From £95/quarter
E-mail: sales@Britain.EU.net
Voice: 0227 475497
Fax: 0227 475478

Name: GreenNet
Fees: Non-commercial standard: £5/month + £3.60/hr peak, £2.40/hr off-peak
E-mail: support@gn.apc.org
Voice: 071 608 3040

Name: UK PC User Group - CONNECT
Fees: From £7 to £15.50/month or £160/yr (no time charges)
E-mail: info@ibmpcug.co.uk
Voice: 081 863 1191
Fax: 081 863 6095

Name: Public IP Exchange (PIPEX)
Fees: Dial-up services start at £400 p.a.
E-mail: sales@pipex.net
Voice: 0223 250120
Fax: 0223 250121

Presence in Immersive Virtual Environments

Anthony Steed, Mel Slater and Martin Usoh

Department of Computer Science Queen Mary and Westfield College University of London email steed@dcs.qmw.ac.uk

Abstract:

This paper describes the sense of presence in immersive virtual environments and our work at Queen Mary and Westfield College to try to isolate and evaluate factors that can influence it. We have an application context of virtual reality for architectural walkthrough, and this investigation of presence arose from our attempts to build a virtual environment within which the participants could navigate, interact and make judgements about the functional characteristics of the planned space. Firstly we constructed a model of presence purely in terms of the exogenous factors of the virtual environment and the display system. This was unsuccessful as people had widely differing reactions to the same stimuli. A tentative model based on the endogenous factors of the subject using Neuro-Linguistic Programming did however provide a good prediction of a person's reported sense of presence. Using these results we then constructed a navigation metaphor called the Virtual Treadmill which gave a slightly enhanced sense of presence in the walkthrough environment.

1. Introduction

Our project is based around constructing a virtual environment for architectural walkthrough, an environment that would allow architects and their clients to `visit' a proposed building to examine and possibly modify the structure. The simulation has to support three basic tasks:

1. Navigation around the environment

2. Interaction with the objects in the environment

3. Judgements about the functional characteristics of the environment

The first two can be accomplished with a desktop system, but it is our belief that the third cannot be accomplished without the participant feeling present inside the environment. This is because people make certain judgements about a space relative to their body - for example, judging whether a cupboard is within reach.

Our main aim is therefore to make a person feel `present' inside the virtual environment so that they can operate within it in a natural manner.

In section two we discuss the sense of presence, factors that influence it and approaches to measuring it. Our work on exogenous factors is discussed in section three and on endogenous factors in section four. Section five discusses the application of our previous work to the task of navigation.

2. Presence

Presence in an immersive virtual environment system differs from the sense of absorption, engagement or suspension of disbelief that may arise from a book or game in that after the experience many people describe the virtual environment as a place they have visited rather than an environment they have seen. Some subjects in our first pilot experiments wrote:

"My feeling when carrying on with the experiment was that of being in another part of the building where the experiment was held..."

"Looking back it feels more like somewhere I visited, rather than something I saw (as in a film), so I suppose I must have felt I was in the scene."

"In fact the `virtual reality' world was more real than I was expecting. I had the impression I was in a real room..."

We can thus define presence as the sense that the participant has of being in an environment other than that where their real body is.

There are several factors that are proposed to increase the sense of presence [She92,Loo92a,Loo92b,Hee92].

1. The data presented to the senses should be of high resolution.

2. The data should not be obviously from an artificial source. For example the displays should be refreshed at a rate high enough so the user does not see flicker and the displays themselves should not be so heavy that this becomes a source of fatigue.

3. The data presented should be consistent. For example if an object in view makes a sound then the sound should appear to originate from that direction.

4. There should be a wide range of possible interactions that the user can make. For example if there is a virtual table in the environment then you should be able to not just see it, but touch it and feel its weight.

5. The operator can effect changes in the environment.

6. There is a direct visual consequence of each of the user's movements.

7. There should be an obvious mapping between the user's movements and the movements of the virtual body or slave robot.

8. The virtual body or slave robot should be similar in appearance to the operator, so there can be an identification between the user's limbs and those of the representation.

9. Other objects or users in the environment should recognise and acknowledge the user in some way (such as a door opening as the user approaches).

Held and Durlach [HD92] and Loomis [Loo92a] also note that the operator's sense of presence can increase over time.

The above criteria do not specify a natural behaviour for any object, only that there should be a direct relation between efference and afference. Of course in a walkthrough application natural behaviour of objects should be our aim but not at the expense of consistency.

2.1 Measuring Presence

To determine the effectiveness of changes to the virtual environment we need a working measure of presence but, as Held and Durlach and Sheridan note, none currently exists. Methods do exist for measuring psychological states such as involvement and engagement, but it is our belief that presence is a different state. Suggested approaches for measuring presence include:

1. The user's reported sense of presence. This is complicated because the very act of enquiring about the user's state may change that state.

2. Observations of the user's behaviour. This takes observable reactions to certain situations as confirmation of the user's presence, for example shying away from looming objects or replying to a welcoming `hello' message.

3. Performance of tasks in real and virtual environments. This assumes that if a user performs a task in a virtual environment as efficiently and in the same manner as they do in a real environment then they must be present in that virtual environment. This however would work only for naturalistic environments and is of more use in the teleoperator field.

4. Discrimination between real and virtual events. This tests, for example, the user's ability to differentiate between sound cues that originate within the virtual environment and those that originate in the real world.

5. Incorporation of external stimuli. If the user interprets an external event, such as a loud noise, in the context of the virtual environment then they must be present in that virtual environment.

These all have associated problems, either breaking the metaphor for the environment or explicitly giving stimuli that might diminish the sense of presence.

For our initial experiments we measured presence in a variety of ways, but for the reasons noted above eventually settled on the reported sense of presence as given by the response to a set questionnaire.

3. Exogenous Factors

One of the factors that was suggested to increase presence was that the participant should have a virtual body. In architectural walkthrough this is particularly important as it gives important cues as to spatial relationships between objects.

Our first set of experiments dealt with testing whether the virtual body was a benefit and with beginning the process of constructing a measure of presence.

We therefore conducted a pilot experiment that would test whether a virtual body increased the sense of presence and generate further hypotheses. The experimental design is discussed at length in [SU92]. The subjects were split into two groups, one of which had a virtual body while the other had a simple arrow representing the hand, and all went into six rooms in the virtual environment in turn.

The first room was cluttered with objects and the task was to navigate to the other end of the room. The hypothesis here was that those with the virtual body would make fewer collisions with objects as they would be more careful about avoiding them.

The second room had objects that flew towards the position of the subject's real body with the hypothesis being that those without the virtual body would show a lesser reaction.

The third room involved the subjects building a pile of blocks during which the virtual body would disappear for a short period. The purpose of this was to give all the subjects a lengthy task to complete to assess whether task involvement would lead to a higher sense of presence and also to see whether those with a virtual body would react to its disappearance.

The fourth room was similar to the second except that the objects approached the face.

In the fifth room the virtual body was reorientated so that the subjects appeared to be upside down. The purpose of this was to see the effect of the disparity between the subjects' sense of orientation and the visual information presented.

The last room consisted of a chessboard upon which was a plank that led out over an abyss. Here we expected to observe a fear reaction, with a possible difference between those with and without the virtual body.

3.1 Results

The two measures of presence taken were the observed reaction to the approaching objects and the plank, and the reported sense of presence. With this group of people these did not correlate well - some people who had an adverse reaction to being on the plank did not rate themselves as being present. It could be that the sense of presence varied over time and the subjects were reporting their overall impression.

Our main interest was the influence of the virtual body on the sense of presence. From the direct results it was seen that those with the simple arrow reported a higher sense of presence than those with the body, contrary to our hypothesis. This could be explained by taking into account that those with a predisposition to travel sickness reported a higher sense of presence, and there were more of these people in the no-body group. The analysis also showed that the no-body group contained far more people who considered themselves able to adapt quickly to new circumstances.

By considering various other personal factors the following tentative conclusions were drawn:

* Those without a virtual body who mentioned display problems were likely to report a low sense of presence. No such difference was found among those with a virtual body.

* In the group that had a virtual body the females generally reported a higher sense of presence than the males. The reverse was true for the other group.

Overall then, the virtual body was seen to have an influence, but in a complicated manner [SU92,SU93a].

4. Endogenous Factors

Our experience with the first analysis led directly to our having to use a model of the subjects' psychology in order to fully understand the effects of the virtual environment.

The approach taken was to use the unorthodox Neuro-Linguistic Programming model [DGB+79]. This classifies people along two axes: representation system and perceptual position. The representation system determines whether the person's dominant mode of thinking is visual, auditory or kinesthetic - that is, whether they think in terms of pictures, by internal dialogue, or in terms of sensations and emotions. The perceptual position is the standpoint from which the person experiences and remembers events. This is either first, second or third person: they remember events from their own perspective, from that of another person, or from a disembodied viewpoint.

The representation system and perceptual position of each of the subjects were determined from the last part of the questionnaire, which asked the subjects to write about their experience, by counting the number of visual, auditory and kinesthetic predicates and references used. For example:

"In many of the rooms I visited, I felt I was really in that world"

would be classified as first position with a kinesthetic predicate. The outcome of analysing the results from the original pilot experiment in these terms was much more productive [SU93b,SU94].

* If a subject was more visually dominant then their reported sense of presence was higher, regardless of whether they had a virtual body.

* If a subject was more auditorily dominant then they were less present.

* For those with a virtual body, the more kinesthetic they were the more present they were. For those without a virtual body the opposite was true.

* The level of presence increases with the first perceptual position up to a certain point then decreases.

The interpretation of these results is quite intuitive. Because with our DIVISION ProVision 200 system the experience is primarily visual, the more visually orientated a person is the more present they are. There is almost no sound in the environment used for the experiments, so it follows that an auditorily dominated person might feel less present. Having a virtual body provides a grounding for proprioceptive cues, and so if there is a virtual body then the kinesthetically orientated person is more present, and if there is no body they are less present.

The effect of the first perceptual position is also intuitive but the quadratic factor is a problem. It may be that it is an artifact of the measurement process or it could be that some people who generally remember events from the first person standpoint cannot make the suspension of disbelief necessary in order to become present in the virtual environment.

This led us to conclude that increasing the quality of the visual and auditory channels is important, but is not sufficient for presence in the general case. The kinesthetic sense, which includes proprioception, is just as important, and for this reason the virtual body is an essential feature of the system.

5. Application

A further part of our work then took these results and applied them to the problem of navigation through an environment.

A typical virtual reality system allows the participant to move around by pointing in the direction they want to go and pressing a button. The cues provided by this are not what would be expected from really walking around an environment, which is after all what we are trying to simulate in a walkthrough application. In particular there are none of the proprioceptive signals from the body that it is walking.

We had seen that for kinesthetically orientated people this would decrease their sense of presence, as they would get visual cues for motion but none of the haptic cues. Given the physical limitations on the movement of the participant due to wires, a simple metaphor would be to let them walk on the spot to navigate [SSU93]. This metaphor we called the Virtual Treadmill. Walking on the spot is detected by using a neural net whose input is the position of the head. The neural net can distinguish between walking on the spot and any other motion by the pattern of up/down and left/right movements of the head.
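Although the authors use a trained neural net, the shape of the computation can be sketched with a simple hand-written rule: over a short window of head samples, walking on the spot shows up as vertical oscillation with very little horizontal drift. The sketch below is an illustrative stand-in only - the thresholds are invented and it is not the authors' network:

    #include <math.h>

    #define WINDOW 30   /* head samples per decision, e.g. one second at 30 Hz */

    /* Illustrative stand-in for the trained neural net: decide whether a
       window of head positions (in metres) looks like walking on the spot.
       The 2cm bob and 10cm drift thresholds are invented for the sketch. */
    int walking_on_spot(const float x[WINDOW], const float y[WINDOW],
                        const float z[WINDOW])
    {
        float ymin = y[0], ymax = y[0];
        float drift = 0.0f;
        int i;

        for (i = 1; i < WINDOW; i++) {
            if (y[i] < ymin) ymin = y[i];
            if (y[i] > ymax) ymax = y[i];
            drift += fabsf(x[i] - x[i-1]) + fabsf(z[i] - z[i-1]);
        }

        /* Walking on the spot: the head bobs vertically while barely
           translating horizontally. */
        return (ymax - ymin) > 0.02f && drift < 0.10f;
    }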

A short experiment then showed that this method of navigation, though (as expected) considered more complicated, could enhance the sense of presence.

6. Conclusion

We have constructed a preliminary model of presence in terms of Neuro-Linguistic Programming that has served as a basis upon which to evaluate various changes made to the virtual environment. To date this has been used to evaluate environments on just one virtual reality system with fixed display peripherals; it has yet to be seen whether it is a useful measure across systems. As it stands it explains the differences in reaction subjects have to similar stimuli, and highlights the areas that most need to be developed in order to achieve higher ratings of presence across the general population.

Given this model we have then applied it to a specific problem in our application and produced a more natural method of navigation.

7. Acknowledgements

This work was funded by the London Parallel Applications Centre in a collaborative project with DIVISION UK Ltd and Thorn EMI Central Research Laboratories. Anthony Steed is supported by a SERC grant.

8. References

[DGB+79] R. Dilts, J. Grinder, R. Bandler, J. DeLozier, and L. Cameron-Bandler. Neuro-Linguistic Programming I. Meta Publications, 1979.

[HD92] R.M. Held and N.I. Durlach. Telepresence. Presence: Teleoperators and Virtual Environments, 1(1):109--112, 1992.

[Hee92] Carrie Heeter. Being there: The subjective experience of presence. Presence: Teleoperators and Virtual Environments, 1(2):262--271, 1992.

[Loo92a] J.M. Loomis. Distal attribution and presence. Presence: Teleoperators and Virtual Environments, 1(1):113--119, 1992.

[Loo92b] J.M. Loomis. Presence and distal attribution: Phenomenology, determinants, and assessment. In SPIE Vol. 1666 Human Vision, Visual Processing, and Digital Display III, pages 590--595, 1992.

[She92] T.B. Sheridan. Musings on telepresence and virtual presence. Presence: Teleoperators and Virtual Environments, 1(1):120--126, 1992.

[SSU93] M. Slater, A. Steed, and M. Usoh. The virtual treadmill: A naturalistic metaphor for navigation in immersive virtual environments. In Martin Gobel, editor, First Eurographics Workshop on Virtual Environments, Polytechnical University of Catalonia, September 7, pages 71--83, 1993.

[SU92] Mel Slater and Martin Usoh. An experimental exploration of presence. Submitted for publication, 1992. Department of Computer Science, Queen Mary and Westfield College, University of London.

[SU93a] Mel Slater and Martin Usoh. The influence of a virtual body on presence in immersive virtual environments. In Virtual Reality International 93, Proceedings of the third annual conference on Virtual Reality, London, April 1993, pages 34--42. Meckler, 1993.

[SU93b] Mel Slater and Martin Usoh. Presence in virtual environments. In Proceedings of VRAIS'93, September 18-22, Seattle, Washington. IEEE, 1993.

[SU94] Mel Slater and Martin Usoh. Representation systems, perceptual position and presence in virtual environments. Presence, in press, 1994.

VR and the Missing Senses

Harvey Durrant

VR Research Group Human Factors, BT Laboratories

ABSTRACT

The last few years have seen VR emerge from laboratories into industry. This explosion of interest has accelerated the development of graphics and 3D sound technologies, with a corresponding increase in performance. But regardless of how realistic the sound and graphics are becoming, something is still missing from our interactive virtual environments before they can truly model reality: simulation of the sense of touch. There are two main mechanisms involved in haptic perception: tactile perception, the awareness of stimulation of the skin, and kinesthesis, the sense of joint positions, movements and the torques exerted upon them. A number of 'force-feedback' systems that can simulate kinesthetic sensations have already been released, and this report details a couple of them. In the field of tactile simulation the story is quite different: although a number of companies and research teams are working in the area, there are no products available that can provide true tactile simulation. This report will briefly detail the development being carried out around the world to provide true tactile simulation, and will provide an overview of the work being undertaken within the Human Factors Division of BT Labs.

INTRODUCTION

The field of Virtual Reality is evolving rapidly. The last few years have seen the technologies emerge from laboratories and into industry, where they are being adopted by a variety of professions. This increase in interest has helped spur on the development of high performance graphics and audio hardware as developers strive to provide more and more realism in their virtual environments (V.E's). As the field has developed there has been an increasing requirement for greater interaction between the user and the V.E they are inhabiting. To meet this requirement a number of systems have been developed that allow the user not only to interact with the objects inside the V.E but also to receive some form of haptic feedback indicating that an interaction has taken place. The following list details some of the systems available.

Advanced Robotics Research Limited / Air Muscle "Teletact"

One example of a tactile feedback glove is the "Teletact" glove produced by Advanced Robotics Research Ltd (ARRL) and Air Muscle Ltd. The system was developed from the pneumatic technology used to drive the puppets from the "Spitting Image" television programme.

The glove is lined with two layers of a Lycra-based material. On the underside of the glove, between the layers, there are twenty air pockets situated under each finger, the thumb and the palm area. The shape of each air pocket is tailored to suit the particular sensitivity of its corresponding hand area. The air pockets are fed by micro capillary tubes and can be inflated or deflated, up to 15 p.s.i., to impart a proportional sensation over a range of air pressures. The cost per system, including glove and control unit, is approximately £11000.

Although the glove provides rudimentary tactile sensations, users have noted that when an object is grasped, the pneumatic hiss as the pockets inflate provides sensory feedback at least as strong as the tactile sensation from the glove.

EXOS Inc.

EXOS Inc. provide two feedback systems for telepresence and virtual reality markets. The first system called TouchMaster is a simple tactile feedback device. It consists of voice coils held against fingertip or thumb pads. When the user touches an object in a virtual environment the voice coils are vibrated at approximately 270Hz, imparting a sensation. The basic system has six channels and costs $4000.

In contrast, SAFiRE, the second system marketed by EXOS, is a force feedback exoskeleton which provides joint torque feedback to the fingers. There are two models available, one providing five degrees of freedom and the other eight. They cost $75000 and $99000 respectively.

Rutgers University

The Micro-Piston Force Feedback System (MPFFS) consists of micro pneumatic actuators which apply forces to the fingers and thumb. The device weighs 40g and is small enough to be fitted to the underside of a VPL DataGlove. The MPFFS can supply a total force of up to about 13N, which is close to the average male/female maximum tip pinching force of between 13 and 16N. With this in mind Rutgers say that they can simulate the forces from anything that a hand can grasp. The basic system, which can impart forces to the first three fingers and thumb, costs about $20000. One system has already been ordered by NASA. Rutgers are also presently involved in a project with ARRL to integrate the Teletact glove and the MPFFS into a system for use in hand rehabilitation trials.

As well as the above products there are a number of systems currently in development.

TiNi

The use of shape memory alloys is being developed by David Johnson at TiNi, California. He is using the alloy Nitinol to produce fingertip buzzers which can be sewn into gloves. The alloy assumes whatever shape it is cast into but can then be reshaped; if the alloy is electrically stimulated, however, it reverts to the shape in which it was cast. The buzzers consist of a small grid of Nitinol balls, and a prototype system has already been used by NASA for its "Super Cockpit" tactile gloves.

MITI Labs, Tokyo

At the MITI Labs in Tokyo, Dr Tachi and his research team are currently developing a robotic guide for the blind. The robot conveys information about its surroundings to the user by means of electrocutaneous stimulation. Small electrodes are placed on the user's skin, through which low voltage pulses of electricity are applied. By varying the frequency and voltage of the pulses a variety of different sensations can be achieved, which the user can learn to discriminate between.

However, none of the above systems offers the user a true tactile simulation of the 'virtual' object they are interacting with. At most they offer some form of sensory indication that an object has been touched, with the stimulation bearing no relationship to the tactile characteristics of the object itself.

THE 'TELETOUCH' PROJECT

The purpose of the TeleTouch Project is to investigate the requirements for transmitting sensory information via networks. The first stage of the project is to build a device that can provide the user with a rich sense of tactile stimulation by simulating the tactile characteristics of a material. The main emphasis of the research being carried out is not, however, on the technologies involved, although state of the art technology is being utilised, but primarily on the user. The reason for this is twofold. Firstly, we believe that the philosophy currently prevalent within industry is one of fitting the user to the technology, whereas we believe that the technology should be designed around the user. Secondly, by adopting this user-first attitude we have a benchmark at which to aim: by studying the processes involved in the sense of touch, and deriving the sensory ranges and thresholds, we obtain a set of metrics against which we can test our designs. To achieve this it is necessary to understand the physiological and neurological processes that appear to comprise the sense of touch.

The skin is a major source of sensory information to the central nervous system, providing information about both the local environment surrounding the body, for example by touch, and the remote environment, such as heat radiating from an object. Although it is, from an evolutionary viewpoint, the oldest sensory tissue, the exact mechanisms by which the skin senses environmental conditions are still unknown.

It is agreed, however, that the main sensations measured by the skin are pressure, temperature and pain. The skin contains a variety of receptors, each of which detects certain sensations: there are at least fifteen functionally distinct sensory units, each optimised to detect specific environmental conditions. Different parts of the body's surface contain varying numbers and types of receptors, and hence their sensitivity varies as well. The most sensitive parts of the body are the lips, tongue and fingertips, and the least sensitive the back. For the purposes of the TeleTouch Project we are disregarding sensations of pain and concentrating simply on temperature and pressure.

Temperature

Temperature is measured by a group of sensory organs known as thermoreceptors. The receptive field of a single thermoreceptor usually covers an area just under 1mm across. There are two types of thermoreceptors found in the skin, known as 'cold' and 'warm' receptors. Both produce a continuous stream of nerve impulses at a given constant temperature. However, if the temperature rises the 'warm' receptors increase their firing rate whilst the 'cold' receptors decrease theirs, and vice-versa if the temperature falls. Their sensitivity to changes in temperature is high, enabling the detection both of slow changes, less than 1°C in 30 seconds, and of changes as small as 0.05°C. Although both receptors have an optimal temperature range where their activity is greatest, 25-30°C for 'cold' receptors and 40-42°C for 'warm' receptors, the most important stimulus influencing their activity is whether the temperature is constant. This means, for example, that a 'cold' receptor may become inactive as the temperature rises through a thermal range that would cause it to be active if the temperature were constant.
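As a toy illustration of that behaviour (the numbers are invented; this is merely a restatement of the description above, not a physiological model), each receptor type can be written as a base rate plus a term driven by the rate of temperature change:

    /* Toy firing-rate model of the thermoreceptors described above:
       'base' is the steady rate at constant temperature and 'dT' the
       current rate of temperature change.  The gain of 50 impulses per
       degree is invented for illustration. */
    static float clamp0(float r) { return r < 0.0f ? 0.0f : r; }

    float warm_rate(float base, float dT) { return clamp0(base + 50.0f * dT); }
    float cold_rate(float base, float dT) { return clamp0(base - 50.0f * dT); }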

Pressure

The detection of pressure upon the skin involves a more complex combination of sense organs, known as mechanoreceptors, than is used to detect temperature. There are two main groups of mechanoreceptors, rapidly adapting (RA) and slow adapting (SA).

Rapidly Adapting Mechanoreceptors
RA mechanoreceptors are triggered by movement of the skin. When the skin is moved, for example by touching a surface, they fire a series of nerve impulses. The frequency of the impulses is determined by the velocity of the skin's movement: the faster the movement, the higher the frequency of nerve impulses. There are three main types of RA mechanoreceptor, each responding to a different, though overlapping, frequency range. Table 1 shows the frequency ranges of the different mechanoreceptors.

Table 1:

The most sensitive of these receptors is the Pacinian corpuscle, which detects mechanical vibrations recurring at high frequencies. The Pacinian corpuscle has a threshold of 1µm in its most sensitive frequency range, between 200-400Hz, although this rises to 75µm at the extremes of its frequency range.

One further type of RA mechanoreceptor is found in the skin's hair follicles and is triggered by movement of the hair. The RA mechanoreceptors do not, however, detect constant pressure upon the skin: as soon as the skin movement has stopped they cease firing, even though the skin may still be displaced.

Slow Adapting Mechanoreceptors
The SA mechanoreceptors are unlike the RA mechanoreceptors in that, although they also detect movement of the skin, their nerve impulses continue while the skin remains displaced after movement has stopped. They are stimulated in a similar manner to the RA mechanoreceptors during the movement of the skin, but once the movement has ceased their impulse rate decreases until it reaches a frequency that is proportional to the indentation of the skin. There are two types of SA mechanoreceptor, known as SA I and SA II. They differ in the time they take to adapt to stimuli, with SA I being the quicker. One interesting feature of SA mechanoreceptors is that their response to a steadily maintained deformation of the skin can be enhanced by cooling the skin, although the effect only lasts a few seconds.
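Purely as a sketch (the gains are invented), the two groups can be caricatured in the same style: the RA rate depends only on how fast the skin is moving, while the SA rate settles to a level proportional to the sustained indentation once movement stops:

    #include <math.h>

    /* Rapidly adapting: fires only while the skin moves, at a frequency
       set by the velocity of movement; zero once movement stops. */
    float ra_rate(float velocity)
    {
        return 100.0f * fabsf(velocity);
    }

    /* Slowly adapting: driven by movement in a similar manner, but
       settling to a rate proportional to the remaining indentation. */
    float sa_rate(float velocity, float indent)
    {
        return 50.0f * fabsf(velocity) + 20.0f * indent;
    }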

The sense of touch is, however, not simply created by these receptors. The ability to 'feel' different surface textures appears to be the result of the central nervous system's ability to combine the various nerve impulses and derive a complex cutaneous pattern or 'touch blend', for example 'stickiness' or 'oiliness'. The combination of pressure, yielding the surface texture, and temperature can provide a multitude of different 'touch blends'.

The 'touch blend' of a surface is determined through a spatio-temporal pattern of sensory excitation. This means that although it is possible to determine whether a surface is hard or soft simply by resting a fingertip on it, to determine other characteristics such as whether the surface is rough or smooth requires movement of the skin relative to the surface. The actual amount of relative movement need only be very slight to elicit a correct identification of a surface. In a number of experiments a brief tap, of only 1/300th of a second, has been sufficient to differentiate between wood, metal and plastic. It has also been shown that the fingertips can differentiate between a totally flat surface and one that has raised 'bumps' of 0.001mm.

The TeleTouch Prototype

The building of a device with the ability to stimulate the skin, and provide a 'virtual' sense of touch that is as rich as the natural environment, is no easy task. There are, however, some features of the touch system, as detailed in the previous section, that are encouraging when considering the development of such a device. The fact that complex 'touch blends' can be derived simply from pressure and temperature holds the promise that a tactile simulator can be built, in an analogous way to how we are able to simulate visual images using only red, green and blue signals. As stated earlier, the first goal of the TeleTouch Project is to produce a device that can simulate the tactile characteristics of any material. This is not a short term undertaking, and there are a number of engineering and design problems that must be overcome. Although we do not envisage this goal being met within the next decade, we are currently designing and building a prototype system that will be configurable across a range of textile materials.

Currently, textile buyers may remotely view the material being offered, using a video conferencing system. They still need to actually 'feel' the material before they make a purchase decision, and this involves either a sample of material being sent to the purchaser or the purchaser actually travelling to the material. Our first goal is to provide a device that can simulate the tactile characteristics of the material being viewed so that the whole purchasing process may be carried out remotely. The initial prototype will provide an active 'pad' that can be programmed to replicate the surface characteristics of a range of textiles. The user will be able to run their fingertips across the top of the pad and 'feel' the textile they are viewing. By constant refinement of the prototypes we hope to produce an active material that can be held between the thumb and index finger and which can be programmed to replicate any material. Although the project is currently geared towards Telepresence the implications for Virtual Reality are self evident.

With the emergence of true haptic perception in our Virtual Environments, VR will become far more complete. One major advantage is that users will be able to interact with their virtual worlds in a truly natural manner, using the dextrous skills they have developed naturally. As well as this, the ability to simulate a realistic sense of touch will allow VR to support a far wider range of applications, especially in the field of Telepresence. One thing is for sure, VR systems will not be truly complete until haptic perception is fully included. But why stop there? There are still the senses of smell and taste to include, before we can truly model reality!


Guided Tours Using REND386

Dr. William Mitchell & Dovber Uhrmasher

Department of Computing, Manchester Metropolitan University John Dalton Building, Chester Street, Manchester M1 5GD email: billy@sun.com.mmu.ac.uk

1. Introduction

This paper describes an ongoing project which uses REND386 to provide tours of the Computing Department at Manchester Metropolitan University. The immediate aim of the project is to provide a system which can be used to tour the department during the annual openday at the university.

The primary aim of the project was to establish work in the department in the field of Virtual Reality. As with any new area, several problems were faced at the outset which shaped the course of the project.

There were no funds available to purchase either software or hardware devices. This ruled out specialised environments such as World Toolkit and Superscape. It was thus decided to make use of REND386, which was available at no cost [Stampe 1993]. In addition to being free of charge, REND386 also had the advantage of running on a platform which was already available in the department (a 486 IBM PC-compatible). REND386 is a collection of C routines which can be used to construct simple virtual worlds. It supports a range of low-cost devices, and a number of user support groups exist.

The choice of application was the next important decision. Two factors were influential. Firstly, the application should be realisable bearing in mind the limits of the development environment. Secondly, the application should be sufficiently open-ended and challenging to provide a testbed for exploring many concepts in VR, as well as a platform for future projects in the area.

Aside from the problem of cost, a second problem was scepticism from other members of the department with regard to the VR field. This problem was, in fact, to provide the idea for an application.

One ordeal faced by the Faculty of Science and Engineering is the annual openday in which school parties and members of the general public take a look behind the scenes at the work that goes on at the institution. This acts as an important promotional exercise and helps recruitment of potential students.

The openday consists of a series of stalls and exhibits put on by each department in a central hall in the Faculty building. Visitors can then make their way from the hall to various departments within the Faculty. A notable problem at the previous openday was the lack of attention-grabbing exhibits from the Computing Department. With these issues in mind it was decided that a suitable application would be a virtual tour of the Faculty building. This would allow the public to follow a virtual tour before physically making their own way towards the Computing department.

2. Designing a virtual building

The area of VR with which this project has most in common is computer-assisted architectural design [Ferrante 1991], [Emmett 1992], [Novitski 1993b]. The work in this field can be sub-divided into 3 main areas depending on whether the virtual building being designed is:

- a model of a building which is going to be physically constructed in the future

- a reconstruction of a building from the past

- a model of a building existing in the present.

The first area of VR work has its roots in the field of Computer-Aided Design (CAD). The building being constructed is a future structure which has no physical existence in the present. The purpose of constructing a virtual building is to provide a prototype which the client can then inspect and comment upon.

This provides an easier way for the client to visualise the planned building than traditional methods such as blueprints and building scale models. In addition an electronic prototype can be easily restyled according to the client's wishes. This has the overall effect of allowing the architect to more closely fulfill the wishes of the client. An additional spinoff is the competitive edge such a virtual prototype can provide when the architect is trying to win a design bid.

A second area of work is in the reconstruction of buildings which existed in the past [Novitski 1993a]. An example is the reconstruction of Shakespeare's Globe Theatre [MacRae 1994]. Such worlds allow the testing of archaeological and historical theories and also serve to make a past world come alive for the general public.

The third area involves the construction of a world based on an existing building. The work described in this paper belongs to this third area. One notable feature of this project is that the end user will typically be a member of the public rather than a specialised user such as a client, architect, or historian. In addition, as the virtual building has a real-world equivalent, the end-user will have access to both models. It is important that the user be able to easily relate the virtual building to its real world equivalent.

The emphasis of the project is thus on providing sufficient detail in the virtual building so that it appears realistic: firstly on getting the layout and measurements of the building correct, and secondly on providing appropriate surface details. Initially, it was decided to simplify matters by not allowing the user to redesign or alter the world.

3. Designing the virtual department

Three main stages can be identified in constructing the virtual department. Firstly, it is important to get the layout of the building correct. Secondly, it is important to provide sufficient surface details so that the user can recognise the building. Thirdly, there is the issue of how to allow the user to interact with the world and navigate around the building.

3.1 Designing the shell of the building

The layout of an already existing building can be determined by walking around the building and taking measurements. Fortunately, an easier solution was found by obtaining the original blueprints for the Faculty building and using these as the basis for the shell of the building. This ensured that the location of rooms, walls and other details were accurate.

3.2 Providing surface details for the building

As explained above, attention to surface detail is an important factor in constructing the world. It is therefore important that these details provide cues for when the user later navigates the actual building. One simplification made is that the virtual building has only been decorated with surface details along the routes that an end user is likely to follow from the central hall to the Computing department. The cues can be divided into three categories:

- internal to the building

- external to the building

- meta cues which are not present in the real world.

One obvious cue is the use of room numbers and signs. Unfortunately, REND386 does not support the use of three-dimensional text. It is thus necessary to construct letters just like any other object in REND386. This may appear unrealistic, but the importance of room numbers and signs as a commonly used cue demanded that they still be included (see Figure 1).

Another important cue is the presence of certain types of furniture which act as landmarks. Examples include doors and student lockers. The use of these cues is limited as each requires the addition of at least one object to the world. As will be seen in section 4, the more objects that are added the greater the performance overhead.

One feature of the Faculty building is the use of colour coding of walls and carpets on different floors. This has also been used as a cue in the virtual building but its effect is limited as the range of colours in REND386 does not match the subtlety of the actual colours present in the building.

The Faculty building is situated near Manchester city centre. There are thus several landmarks which are external to the building but which provide cues when the user is outside the building, or inside and looking out of a window. For example, the A57(M) (the Mancunian Way) runs along the south of the site. This provides an effective landmark, particularly as an animated vehicle drives along the road (see Figure 2). To the east of the building are two other notable landmarks - the BBC and the National Computer Centre (NCC). The logos of both organisations have been incorporated into buildings which can be seen from the Faculty building (see Figure 3).

Due to performance overheads it is impossible to incorporate all internal and external landmarks. An alternative is to provide meta cues which have no equivalent in the real building. For example, certain rooms have been equipped with sensor areas. When the user passes over such an area a message is displayed in a popup window containing information about the room. This has the dual benefit of reducing the need to provide full decoration of the room and of providing information which the user might not have grasped even if the room had been fully decorated.
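In implementation terms a sensor area amounts to a bounding-box test against the user's position each frame. The sketch below shows the idea; all of the names are invented, and show_popup stands for whatever display routine the system provides rather than a REND386 call:

    /* Hypothetical sensor area: an axis-aligned patch of floor which
       triggers an information panel when the viewpoint enters it. */
    extern void show_popup(const char *message);    /* assumed display routine */

    struct sensor {
        float xmin, xmax, zmin, zmax;   /* floor area covered */
        const char *message;            /* text for the popup window */
        int inside;                     /* was the user inside last frame? */
    };

    void check_sensor(struct sensor *s, float x, float z)
    {
        int now = x >= s->xmin && x <= s->xmax &&
                  z >= s->zmin && z <= s->zmax;
        if (now && !s->inside)          /* fire once, on entry */
            show_popup(s->message);
        s->inside = now;
    }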

A second meta cue that can be provided is a spoken commentary. This can be activated in a similar way to the information panels by pressing a button. These meta cues, whilst not having an equivalent in the real building are the equivalent of features of a real tour. The information panels correspond to the use of a guide book and the narration corresponds to what might be provided by a human tour guide.

3.3 Interaction

Aside from the appearance of the building, an important factor in the success of a tour is the methods by which the user can navigate around the building and interact with the world [Larijani 1994]. As the system is likely to be used by members of the public with varying levels of computer experience and willingness to use the system, it is proposed that two main types of tour be provided.

The first is a pre-defined tour of the building. The user is taken from the central hall to the Computing department. Stops are made along the way to show users rooms of interest (e.g. laboratories and offices). This can be seen as the equivalent of the infamous package tour.

For the more adventurous user, a second option is available which allows the user to make their own way through the world. Position points (central hall, main entrance, Computing department office) have been set up that the user can return to if they should get lost.

A specialised navigation device provided is the lift. This is activated by the user entering it. At present this is limited to either moving up or down between two floors.

Due to the number of visitors anticipated, it was decided to use desktop rather than fully immersive VR; a head-mounted display is thus not supported. In addition, cost ruled out specialised navigation devices such as treadmills, which are often used for walk-throughs. Instead the user is constrained to using the keyboard or mouse to navigate. The user's interaction with the world is limited to activating information panels by passing over specified areas, so there was no need for a data glove.

4. Implementation problems

Implementation problems can be sub-divided into 5 main areas: visibility, memory, speed, sound and modularity.

4.1 Visibility

One visibility problem encountered with REND386 is incorrect hidden surface removal. This is manifested as walls which are behind other walls being displayed. This is due to the depth sorting technique used by REND386 - the Painter's Algorithm [Stampe 1993].

REND386 uses the painter's algorithm when rendering. This means that surfaces are sorted in decreasing depth and drawn in that order, like a painter who first paints the background and then builds up the picture by painting the foreground over it. This can be illustrated by referring to Figure 4. REND386 first renders Wall A, as it contains the point furthest from the viewer along the z-axis. Ball B is then rendered, followed by Wall C. This is incorrect, as the final view will have Ball B incorrectly positioned in front of Wall A.
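In outline the painter's algorithm is simply a sort on depth followed by back-to-front drawing. A minimal sketch, with invented types rather than REND386's internal structures:

    #include <stdlib.h>

    struct poly { float depth; /* furthest z of the polygon's points */ };

    extern void draw_poly(const struct poly *p);    /* assumed rasteriser */

    static int by_depth_desc(const void *a, const void *b)
    {
        float da = ((const struct poly *)a)->depth;
        float db = ((const struct poly *)b)->depth;
        return (da < db) - (da > db);   /* deepest (largest z) first */
    }

    void paint(struct poly *polys, size_t n)
    {
        size_t i;
        qsort(polys, n, sizeof polys[0], by_depth_desc);
        for (i = 0; i < n; i++)
            draw_poly(&polys[i]);       /* nearer polygons overwrite deeper ones */
    }

As Figure 4 illustrates, sorting whole polygons by a single depth value is exactly what goes wrong when one surface spans a large range of depths.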

REND386 uses an extension of the painter's algorithm, in the form of the binary space partition (BSP) tree, to sort out its visibility problems. The tree is built from non-visible splitting planes that divide the world into areas. These splitting planes need to be declared manually. They can run in the x, y or z direction, or any combination in between. Splits run in the direction they are declared from infinity to infinity, unless they are stopped by a pre-declared split. REND386 then looks at a split and renders anything behind it; next it renders what lies on the split itself; lastly it renders what is in front. Not only is this a good way of solving the visibility problems, but it also divides the world into areas.
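The rendering order just described - behind the split, then on the split, then in front, always relative to the viewer - is the classic recursive BSP traversal. A sketch with invented types:

    struct poly_list;                              /* polygons, details omitted */
    extern void draw_list(struct poly_list *l);    /* assumed rasteriser */

    struct plane { float a, b, c, d; };            /* ax + by + cz + d = 0 */

    struct bsp_node {
        struct plane split;
        struct poly_list *on;                      /* polygons lying in the plane */
        struct bsp_node *front, *back;
    };

    /* Draw far-to-near relative to the eye: everything on the far side
       of the split first, then the split's own polygons, then the near side. */
    void render_bsp(const struct bsp_node *n, float ex, float ey, float ez)
    {
        float side;
        if (n == NULL)
            return;
        side = n->split.a * ex + n->split.b * ey + n->split.c * ez + n->split.d;
        if (side >= 0.0f) {                        /* eye in front of the plane */
            render_bsp(n->back, ex, ey, ez);
            draw_list(n->on);
            render_bsp(n->front, ex, ey, ez);
        } else {
            render_bsp(n->front, ex, ey, ez);
            draw_list(n->on);
            render_bsp(n->back, ex, ey, ez);
        }
    }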

The solution to the visibility problems was therefore to define splitting planes along the walls of the building. A problem remains when a splitting plane cuts through a previously defined splitting plane. The solution to this further problem was to divide the wall into two parts: a splitting plane is defined on each side of the original plane, and two separate walls are defined that run up to the plane but not through it. The same method copes with more complicated cases, where three or more splitting planes cut through a single wall. This increases the number of polygons in the world, but it is a trade-off between some very bad visibility errors and a slight decrease in speed.

To summarise, the main visibility problems stemmed from the fact that splitting planes had to be defined manually rather than being generated automatically by the package.

4.2 Memory

REND386 makes use only of conventional memory, i.e. the first 640K. Each world is also restricted to 1200 polygons (probably to prevent memory overloads), a limit that was soon reached. A wall with one doorway, for example, uses 3 polygons plus a door, and the world contains over 200 splitting planes each holding an average of 8 or 9 polygons; at well over 1600 polygons in total, it is not difficult to see how the 1200 polygon quota was exceeded.

One solution was to reduce the size of the world to fit the memory allocation, but this was inappropriate as it would reduce the level of detail that could be displayed. Another was to make use of more of the (typically) 8MB of memory available in the machine. This entails using an extended memory manager which, without going into too much detail (see section 4.6), allows objects and polygons to be stored in extended memory.

4.3 Speed

REND386 is a very fast rendering package considering it sits at the low end of the market; it is capable of up to 120 frames per second (fps) depending on the world used. Even with the large world contained in the virtual tour, REND386 still manages between 1 and 12 fps. On a 486SX the rate can drop to 1 fps; on a 486DX it goes no lower than 3 fps. No tests have yet been made with a 486DX2 or Pentium, but it is assumed that the speed will improve dramatically on these machines.

4.4 Sound

REND386 does not provide built-in support for sound, which would be necessary for narration (see section 3.2); this is due to the large range of sound cards available. The system is currently being developed so that REND386 can be used with a Soundblaster card to provide digitized sound.
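As an indication of what such support involves, the sketch below drives a Soundblaster's DSP directly at its default base port 0x220, using the direct 8-bit DAC command; proper sample pacing, error handling and card detection are omitted. It assumes a DOS compiler providing inportb()/outportb() (e.g. Borland C) and is not the project's actual sound code.

    #include <dos.h>

    #define SB_BASE   0x220               /* assumed factory default */
    #define SB_RESET  (SB_BASE + 0x6)
    #define SB_READ   (SB_BASE + 0xA)
    #define SB_WRITE  (SB_BASE + 0xC)
    #define SB_STATUS (SB_BASE + 0xE)

    static int sb_reset(void)             /* returns 1 if a DSP answered */
    {
        int i;
        outportb(SB_RESET, 1);
        for (i = 0; i < 100; i++) ;       /* crude settling delay */
        outportb(SB_RESET, 0);
        for (i = 0; i < 1000; i++)        /* DSP replies 0xAA when ready */
            if ((inportb(SB_STATUS) & 0x80) && inportb(SB_READ) == 0xAA)
                return 1;
        return 0;
    }

    static void sb_write(unsigned char v)
    {
        while (inportb(SB_WRITE) & 0x80)  /* wait until DSP accepts data */
            ;
        outportb(SB_WRITE, v);
    }

    /* play raw unsigned 8-bit samples by feeding the DAC one by one */
    void play_sample(const unsigned char *buf, unsigned len)
    {
        unsigned i;
        sb_write(0xD1);                   /* speaker on */
        for (i = 0; i < len; i++) {
            sb_write(0x10);               /* direct DAC output command */
            sb_write(buf[i]);
            /* a real player would pace samples with a timer here */
        }
        sb_write(0xD3);                   /* speaker off */
    }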

4.5 Modularity

Modularity problems divide into two main areas. At a low level there is no ability to define named procedures of code; the prime example of this is in the animation sequences. The closest that REND386 comes to procedures is the ability to define objects as .plg files. An object need only be implemented once and stored in a file; whenever the file is called, the object is placed in the virtual world, and on each call its coordinates, size and colour scheme can be altered.
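For illustration, a .plg object might look something like the following sketch of a single square panel: a header giving the object name, vertex count and polygon count, then one line per vertex (x y z), then one line per polygon (a colour code, the number of vertices, and the vertex indices). The colour value here is only a placeholder; the actual surface codes, including shaded, metal and glass effects, are documented in [Stampe 1993].

    PANEL 4 1
    0   0   0
    100 0   0
    100 100 0
    0   100 0
    15 4 0 1 2 3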

At a slightly higher level it would be useful to define the world in a modular fashion, so that a world could be structured as a hierarchy of spaces. Again, this is provided to a limited extent by defining areas in terms of the splitting planes that enclose them.

Defining these spaces would avoid rendering the entire world: only the current space need be rendered. In addition, properties could be associated with spaces so that, for example, narration is automatically triggered when the user enters a room, as in the sketch below.
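A minimal version of such space-triggered properties, assuming each space is reduced to an axis-aligned box in the floor plan, might look as follows. The Space type, check_spaces() and play_narration() (a hypothetical hook into the sound playback of section 4.4) are all illustrative names, not part of REND386.

    typedef struct { long x, y, z; } Point;

    typedef struct {
        long x0, x1, z0, z1;        /* floor-plan extent of the space  */
        const char *narration;      /* sound file to trigger on entry  */
    } Space;

    void play_narration(const char *file);   /* hypothetical, see 4.4 */

    /* call once per frame; triggers narration on entering a new space */
    void check_spaces(Point viewer, Space *spaces, int n)
    {
        static int current = -1;    /* space the viewer was last in */
        int i;

        for (i = 0; i < n; i++) {
            Space *s = &spaces[i];
            if (viewer.x >= s->x0 && viewer.x <= s->x1 &&
                viewer.z >= s->z0 && viewer.z <= s->z1) {
                if (i != current) {             /* just entered space i */
                    current = i;
                    play_narration(s->narration);
                }
                return;
            }
        }
        current = -1;               /* viewer is in no declared space */
    }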

4.6 A possible solution: VR-386

One possible solution to some of the implementation problems identified has come with the appearance of VR-386 [Stampe 1994]. VR-386 is a descendant of REND386 and is also available free of charge.

The first important difference is that VR-386 supports extended memory management, with up to 4 megabytes available. This removes REND386's old limit of 1200 polygons.

Secondly, a new but as yet unimplemented feature is the ability to load new worlds into the program as it is running. One of the main problems with REND386 at present is that the whole world is stored in memory; as the world grows, the frame rate drops to the point where 1 fps is the norm, which is unacceptable. To remedy this, VR-386 will allow the world to be sub-divided into mini-worlds, with a new mini-world loaded in at specific points (e.g. on entering a room); a conceptual sketch follows below.

Thirdly, extensions to VR-386 can be accommodated through an Application Programmer's Interface (API). This will make it easier for programmers to add units and modules without having to access the complicated underlying data structures directly.
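The intended mini-world behaviour can be sketched conceptually: when the viewer reaches a declared load point, the current mini-world is discarded and its replacement read from disk. free_world() and load_world() are purely hypothetical stand-ins, since VR-386 does not yet expose such calls.

    #include <stdlib.h>

    typedef struct {
        long x, z;                  /* load point, e.g. a doorway */
        const char *world_file;     /* mini-world to bring in     */
    } LoadPoint;

    void free_world(void);          /* hypothetical: drop current world */
    int  load_world(const char *f); /* hypothetical: parse a world file */

    void check_load_points(long x, long z, LoadPoint *pts, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            if (labs(x - pts[i].x) < 100 && labs(z - pts[i].z) < 100) {
                free_world();
                load_world(pts[i].world_file);
                return;
            }
    }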

So far VR-386 has been used mainly to view the world constructed with REND386; it has not yet been evaluated with regard to the capabilities it provides for code development.

5. Conclusions

Overall the project has achieved the aims outlined in section 1 and will provide the basis for future work. In particular it would be useful to divide the building into separate areas; this would have technical implications (possibly improved performance) as well as design implications (the treatment of logically separated spaces).

One way of improving the realism of the world would be to provide a low-level equivalent of texture mapping, which allows an object to be covered with a synthetic or real image rather than a single colour; the walls of the virtual building could, for example, be covered with scanned images of painted breeze blocks. Texture mapping is computationally expensive, so REND386 does not support it. REND386 does provide a good range of colours, and attributes such as glass, metal, shaded and fixed colour can be defined, but to create something like a brick effect would entail declaring every brick separately, which in a system short of memory would be unacceptable.

It would also be desirable to store the state of the world, for example allowing a user to change the layout of furniture in the building and have that layout preserved. A related extension would be to let the user query the system for the best path between two points in the building and have the system take them along it, as in the sketch below. Both would entail using REND386 as a display front-end to an underlying application program.
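One way such a query could work, assuming the building is reduced to a graph of rooms connected by doorways, is a breadth-first search giving the route through the fewest rooms. This is an illustrative sketch of the technique, not part of the current system; the adjacency table and limits are assumptions.

    #include <string.h>

    #define MAX_ROOMS 64

    int adj[MAX_ROOMS][MAX_ROOMS];  /* adj[a][b] = 1 if a doorway links a and b */
    int n_rooms;

    /* fills path[] (in reverse, 'to' first) with room indices;
       returns the path length, or 0 if no route exists */
    int best_path(int from, int to, int path[MAX_ROOMS])
    {
        int queue[MAX_ROOMS], prev[MAX_ROOMS];
        int head = 0, tail = 0, i, len = 0;

        memset(prev, -1, sizeof(prev));
        queue[tail++] = from;
        prev[from] = from;

        while (head < tail) {           /* breadth-first sweep */
            int r = queue[head++];
            if (r == to)
                break;
            for (i = 0; i < n_rooms; i++)
                if (adj[r][i] && prev[i] == -1) {
                    prev[i] = r;
                    queue[tail++] = i;
                }
        }
        if (prev[to] == -1)
            return 0;                   /* destination unreachable */

        for (i = to; i != from; i = prev[i])
            path[len++] = i;            /* walk back towards the start */
        path[len++] = from;
        return len;
    }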

Another area of planned development is to investigate the networking of the tours. This would allow a user to follow the path of another user on a remote machine who is familiar with the building.

It remains to be seen how successful the system is at the open day and what effect this might have on the opinions of other members of the department towards virtual reality.

Bibliography

[Emmett 1992] A. Emmett, "A New Dynamism", Computer Graphics World, Vol. 15, No. 6, June 1992, pp. 30-38.

[Ferrante 1991] A.J. Ferrante, L.F.R. Moreira, J.M. Boggio Videla, and A. Montagu, Computer Graphics for Engineers and Architects, Elsevier Science Publishers BV, Amsterdam, The Netherlands, 1991, 282 pp.

[Larijani 1994] L. Casey Larijani, The Virtual Reality Primer, McGraw-Hill, 1994, 274 pp.

[MacRae 1994] A. MacRae, "The Virtual Globe Theatre", in VR'94 London Virtual Reality Expo 1994, Proceedings of the 4th Annual Conference on Virtual Reality, February 1994, London, Mecklermedia, 1994, pp. 110-114.

[Novitiski 1993a] B.J. Novitski, "Visiting Lost Cities", Computer Graphics World, Vol. 16, No. 1, January 1993, pp. 48-55.

[Novitiski 1993b] B.J. Novitski, "Constructive Communication", Computer Graphics World, Vol. 16, No. 6, June 1993, pp. 35-38.

[Stampe 1993] D. Stampe, B. Roehl, and J. Eagan, Virtual Reality Creations, Waite Group Press, Corte Madera, California, 1993, 591 pp.

[Stampe 1994] D. Stampe, "VR-386", PCVR, No. 14, March/April 1994, pp. 37-40.