
2.3 VISUAL DISPLAYS

In a paper describing the problems of ambiguity in message design, Chapanis (1965) gave an example of a sign that appeared next to a lift in a Baltimore department store. The message was as follows:

For improved elevator service

Walk up one floor or down two

The problem with this message was that, whilst some people interpreted it correctly, others spent a lot of time walking up one floor, or down two, and then trying to call the lift from there. Which category would you fall into? What the message writer actually intended was, ‘for a better lift service for everyone else, if you are only going up one floor or down one or two floors, then walk’. This plea for good citizenship on the part of Baltimore shoppers largely failed in its desired effect. This was not only because of people’s reluctance to walk rather than use the lift, but also because the message did not get across in an unambiguous way.

A wide range of technology is used with visual displays. If we leave aside the hand-printed and handwritten signs that we find in everyday life, the printed media are very much to the fore. As well as printed documentation, there is colour printing and the associated processes used to produce signs in buildings, on roads and in other contexts. Related to this are the various labels and packaging associated with everyday and work objects.

Then there are devices such as lights and illuminated signs, used either as part of a machine interface or to enhance a sign that must be readable in all visibility conditions. There are displays that are specifically designed to convey changing parameter states to us. These are the dials and screens familiar from everyday objects such as cars, or depicted in films and television programmes about complex systems such as aircraft control cabins. Very few people will be familiar with the instrument arrays used in an aircraft cockpit, but most will have seen them illustrated. A common impression is that they are highly complex environments with a wide range of dials, lights and, nowadays, computer screens. Increasingly, the classic ergonomics of lights and dials has given way to concerns with computer screen technology. This latter technology either mimics these devices in a more flexible format or presents information in new ways. The ever-present computer screen has found its way not only into offices but also into control rooms and vehicle control stations such as cockpits and ships’ bridges. Early computer displays were rudimentary, with letters and numbers presented in very simple forms. Current display technology enables high-quality display of a whole range of variables. Such displays may also represent the real world, either from recorded sources or directly, as with closed-circuit television pictures.

Given the variety of media that have emerged to convey information, it would be foolhardy to predict what might be possible in the future. The important thing for ergonomists is to establish good principles of display design that transcend particular forms of technology. General principles of display design can then be translated into specific principles within a particular medium. To take a simple example from the early days of printing, the maximum contrast that could be obtained from the printed word involved using black symbols or letters on white paper. There would always have been some variation in this depending on ink and paper quality; only in recent times has very white paper been available for the process, and the darkness of the print was similarly limited by contemporary knowledge of dyes and inks. This almost universal principle could not be applied in the early days of computer screen technology. Initially, it was much easier to present the lettering as a bright source against a dark screen, a reversal of the normal dark-on-light principle. Added to this was the difficulty that the lettering was usually light green and the background dark green.

Current computer screen technology has re-established the principle of dark on light. The lesson that still has to be learned is that computer screens, for a number of reasons, are not an ideal reading medium; using a computer screen to mimic a book has not been particularly successful. In addition, the contrast principle has yet to be applied consistently to the enormous variety of colours that can now be produced on computer screens. Many people designing their web pages choose colours that they think go well together, but very often these combinations make text hard to read. This is not to say that web pages and computer screens cannot do many things differently and better than standard printed work; the question is what they can do better, and under what conditions. It is striking that the printed word not only remains very much with us, but also remains very little altered in terms of display characteristics from the earliest handwritten books.

2.3.1 Visual display technology

As discussed above, the range of technology available for the visual display of information is very broad. The use of a form of technology in a particular context will depend upon such factors as cost, compatibility with existing hardware, preferences and, most importantly of all, the appropriateness of the technology for the particular visual display problem. Many of these choices will become self-evident as a project develops, driven by technology factors such as hardware/software compatibility, preferences and suitability for the task in question. In certain applications, where two technology solutions are equal in their technical implications, the choice of one over the other may be determined by human factors issues. Similarly, duplication of function is also a possibility, depending upon the nature of the task demands. For example, a digital clock enables a very accurate reading of the time, whereas an analogue clock, with hands and numbers, enables quick estimates of the time remaining before a particular timed event. For one or other of these time-related tasks, one or other type of timepiece will be optimal; if both tasks are commonly performed, it would be better to display both functions on the same timepiece.



In some control room situations, the need to display historical information as well as current values is another example of where two forms of the same information may be presented. As display technologies are constantly evolving, it is better to concentrate on general principles for display design rather than to try to specify what is best in one application; for example, is a dial or a warning light best? As many traditional display functions are now transferring to computer screen representations, many of the decisions about hardwired instruments are no longer relevant. However, some interesting questions do arise with the new technology. For example, should there be efforts to mimic familiar displays such as lights and dials, or should new ways of representing parameters on the computer display be explored? From the above discussion it should be apparent that the different types of information processing generated by tasks, as described by Easterby (1984), will be an important influence on the type of display chosen for a particular function. As an example, if a needle on a dial reaches a danger value, then bringing that information to attention requires a certain set of processes on the part of the individual. Providing an additional warning light is another way of alerting the user, and a warning sound is yet another way of drawing attention to a changed state of a variable. Auditory displays will be discussed later, but in making a choice about the media to be used for a particular parameter, the options should not be confined to a single modality.



Figure 2.1. Three display configurations that communicate the temperature of four instruments; how fast can you find the instrument displaying an extreme value? In Panel A, hot and cold are represented at opposite ends of a left-to-right scale and a dial is used to indicate the current state of each instrument. In Panel B, hot and cold have been transposed to a single linear scale with extreme values displayed on the right. In Panel C, a computer display has been used to generate a histogram depicting the temperature of each instrument on a transposed vertical scale. Panels B and C use different technology to exploit the way the visual system groups objects with similar values. In Panel B, the leftmost dial attracts attention because it is the odd one out in the display in terms of orientation. In Panel C, the extreme value also pops out because the other values are collinear.

2.3.2 Factors influencing visual display effectiveness

Physical location

The positioning of a visual display is of key importance in determining its effectiveness. The ‘textbook’ example of the ‘user interface’ may conjure up an image of an individual seated in front of a computer screen. Under these circumstances, finding an optimal location for the display should not present a major problem. However, a display sited outside the main field of view, or one that requires head or body movements to see, illustrates the wide variety of physical locations that a display can occupy. Knowledge of the way visual acuity drops off as an object's retinal location moves from the fovea to the periphery can be used to site displays according to the relative importance of the information conveyed, the size of the informational content and the time course of the signal. In the present context, the assumption is that the physical location of the display is optimised for the purpose for which it is required. If a user has to make frequent reference to the current time, then positioning the clock within their normal field of view is self-evidently sensible; for a user who only occasionally needs the current time, a more peripheral location can be used.

Display arrangements

The clock example used above is not a good one when it comes to thinking about the arrangement of multiple displays. In many advanced applications, a number of displays will need to be grouped in close proximity to each other, as with motor-car instruments, aircraft instruments and the like. Similarly, items appearing on a computer screen for a set of tasks will all need to fall within the general field of view of the user while remaining clearly distinguishable from each other. Examination of some display configurations may lead to the belief that such arrangements are governed largely by random assignment or by aesthetics. It is, however, possible to collect hard data on the optimal arrangement of a set of displays (or indeed controls). Items of primary importance should be placed in the central field of view, with those of less importance on the periphery; importance should be decided by an analysis of the tasks or by expert rating. There is also the question of the way in which displays are used. If two values are frequently compared, then locating the two dials next to each other should optimise performance of that task, and moving displayed parameters to the more flexible computer screen may enable the two values to be displayed side by side.
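
As an illustration of the placement principle just described, the sketch below simply sorts displays by an importance weight and assigns the most important to the positions nearest the centre of the field of view. The instrument names, weights and panel positions are invented for the example; in practice the weights would come from task analysis or expert rating, as noted above.

```python
# A minimal sketch of importance-based display placement. Instrument names,
# importance weights and panel positions are hypothetical illustrations.

from math import hypot

# Candidate panel positions as (x, y) offsets from the centre of the
# operator's normal line of sight (arbitrary units).
positions = [(0, 0), (1, 0), (-1, 0), (0, 1), (2, 0), (-2, 0)]

# Importance scores, e.g. frequency of use weighted by criticality.
instruments = {
    "airspeed": 0.9,
    "altitude": 0.8,
    "fuel": 0.4,
    "engine_temp": 0.3,
    "clock": 0.2,
    "cabin_temp": 0.1,
}

def arrange(instruments, positions):
    """Place the most important instruments closest to the centre of view."""
    by_importance = sorted(instruments, key=instruments.get, reverse=True)
    by_centrality = sorted(positions, key=lambda p: hypot(*p))
    return dict(zip(by_importance, by_centrality))

for name, pos in arrange(instruments, positions).items():
    print(f"{name:12s} -> {pos}")
```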

It is unlikely that there will be one single solution to any display arrangement. The key thing is to optimise the arrangement for the task in hand. This requires an understanding not only of the task demands, but also of the amount of space available and of the need to integrate the visual displays with other system devices such as controls.

Lighting conditions

The usability of a display depends upon sufficient light falling on, or being emitted by, the display. Warning lights and computer displays have light sources within themselves and, although adjustments may be made for ‘brightness’, there should be no problem viewing them, even in a darkened room. Supplementary lighting may be required for other displays. There is usually background lighting for car instruments, which can in turn be adjusted for brightness, but for many displays the natural or artificial light within the workspace determines whether they can be viewed or not. As well as providing sufficient light for an effective display, light sources can in turn give rise to problems. Variability in lighting from external sources may produce reflections when instruments are covered in glass. Contrast between the display and the external environment is also important. Dashboard instruments must remain legible by night and day: internal lighting must be sufficient to offset the reduction of visual acuity in low light while preserving the adaptation, or ‘night vision’, required to drive in the dark. Bright lights may also cause problems. Visual transients (on-off) automatically attract attention. This can be a useful way of orienting the user to a particular aspect of the display, but it can also be highly distracting if the signal is irrelevant to their current objectives.
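
The contrast requirements discussed above can be put in rough numerical terms. The sketch below computes two common contrast measures (Michelson contrast and a simple luminance ratio) for some assumed day and night luminance values; both the luminance figures and the 3:1 ratio used as a legibility check are illustrative assumptions rather than figures taken from the text.

```python
# A rough sketch of how display contrast might be quantified under day and
# night conditions. Luminance values (cd/m^2) and the 3:1 ratio used as a
# legibility check are illustrative assumptions only.

def michelson_contrast(l_max: float, l_min: float) -> float:
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin), ranging 0 to 1."""
    return (l_max - l_min) / (l_max + l_min)

def contrast_ratio(l_target: float, l_background: float) -> float:
    """Simple luminance ratio between the brighter and darker of the two."""
    hi, lo = max(l_target, l_background), min(l_target, l_background)
    return hi / lo

conditions = {
    # (marking luminance, dial background luminance) in cd/m^2
    "daylight": (120.0, 40.0),
    "night, backlit": (8.0, 1.5),
    "night, backlight too dim": (2.0, 1.5),
}

for name, (target, background) in conditions.items():
    ratio = contrast_ratio(target, background)
    verdict = "legible" if ratio >= 3.0 else "check backlighting"
    print(f"{name:26s} Michelson={michelson_contrast(target, background):.2f} "
          f"ratio={ratio:.1f}:1 -> {verdict}")
```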

Lighting conditions are a good example of the interactive nature of ergonomics. Optimising a display for different conditions, for example day-time and night-time use, requires an understanding of the way the visual system responds to variability in light, together with the ability to use technology to maximise the user's ability to perceive and interpret the information portrayed. For example, reading text on paper will benefit from overhead light. When, however, the user moves to reading text from an adjacent computer screen, the overhead light can produce reflections from the screen’s glass cover, making the text difficult to read.

Static versus dynamic displays

Another obvious way in which displays differ from each other is the extent to which they function purely as a static source, as in notices, signs, labels and instructions, or represent dynamically changing features, typically called parameters. Analogue clocks display dynamic changes, though it may be difficult to see the hands moving. Other displays associated with processes and changing states are much more dynamic in nature. Sometimes the changes are discrete, as in the changing indication of how many miles or kilometres a car has travelled; sometimes they are continuous, as when the speedometer needle moves up and down with changing speed. The complexity of a modern aircraft cockpit reflects the number of parameters that are being simultaneously measured and displayed for the pilot’s attention. An important consideration with these complex displays is the limit on the number of items that humans can simultaneously process (cognitive psychologists have traditionally referred to this limit as the ‘attentional bottleneck’). Used carefully, dynamic displays can provide real-time information to the user about multiple parameters. Poorly designed dynamic displays, in contrast, run the risk of confusing the user by overloading their perceptual and attentional resources.

2.3.3 Display coding

Most displays follow the well-established perceptual principle of figure-ground segregation. In other words, the display capitalises on the visual system's capacity to group objects on the basis of common attributes (e.g. contrast). The simple example of a NO SMOKING sign has been used already. The black symbology of the cigarette stands out against the mainly white background of the sign. Added to this are the red bar and circular surround that encase the symbol and codify it as a prohibitive warning sign. Without the bar across, it might be seen as an ambiguous sign that could mean either that smoking is allowed or that it is not; the bar is widely recognised as an indication of prohibition. The symbol thus provides both figure-ground segregation and coding of the information displayed. The key thing about the symbol is that it aims to convey its message without the need for written or spoken language to be understood by the user (other than a universal language of symbology). Thus the utility of the display is maximised by using physical features to communicate symbology that is easily interpreted on the basis of experience. The hands on a traditional clock are highly salient because they contrast with the clock's face. The configuration of the hands, in turn, is interpreted on the basis of (almost) universal coding principles, in which Arabic numerals mark the two twelve-hour cycles of the twenty-four-hour day.

The principles of the clock extend to most other dials and gauges. Some have circular movement of the pointer; others have movement within a restricted arc. The fuel gauge on a car reads from empty to full in a fairly simple way, with the danger region of low fuel usually coloured red; some cars back this up with an auditory alarm that sounds when the gauge reads near the empty mark. Another coding mechanism is the movement of the pointer from left to right to indicate a change from a low reading to a higher one. Some cars have gauges to indicate the temperature of the engine coolant; the movement is again from left to right, from low temperature to high. In this case, the red marking is used at the high end of the scale to indicate a dangerously high coolant temperature. The same coding mechanism, red marking, thus indicates a low value in the case of fuel but a high value in the case of temperature. This might seem contradictory, but it is understandable once it is recognised that red is commonly associated with hazard: red serves as a commonly understood symbol for danger (see Figure 2.1).

A good example of the complexity introduced as technology develops is provided by the aircraft altimeter (Rolfe, 1969). The earliest aircraft flew with what were effectively barometers in the cockpit. These indicated the change in air pressure with height above the ground, and the values were translated by the pilot into approximate altitude. As the first aircraft were not flying very high, and as the pilots could usually see the ground, this did not initially present many problems. However, as aircraft performance developed and planes were able to fly at greater altitudes, the need for an instrument that gave a specific reading of height became recognised. Increasing altitude performance led to the problem of how to display, on the same instrument, values that were important in landing an aircraft, i.e., tens and hundreds of feet, and values that represented current flying height, very often in thousands of feet. The solution was the so-called ‘multi-pointer altimeter’. This had a number of pointers against a standard background of digits 0 to 9: one pointer referred to tens of feet, one to hundreds and one to thousands of feet. Economy was achieved by having all the values displayed within one instrument. Problems arose when such instruments were misread; a number of crashes in which aircraft flew into hillsides in poor visibility were blamed on the misreading of these instruments.
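
To see why the multi-pointer format invited misreading, it helps to write out the arithmetic the pilot had to perform in reverse, combining three pointer readings into a single altitude. The sketch below follows the pointer assignment described above (tens, hundreds and thousands of feet) and is purely illustrative.

```python
# A purely illustrative sketch of the arithmetic behind the multi-pointer
# altimeter described above: one pointer for tens of feet, one for hundreds
# and one for thousands, each read against the same 0-9 dial face.

def pointer_positions(altitude_ft: int) -> dict:
    """Decompose an altitude into the three pointer readings (each 0-9)."""
    thousands, rest = divmod(altitude_ft, 1000)
    hundreds, rest = divmod(rest, 100)
    tens, _ = divmod(rest, 10)
    return {"thousands": thousands % 10, "hundreds": hundreds, "tens": tens}

def read_altitude(thousands: int, hundreds: int, tens: int) -> int:
    """The pilot's task in reverse: combine three pointer readings into feet."""
    return thousands * 1000 + hundreds * 100 + tens * 10

print(pointer_positions(1250))   # {'thousands': 1, 'hundreds': 2, 'tens': 5}
print(read_altitude(1, 2, 5))    # 1250
# Confusing which pointer is which turns 1,250 ft into a very different height:
print(read_altitude(2, 1, 5))    # 2150 -- a 900 ft error from one misreading
```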

A solution, based upon advances in display technology, was the integration of a digital read-out within the circular display. The digital read-out provided a precise indication of height, while the dials preserved the pilot's ability to judge quickly the rate of climb or descent. The new technology allowed designers to incorporate discrete and relative information about the aircraft's altitude, in different formats, at a single location within the display.

Case study 1

Visual icons are used in numerous applications and their design should be informed by properly conducted design and evaluation studies.

Lindberg and Nasanen (2003) report such a study.

Study this reading at this point with the following questions in mind.

1. What is the context of this study?

2. What procedures exist for evaluating the size and spacing of visual icons?

3. How was the study conducted?

4. What were the outcomes?

5. What are the good and the bad features of this study?

2.4 AUDITORY DISPLAYS

The displays that characterise human-machine interfaces are predominantly visual in nature. However, other sensory modalities provide additional resources for the transmission of information. Auditory signals offer a number of advantages, particularly in complex visual environments. While visual displays require direct observation, humans are sensitive to sounds from any direction, so auditory displays can be used to orient attention to events outside the user's field of view. Auditory displays can also use speech to communicate information. Speech can be perceived directly or through communication systems (e.g. radios or telephones), from recordings (e.g. telephone message tapes or digital recordings), or as speech synthesised by computers. In addition to these sources, a wide variety of signals can be used to give us information. The invention of the bell is perhaps a good example of early technology that fulfilled a particular purpose: the ringing of bells in religious buildings was one way of drawing the attention of an individual or group to a key event, and messages could in turn be codified in terms of the way in which a bell or bells were used. Newer technology gives rise to various buzzers, hooters, sirens and the like. These can transmit information by manipulating a variety of attributes, including the amplitude (volume), the frequency (pitch) and the timing between component parts of the signal. An intermittent signal that increases in amplitude, pitch and the number of pulses per minute provides a powerful means of communicating the time course of a particular process. Like the whistle on a steam kettle, such alarms can be considered the auditory analogue of an egg timer, providing information about a forthcoming event as well as signalling the requirement for a timely response.
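
The kind of escalating signal just described is easy to sketch in code. The example below (assuming NumPy is available) builds a train of tone pulses whose amplitude, pitch and repetition rate all rise over time; the particular frequencies, durations and growth factors are arbitrary choices rather than recommended values.

```python
# A minimal sketch of an auditory urgency ramp: tone pulses whose amplitude,
# pitch and repetition rate all rise to convey increasing urgency.
# All numeric values are arbitrary illustrations.

import numpy as np

SAMPLE_RATE = 44_100  # samples per second

def urgency_ramp(n_pulses: int = 8,
                 start_hz: float = 500.0,
                 start_gap_s: float = 0.50,
                 pulse_s: float = 0.15) -> np.ndarray:
    """Return a mono waveform of n_pulses tone bursts of rising urgency."""
    pieces = []
    for i in range(n_pulses):
        freq = start_hz * (1.15 ** i)          # pitch rises with each pulse
        amp = 0.2 + 0.8 * i / (n_pulses - 1)   # amplitude rises with each pulse
        gap = start_gap_s * (0.8 ** i)         # pulses arrive faster and faster
        t = np.arange(int(pulse_s * SAMPLE_RATE)) / SAMPLE_RATE
        pieces.append(amp * np.sin(2 * np.pi * freq * t))
        pieces.append(np.zeros(int(gap * SAMPLE_RATE)))
    return np.concatenate(pieces)

signal = urgency_ramp()
print(f"{len(signal) / SAMPLE_RATE:.2f} s of audio, "
      f"peak amplitude {np.abs(signal).max():.2f}")
```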

Although there is a tendency to think of auditory displays in terms of alarms, it is also important to be familiar with the way auditory signals are used by everyday office equipment and domestic appliances. Sound provides a particularly useful mechanism for feedback, indicating that an input has been accepted by a device. Auditory icons are now used to signal particular events such as a key stroke, the selection of a file or the opening of a new application. An auditory event such as a ‘whoosh’ to signal the sending of a message provides a medium that is intuitively simple as well as computationally less demanding than a visual equivalent. Like the visual icons traditionally used in computer interfaces, auditory icons are designed to map virtual events to the sounds associated with similar objects in the real world (Brewster, 2002). The tendency of the auditory system to integrate separate sounds into a coherent percept (a series of tones is often perceived as a rhythmic pattern) also provides a basis for effective pattern recognition. Adaptation, or ‘habituation’, means that an auditory sequence signifying the routine functioning of a particular system will quickly become background noise as long as it is presented at a comfortable volume. Sounds that are at odds with the established pattern are a highly salient cue to changes in routine function and provide the user with an easily detectable warning of potential problems.

Sanders and McCormick (1993, p. 169) list the circumstances under which auditory signals may be preferred over visual ones.

If we refer back to Easterby’s (1984) list of processes that would be supported by visual displays, we can see that most of these also have relevance to auditory displays: detection, discrimination, identification, recognition and comprehension.

One issue with auditory signals is that they must be presented at a level that exceeds the ambient level of sound (approximately 10 dB louder than the background noise). Whilst an office is a fairly quiet environment, in many workplaces machinery or noise from other people may make the detection of a particular signal more difficult. Effective auditory displays require signal-to-noise ratios that are appropriate to the context and environment in which they are used. Producing a startle reflex may be desirable for a fire alarm, but the same reaction to an in-car alarm might have disastrous consequences. The temporal structure of sound also places certain limits on its use in displays. Speech, for example, is sequentially organised, so the listener has to attend to the whole message for accurate comprehension; maintaining information that is no longer present in the display entails a working memory load that may be absent with visual displays. The spatial resolution of the auditory system is also less than that of the visual system, with sources tending to merge when they are not highly discriminable on non-spatial dimensions. Masking and the reflection of sound off different surfaces can also make the localisation of auditory signals difficult. In open country, it is easy to discern the direction of an approaching emergency vehicle from its siren. In built-up areas, however, where the sound echoes off buildings and other vehicles, discerning the direction from which an emergency vehicle is coming can be difficult. To counter this, modern sirens are often interspersed with a loud broad-band noise. Sounding like an old-fashioned foghorn, these sounds include frequencies that are more easily localised than the high-pitched wails or alternating tones traditionally used. Emergency sirens are also prone to masking, with individuals often listening to loud music within sound-proofed cars. Emergency vehicle drivers often complain about being ignored by people who are clearly not aware of their presence, even though a very loud alarm is sounding!
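
The ‘10 dB above background’ rule of thumb mentioned above is straightforward decibel arithmetic. The sketch below applies it to a few example environments; the ambient noise figures are rough illustrative values, not measurements.

```python
# A small sketch of the decibel arithmetic behind the "about 10 dB above
# background" rule of thumb. Ambient levels are rough illustrative figures.

import math

MARGIN_DB = 10.0  # rule-of-thumb margin above ambient noise

def required_signal_level(background_db: float, margin_db: float = MARGIN_DB) -> float:
    """Signal level needed to stand clearly above the background noise."""
    return background_db + margin_db

def combined_level(*levels_db: float) -> float:
    """Overall level when several incoherent sound sources are present."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

environments = {"quiet office": 45.0, "open-plan office": 60.0, "machine shop": 85.0}

for name, noise in environments.items():
    alarm = required_signal_level(noise)
    total = combined_level(noise, alarm)
    print(f"{name:16s} ambient {noise:4.0f} dB -> alarm about {alarm:4.0f} dB "
          f"(combined field {total:.1f} dB)")
```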

A useful summary of the principles of auditory display is provided by Sanders and McCormick (1993).

Case study 2

Stanton and Edworthy (1998) report on work on auditory warnings. Specifically, the paper explores the recognition of certain sounds that could be used in hospital intensive care units; a particular methodology was used to isolate key variables. You should study this paper with a view to producing guidelines for designing auditory displays, or refining those given above.

Study the reading at this point with these questions in mind.

1. What has previous research told us about this topic?

2. Why was this particular method used?

3. Are the various findings compatible with each other?

4. How useable, in terms of helping produce guidelines, are the findings?

2.5 MULTIMODAL DISPLAYS

Objects in the real world are rarely perceived in a single modality. In face-to-face conversation, we usually look at the person we are listening to. When we move a heavy item in the workplace, we can see, hear and feel the object move. Perceptual integration at the neural level means that information from different senses is combined to form multimodal representations (O'Hare, 1991). These provide a rich source of information, with information from one sense often augmenting information from another; it is much easier to tell whether someone is being insincere in a face-to-face conversation than on the phone. Responses to multimodal events (e.g. combined visual and tactile or auditory displays) are typically faster and more accurate than those to unimodal (e.g. visual only) events, providing a mechanism for enhancing the user's performance in certain situations (Spence and Driver, 2004). The growing complexity of artefacts, and the number of parameters that have to be monitored, has driven recent interest in the use of multimodal displays. Modern human-machine interfaces now routinely integrate display information in different modalities. Satellite navigation systems, for example, reduce the need for additional eye movements when driving by providing verbal directions. Multimodal displays also have the capacity to make a large number of applications accessible to users who have selective sensory deficits.

As with the visual and auditory displays described above, the development of multimodal displays reflects the use of technology to enhance usability. Based upon our understanding of the way humans perceive and process information, multimodal designs have drawn heavily on theoretical models such as Multiple Resource Theory (MRT; Wickens, 1984). According to this theory, human information processing is characterised by a number of separate channels or resources. Each resource is finite and is defined according to its position along three dimensions. The first dimension segregates perceptual information into separate modalities (e.g. visual and auditory). The second distinguishes three stages of processing: perception, cognition and response. The third segregates the type of information being processed into separate spatial and verbal ‘codes’ (see Figure 2.2).

An important characteristic of MRT is that it allows designers to model workload on the basis that multiple tasks will draw upon either the same or separate processing resources. Concurrent tasks that draw on the same resources will interfere with each other, exhausting the user's capacity and leading to decrements in performance. Concurrent tasks that draw on separate resources, however, should enable the user to process multiple sources of information without adversely affecting performance on either task. According to this model, multimodal displays will confer the greatest benefits when informational content is directed at separate resources. By providing a verbal commentary on the distance to a forthcoming junction, for example, modern satellite navigation devices are designed to support the driver without adding to the visual load associated with busy driving conditions.
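
The resource-overlap logic of MRT can be sketched as a toy calculation. The example below is not Wickens' computational workload model; it simply scores the expected interference between two concurrent tasks by the proportion of the three MRT dimensions on which they compete, with the task profiles invented for illustration.

```python
# A toy sketch of Multiple Resource Theory's overlap logic: the more of the
# three dimensions (modality, processing stage, code) two concurrent tasks
# share, the more interference we expect. Illustration only; the task
# profiles are invented and this is not Wickens' computational model.

from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    name: str
    modality: str  # "visual" or "auditory"
    stage: str     # "perception", "cognition" or "response"
    code: str      # "spatial" or "verbal"

def interference(a: Task, b: Task) -> float:
    """Fraction of MRT dimensions on which the two tasks compete (0.0 to 1.0)."""
    shared = sum([a.modality == b.modality, a.stage == b.stage, a.code == b.code])
    return shared / 3

driving = Task("tracking the road", "visual", "perception", "spatial")
spoken_directions = Task("listening to sat-nav", "auditory", "perception", "verbal")
map_reading = Task("reading a map", "visual", "perception", "spatial")

print(interference(driving, spoken_directions))  # 0.33 -- largely separate resources
print(interference(driving, map_reading))        # 1.0  -- same resources, high conflict
```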

While MRT is informed by a great deal of experimental psychology in normal and clinical populations, it should be seen as a heuristic for design rather than an absolute representation of human information processing. Boundaries between resources may be more flexible than the model suggests. As any learner driver will attest, attention to visual aspects of the environment can be disrupted by concurrent auditory information, especially when it comes from the back seat! Highly salient stimuli (e.g. bright or loud events) will capture attention regardless of modality, causing momentary lapses in concentration and interfering with performance on a range of different tasks. Perceptual integration, the tendency of the perceptual system to group events occurring at the same time and place, also means there is a great deal of cross-talk between the separate resources proposed by MRT (Wickens, 1984; 2008). While multimodal displays may reduce the competition between concurrent tasks, they are unlikely to remove this competition altogether. Designers of multimodal displays must, therefore, consider the functional benefits to the user while acknowledging the role of the user's expectations, expertise and sensitivity to the different types of information displayed.

2.6 OTHER MODALITY DISPLAYS

Although unimodal and multimodal displays have tended to focus on visual and auditory communication, it is worth spending a little time examining some of the other modes in which information has been displayed. Whilst at first sight some of these may seem somewhat specialised, the underlying principles are worth examining, because they illustrate the search for a display medium appropriate to the combination of task demands and user characteristics. We have already seen that these factors determine whether a visual display is used, and of what type. We have also seen how alternatives may be provided for a particular task demand; thus a visual display may be supplemented by an auditory warning signal in order to catch the user’s attention and draw it to a particular task parameter. Some of the examples below follow this general principle of choosing the right ‘horses for courses’.

There is another reason for exploring these issues: the case of individuals for whom certain display media present challenges. A totally blind person will receive no benefit from a sophisticated visual display. For such disabilities, a range of technology has been developed to enhance people's lives. This challenge to technology arises not only in enhancing the everyday lives of disabled people, but also for the occupational psychologists tasked with assessing, and potentially adapting, a working environment for an individual with a disability. The principles of display design remain the same; it is just that more specialised demands are placed upon the designer in terms of adapting to the characteristics of the user. Some specialised applications will not allow such adaptations, but the pace of technological change will steadily reduce their number.

2.6.1 Olfactory displays

The senses of taste and smell are a very important part of our everyday lives. They do not, at first sight, constitute major information channels for our working lives, with obvious exceptions in areas such as food selection, preparation and serving, or the perfumery trade. It is very difficult to find examples of these channels being expressly used as display channels, conveying designed information rather than everyday information. The smell of smoke in most working environments indicates a potential hazard and should be responded to quickly, but it cannot be said that this mode of display is common. There are, however, examples of particular smells being used to indicate hazard. Fragrances have been added to certain gases which, although harmful, have no noticeable smell of their own. A good example is the way in which very distinctive fragrances have been added to the air supply in mines to indicate the need to evacuate (Sanders and McCormick, 1993).

Some examination of this case is worthwhile, as it illustrates a specialised application with its own set of problems. It is not possible to supply every mineworker with either a visual or an auditory display close by to indicate the need to evacuate the mine: the areas in which miners work are very widespread and constantly changing. What is common to all miners underground is a vented air supply, constantly refreshed by a forced airflow within the mine. This air reaches all the miners and so becomes a natural medium in which to convey information. It is interesting that only one signal is ever sent in this mode, and that it indicates the most extreme hazard.

2.6.2 Tactile and movement displays

A common example of this mode of display is the use of a distinctive shape or edge on certain controls, e.g. rotary knobs or the tops of levers, to make them distinguishable from other such devices placed close by. This enables the user to tell one control from another by touch alone, without having to look down or needing illumination. Such coding can also be provided by size, and the roughness or smoothness of a surface can convey information about, for example, which side of a material to use for a particular process. With particular reference to the disabled, the study reported by Courtney and Chow (2001) concerns the extent to which blind people can use their feet to discriminate shape symbols on pavements. This is already done in some countries, where areas adjacent to pedestrian crossings are marked on the pavement by raised bumps; a key source of information for visually impaired people is thereby presented in an alternative form.

The natural vibrations and movements of machinery have long been used as an indication of the current state of operation. This principle can be extended by deliberately moving controls to indicate a changing situation. The best-known example is the ‘stick shaker’ used in aircraft: to indicate a problem, the control stick in some aircraft is made to vibrate, and the hand usually resting on the controls picks up this source of information. This is another example of using a different mode of information display when the other senses are heavily loaded. Such displays have become less important as auditory displays have become more common, but they still offer a very direct mode of input to users. The use of vibrating elements has had other applications. A rather specialised one, which can be generalised to a range of possible applications, involves wearing a set of vibrating elements next to the skin (Geldard, 1957). The research demonstrated that a tactile language could be learned and that effective communication could be achieved with this new medium. Such devices could readily be adapted to other situations where the auditory or other channels are not available. A familiar example is the cellular phone, which can be set to vibrate instead of producing an audible tone.

2.7 CONCLUSIONS

The extent to which a particular form of display is used as an information source will depend upon a range of factors. The technology available is constantly changing and adapting. Some developments involve a high level of expense, for example virtual reality. At the other end of the spectrum, there is the mass production of items that may have an impact but cost very little. For example, the development of fluorescent inks and dyes has enabled the widespread use of eye-catching finishes on paper and other surfaces, and these may well have the potential to enhance a warning message. The important point here is that a fairly cheap development in technology may enable us to enhance certain forms of displayed information. However, given ever-changing technology, it is important to understand both the characteristics of the user and the characteristics of the tasks that the user may face.

A display may work very well in one context but less well in another. A digital watch gives a very accurate reading of the current time, but when the display reads 47 minutes past the hour and we want to know how many minutes are left until the next hour starts, we are faced with a mental arithmetic task: subtracting 47 from 60. The ‘old-fashioned’ analogue watch requires only a glance to show that a little over ten minutes remain. This rough estimate from a glance may be sufficient for the task in question.
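
The arithmetic the digital display pushes onto the user is trivial to state, which is exactly the point; as a throwaway illustration:

```python
# A throwaway illustration of the extra mental step a digital time display
# imposes: turning "minutes past the hour" into "minutes until the next hour".

def minutes_to_next_hour(minutes_past: int) -> int:
    return (60 - minutes_past) % 60

print(minutes_to_next_hour(47))  # 13 -- the analogue face conveys roughly the
                                 # same answer at a glance, as "a bit over ten
                                 # minutes to go"
```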

Matching the form of technology to the characteristics of the user, in performing a task that has meaning within a particular system, is an example of the central issue within ergonomics. The challenge therefore is to understand the user and understand the nature of tasks that the human-machine system generates. This in turn requires a good working relationship with the scientists and engineers who are tasked with developing the system in question.



QUESTIONS FOR CONSIDERATION

1. Why is it unreasonable to expect a user to adapt to or learn to use a poorly designed display?

2. How can displays be adapted for individuals who have a sensory disability?

SUGGESTED FURTHER READING

Noyes, J. (2001). Designing for Humans. Hove: Psychology Press (Chapter 1).



SOURCES CITED IN THE TEXT

Brewster, S.A. (2002). Non-speech auditory output. In J. A. Jacko, & A. Sears (Eds) Human-Computer Interaction Handbook (pp. 220-239). Mahwah, NJ: Lawrence Erlbaum Associates.

Buck, J.R. (1983). Visual displays. In B.H. Kantowitz & R.D. Sorkin (eds.) Human Factors: Understanding People-System Relationships (pp. 195-231). New York: Wiley.

Chapanis, A. (1965). Words, words, words. Human Factors, 7, 1-17.

Courtney, A.J. & Chow, H.M. (2001). A study of the discriminability of shape symbols by foot. Ergonomics, 44, 328-338.

Easterby, R. (1984). Tasks, processes and display design. In R. Easterby & H. Zwaga (Eds.) Information Design (pp. 19-36). Chichester: Wiley.

Edworthy, J. (1994). Urgency mapping in auditory warning signals. In N.A. Stanton (ed.) Human Factors in Alarm Design (pp. 15-30). London: Taylor & Francis.

Geldard, F.A. (1957). Adventures in tactile literacy. American Psychologist, 12, 115-124.

Lindberg, T., & Nasanen, R. (2003). The effect of icon spacing and size on the speed of icon processing in the human visual system. Displays, 24, 111-120.

O’Hare, J.J. (1991). Perceptual integration. Journal of the Washington Academy of Sciences, 81, 44-59.

Rolfe, J.M. (1969). Human factors and the display of height information. Applied Ergonomics, 1, 16-24.

Sanders, M.S. & McCormick, E.J. (1993). Human Factors in Engineering and Design (7th edn.). New York: McGraw-Hill.

Spence, C. & Driver, J. (2004). Crossmodal Space and Crossmodal Attention. Oxford: Oxford University Press.

Stanton, N., & Edworthy, J. (1998). Auditory affordances in the intensive treatment unit. Applied Ergonomics, 29, 389-394.



Wickens, C. D. (1992). Engineering Psychology and Human Performance (2nd edn.). New York: Harper Collins.

Wickens, C. D. (2008). Multiple resources and mental workload. Human Factors, 50, 449-455.
