Sound design

[Here something should be added that puts this in context with Mathilde's section on sound as object; what distinguishes these chapters?]

In the following chapter we will look at the function of sound in our game and try to explain, with different theories, how and why sound has great potential, while remaining realistic about the limits and difficulties of dealing with sound in a game.

We start by looking at the role of sound in sound-based games and move from there to some premises for using sound, that is, the auditory system, the listener and aural perception. Then we are ready to delve into auditory interfaces: what has been going on in that area?

With the potentials and limitations of auditory interfaces in mind, we turn our focus back to the 'sound-based game' and look at what kind of game play fits the interaction possibilities mentioned before. What can be made with sound alone?

What is being done today to improve sound-based games? The answer will lead us into further considerations about perception theories, now taken from the audiovisual perspective, along with the interactivity factor and adaptive music.

Then we will take a much closer look at sound composition and how to make use of all the above. At the very last, a walkthrough of our own sound composition and design.


Introduction: the purpose of sound in the sound-based game



[At this point in the report I suppose we have been looking at various aspects of our game: the positioning game as a ubiquitous computer game vs. an augmented real-life game, up to some more abstract levels such as sound art, soundscapes and virtual realities…

I also assume a discussion about game play has occurred in some of the previous chapters, but I nevertheless start by putting things in context…]
Sound has a twofold purpose in our game: a) being informative and b) making engagement and immersion more likely. However, this division is not to be understood literally, in the sense that the sounds in our game are all either informative or immersion-motivating. The two purposes overlap, and one could not exist without the other. We will nevertheless have to discuss sound from many different aspects, and so far we have been more on the 'immersion'/'engagement' level. Let us take a look at the informative value and possibilities of sound.

Recalling some basic concepts from the previous chapters: game play is the core of a game, its raison d'être, and the rules of the game describe the game play. The interface of the game enforces the rules on the players. Finally there is the game universe, the setting of the game, which often binds the game to a story with some 'clues and props', or simply enhances the atmosphere and stimulates increased immersion.

Once the basic idea of the game, i.e. the game play, is decided, a big task the game designer faces is figuring out how to enforce the rules of the game on the player. That is an important part of the human-computer interface (HCI). How is the user going to interact with the 'system', and how is the necessary information (i.e. status, progress, events etc.) going to be mediated from the system to the player?

Therefore a concrete approach to sound-based games is to define them as games where the game play is enforced with auditory elements instead of visual ones, that is to say, where all or most of the necessary information is presented with auditory elements instead of visual ones.

Presenting information with auditory elements takes us to a subfield of HCI that many researchers claim has not been given the recognition it deserves; the tone of the following passage is not uncommon: "The possibility of auditive modality (such as speech, signal, and natural sounds) has been more or less neglected in computer interfaces. Mostly some use of beeps and clicks and maybe background music." … "A huge potential lies in the human auditive system that has not been fully utilized in human-computer-interaction" (Kallinen 2003: 1).

There has long been a call for action in this subfield of HCI, sometimes called 'auditory interfaces', not least now, as interfaces strive to break out of their traditional modes of mice, screens and keyboards. And there we are, at your service: Worlddomination!



the auditory system – the listener – aural perception

When exploring the potentials of interaction by means of auditory elements, we must start by looking at the nature of listening. The auditory system: what is it and how do we use it?

The sense organ of the auditory system, the ear, consists of inner and outer ears as well as muscles that make it capable of orienting towards a source of sound (Gibson 1968: 78). Hearing happens because of sensitivity to physical vibration within certain ranges of frequencies and intensities (Truax 1984: 13). Gibson says about the development of the ear: "In mammals the mobile outer ears developed, along with a more elaborate middle ear. With this went the elongation of the cochlea into the spiral shape of a snail shell." (Gibson 1968: 78). With this developed a very good ability to localize and discriminate sound sources, which presumably occurred because such information contributed to the survival potential of the species (Truax 1984: 13): "Sensitivity to both the detail of physical vibration within an environment and its physical orientation as exposed through its modifications of those vibrations" (Truax 1984: 16).

The psychologist James J. Gibson, the originator of 'ecological psychology', emphasizes his belief that recognition of the coevolution of animals and their environments is fundamental to understanding how and why the senses function as they do.



Wave front and wave train (Gibson 1968: 81) are important ecological facts for auditory perception. The wave front affords orientation and localization: it is specific to the direction of the source, since the wave hits the two ears at slightly different times. The wave train affords discrimination and identification: it identifies what kind of mechanical disturbance happened at the source.
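To make the wave-front idea concrete, here is a minimal sketch in Python (our own illustration, not something from Gibson or Truax) of the interaural time difference that localization builds on, assuming a simple far-field model and a typical distance between the ears:

    import math

    SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
    EAR_DISTANCE = 0.215    # m, a typical distance between human ears

    def interaural_time_difference(azimuth_deg):
        """Far-field estimate of how much earlier (in seconds) the wave
        front reaches the nearer ear for a source at the given azimuth
        (0 = straight ahead, 90 = directly to the right)."""
        return EAR_DISTANCE * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

    # A source 45 degrees to the right reaches the right ear first:
    print(interaural_time_difference(45.0))  # about 0.00044 s, i.e. ~0.44 ms

Differences of well under a millisecond are thus the raw material for the localization that the wave front affords.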

Gibson lists the potential stimuli for the auditory system, i.e. the kinds of information that might be picked up by a listener. Examples of the kinds of mechanical disturbance that occur in the environment are: waterfalls, which broadcast continuously; wind or air friction, which is intermittent; the rolling, rubbing, colliding, or breaking of solids, which can be abrupt; the behavioural and vocal acts of animals; the speech and musical performances of man; and the whole gamut of machine sounds of technological civilization.

All of these events have a time course, and most have a beginning and an end. (Gibson 1968: 79)

How we extract and use acoustic information is one of Barry Truax's (add footnote here about Truax being one of the founders of the World Soundscape Project) questions in the book Acoustic Communication. He thinks very little is known about it compared to the vast amount of knowledge on the physical behaviour of sound (Truax 1984: 16). He starts his survey by looking in detail at the listener. The famous "cocktail party" example describes how we can spot our name being said at the other end of a noisy room, an incident that can turn our listening focus completely, so that suddenly we only hear the voice that said our name (is it like this? Does anyone want to explain it better?). Hearing can be regarded as passive, since we do not need to be conscious to hear, but behaviour like that at the cocktail party reminds us of the complexity of hearing. It is active because of the different levels of listening: we can listen for and listen to. Truax takes three examples to describe the different levels of listening, from the most active listening-in-search to the less active listening-in-readiness (Truax 1984: 17-27).

Indeed, it is all about extracting information from the great amount of detail the ear can sense. There is no way for the brain to register all the data it gets, and it is necessary to use references to memory and the past to perform the extraction. Detailed differences become patterns that we recognize; sometimes we even have social relations to them.

[Not finished! – Background listening – Listeners' preferences]



auditory interfaces

“hearing mind and a thinking ear, with or without the actual sound being physically present”

Now, a little wiser about how the auditory system works, we are closer to what is interesting for an interaction designer or, for that matter, an acoustic designer. Both want to take the question "how do we hear/listen?" much further and try to explain how we make use of what we hear, or how it affects us.

The following questions are taken from a website by an experimental sound artist concerned with aural perception (is it bad to refer to such an unacademic website? I think the questions are good in any case! http://soundexp.freeshell.org/docrec/modelstxt.html):



  • When does background sound/noise/music go into the background? When is it in the foreground?

  • When do two sounds have binaural interest? Three sounds... trinaural, four sounds...?

  • At what point does the quantity of sound sources attain perceptual conglomeration?

  • How does harmonic interest encourage multiple sound source recognition?

  • How is the aural attention span affected by different sound sources?

  • How does literary audio affect attentive aural perception versus nonliterary audio?

  • How does a static sound source affect attentive aural perception duration?

  • What is the average attentive aural perception duration?

(To do here, if we use it: a little summary of why this is interesting :)
Morten Breinbjerg, in a lecture on auditory spaces in computer music and multimedia, comes up with some similar questions:

  • How does sound convey information about actions in real and virtual spaces?

  • How do we relate to real world sound?

  • How does sound inform us about the environment?

  • How can the knowledge of sound art and sound art experiences contribute to the making of a theoretical frame in relation to multimedia design?

  • How do we create narratives of the world through sound?

  • What features of real world sounds are most important in order to recognize them as indexes to a specific?

(Breinbjerg: 2000 page? )
One approach to explaining the way we hear/listen is to see it as an active process of construction from environmental stimuli and previously stored knowledge (the constructivist theorists). However, many HCI people (especially those who focus on auditory display) have chosen the ecological approach that originates in J. Gibson's perception theories (footnote). The ecological approach to perception argues that perception is a direct process, in which information is simply detected rather than constructed. For Gibson, sense perception is sensitivity to information (Gibson p. 58), including some kind of selection from the mass of simultaneous inputs; some sort of classifying, sorting and categorizing of them, so that the same input on a later occasion can be "recognized" as the same; and, third, some kind of operation on the sensations that compensates for the changes due only to the movements of the observer himself.

Gaver has long written about the potential of using sound in computer interfaces, for example natural sounds, synthesized or digitally recorded (HCI 248). In his article "What in the World Do We Hear? An Ecological Approach to Auditory Event Perception" he is concerned with developing a framework for describing sound in terms of audible source attributes: "An examination of the continuum of structured energy from event to audition suggests that sound conveys information about events at locations in an environment." (Gaver, Listening to everyday sound, p. 1)
Breinbjerg adopts the ecological approach and narrows his questions down to:

- How does sound convey information about its source?

- How does sound convey information about the action that excited the source?

- How does sound convey information about the location of the source-cause action?

- How does sound convey information about the environment of the source-cause action?

(Breinbjerg: 2000 page? )


The ecological approach says that we hear events, not sounds. Don Norman, the author of The Psychology of Everyday Things and well known within HCI, identifies two key principles that help ensure good HCI: visibility and affordance. "Controls need to be visible, with good mapping with their effects, and their design should also suggest (that is, afford) their functionality" (Preece, Jenny et al. 1994: 5). The principle of affordance is a central concept in interaction design because it refers to what a person thinks she can do with an object; 'constraint' is the complementary concept and refers to what cannot be done with an object. Gaver suggests three techniques for using affordances, one of which is to use sound affordances (Preece, Jenny et al. 1994: 279). He also points out that musical sounds have always been used in our culture to communicate, for example with clock chimes and church bells (Preece, Jenny et al. 1994: 251).

It is also important to get feedback. [About consequence here!]

Kallinen explores how to present and manage information in computers using audio, from the perspective of the semiotic theory of signs: index signs, icon signs and symbol signs. Like the others, he says that the basic perceptual processes for sounds are the recognition and comparison processes (Kallinen 2003: 3).

His list of the basic qualities of sounds:

- Timbre (sound source)

- Loudness

- Duration

- Location

- Pitch

and for more complex sounds:



- Tempo (organized durations)

- Melody (organized pitch sequences)

- Harmony (summed pitches)

- Texture (organized timbres and sound sources)
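To hint at how such a list could be put to work in a game, here is a hypothetical sketch in Python of a sound cue parameterized by Kallinen's basic qualities; the type and field names are our own illustration, not Kallinen's:

    from dataclasses import dataclass

    @dataclass
    class SoundCue:
        """One auditory cue described by Kallinen's basic qualities.
        The naming here is illustrative only."""
        timbre: str          # identifies the (virtual) sound source
        loudness_db: float   # intensity, e.g. dB relative to full scale
        duration_s: float    # how long the cue lasts
        location_deg: float  # azimuth of the source, 0 = straight ahead
        pitch_hz: float      # fundamental frequency

    # An informative cue: a short, high, chime-like sound from the left
    # could, for instance, signal a nearby opponent.
    opponent_ping = SoundCue("chime", -6.0, 0.3, -60.0, 880.0)

Each field is something the player can recognize and compare, which is exactly what makes such a cue informative rather than merely atmospheric.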


Furthermore, Barry Truax places the emphasis on the source in the communicational approach to acoustics: an exchange of information in the context of source and listener, with 'listening' as the core.

[Sonnenschein will also be mentioned here, and thus we have a relation between making sounds for films and HCI: Reduced, Causal, Semantic, Referential (more of a film approach though; should come together with B. Lankjær etc.)]



  • Sound design: exploring the source of sound making (63)

  • The sound qualities (65)

  • Physical effects of sound (70)

  • Sound goes directly to the subconscious (which gives the sound designer a lot of power) – attention

  • Hearing and the ear

Morten Breinbjerg's article:

- p. 36: The spatiality of sound; sound is not only 'now', but also 'here'!

- p. 38: On continuum and multidimensionality, as opposed to the periodic criterion

Physical sensing of sound


Here we have listed qualities of sound that are useful for our mission. A more detailed discussion of how they can be used will, however, follow in chapter ?. First we will take another look at the sound-based game and try to see what the discussion above contributes to it.

sound based games

Now it would be interesting to look at how these theories are applicable and useful for the game developer. As mentioned earlier, sound and music in games are often an afterthought and have little to do with the game play. Most games allow the user to turn off the sound/music, but there is no way the monitor could be turned off, since that is where the game is happening! This has nevertheless changed a lot recently with improved sound design, and the experience is probably quite different without sound, even though it does not affect the interaction directly. But this has to do with our discussion on immersion, which is to come in the next chapter. There are games that rely strongly, and even solely, on auditory elements. These are primarily games that have been developed with blind and visually impaired players in mind (often with navigation by use of pan, left/right), but also games that use some musical element, such as rhythm, as the main means of interaction…
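As a sketch of the pan-based navigation mentioned above, the following Python fragment computes constant-power stereo gains from the direction of a target; this is our own minimal illustration of the technique, not code from any of the games discussed:

    import math

    def pan_gains(azimuth_deg):
        """Constant-power stereo gains (left, right) for a target at
        azimuth_deg, clamped to the frontal arc from -90 (left) to 90."""
        azimuth = max(-90.0, min(90.0, azimuth_deg))
        pan = (azimuth + 90.0) / 180.0 * (math.pi / 2)  # map to 0..pi/2
        return math.cos(pan), math.sin(pan)

    # Straight ahead gives equal gains (0.71, 0.71); as the player turns,
    # the left/right balance tells them which way the target lies.
    left, right = pan_gains(30.0)

Constant-power panning keeps the overall loudness steady while the balance shifts, which is why it is a common choice for this kind of auditory navigation.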



[Shall we bring in our empirical material on sound-based games here?]

- Musical/rhythmical games: Ribbon the rapper, Rez… more?

- Games for the blind: Shades of Doom, Monkey..., BlindEye

- Games for the visually impaired: Dan Gardenfors...

- About sound in games where we thought it was good: No One Lives Forever.

[Here we could bring in your discussion on the games, Rikke]
[I will add something from my essay on computer games for the blind, and Dan's notes, which you got in a separate document, are a good contribution to that.]
To stimulate immersion and create atmosphere, the whole thing is put into some kind of context (the game universe).


  • What (what kind of game play is suitable for sound-based games?) … Refer to the section on what has been done, computer games for the blind etc.

  • How (how is the game play enforced with sound?) Refer to the HCI chapter.

  • And more how (how is the game universe created?) Refer to the chapters on soundscape and audiovisual perception theories (Birger Lankjær etc.), but emphasize the importance. (5.1 in my essay: computer games for the blind)

What is happening today? The use of more spatial techniques…


I think the potential for great fun is big, especially because the person experiencing the game will always be challenged to imagine its world. This is in a way more creative, just as reading a book is perhaps a more creative activity than watching a movie.

Of course this only happens if the sounds are successful and provide a whole experience…

Some rules of thumb to achieve that could be:

- Use descriptive sounds in terms of audible source attributes.



- Keep the sound universe consistent by using sounds that have similar origin.

contextualization of music
[Here Rikkes and Dans discussion on audiovisual perception… ]
interactive music composition
[Here Rikke's discussion on interactivity and adaptive music…? The introduction to that fits well as a transition from the previous subchapter, right?]

sound composition
Rikke
the sound design of our game (descriptive analysis)
Rikke

