by Suranga Chandima Nanayakkara, Elizabeth Taylor, Lonce Wyse and Ong Sim Heng
Imagine attending a concert during which you hear nothing or, at best, can only hear some of the music, as if you were listening to a poor-quality TV broadcast through tinny, crackling speakers. The music would lose much of its impact. But this is the kind of experience that many people with hearing problems have every day.
The authors of this article form an interdisciplinary team of researchers at the National University of Singapore (NUS) working on ways to help people with hearing difficulties have a richer musical experience, especially during live performances or when singing or playing a musical instrument.
A question central to our research is whether the musical experience (involving complex sensations such as being energized, feeling serene, or being captivated by the music) can be conveyed by sensory channels other than sound. In addition to conveying musical "information" per se, we would like to provide a complete musical "experience". For example, one fundamental feature of music is the beat or rhythm. A beat can have a powerful physical impact, related to a sense of movement or a feeling of dance. Can the different experiences evoked by listening to a march versus a waltz be conveyed by a purely visual presentation, or must there be a more direct, physical input? Perhaps a sensory input created by translating sound into vibration is required to convey the composer's intention. There are many questions to ask and many potential answers. Our team has spent three years exploring the possibilities and is now in the final phase of testing concepts with the help of deaf volunteers and people with normal hearing.
One particularly interesting question is whether there are relationships between sound and graphics such that some mappings work better than others. There is considerable support for this concept. The composer Liszt used to tell his musicians "more pink here" or "this is too black". Beethoven called the B-minor key the black key, and Rimsky-Korsakov considered F sharp to be strawberry red. Our team has developed a system that codes sequences of information about a piece of music into a visual sequence (Fig. 1). The system has a novel architecture that generates different types of displays, allowing us flexibility to experiment to find the most suitable mapping between musical and visual data streams. This system was presented at the Sixth International Conference on Information, Communications and Signal Processing held in Singapore in December 2007.
The musical beat, directly linked to human physiology via the perception of the heartbeat, is of fundamental importance. If nothing else, we need to convey the beat and the overlying percussive elements. However, the musical "key" is also important, since it evokes emotional empathy and wraps a composition in a particular feeling such as uneasiness, expectancy, reassurance or floating. Mapping key changes in real time is a computer engineering and signal processing challenge that we have met with the help of Dr Elaine Chew from the University of Southern California Viterbi School of Engineering, United States. And there are many more musical features that contribute to a rich musical experience: pitch, chords, harmonic structure, and the timbre of the instrument, to name but a few.
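To illustrate what real-time key tracking involves, here is a minimal sketch of one standard approach, the Krumhansl-Schmuckler profile-correlation method: a 12-bin pitch-class histogram accumulated over a sliding window is correlated against rotated major and minor key profiles, and the best match is taken as the current key. This is an illustrative assumption, not necessarily the method used in our system (Dr Chew's spiral-array model is another well-known technique).

```python
import numpy as np

# Krumhansl-Kessler key profiles: perceived fit of each pitch class
# (C, C#, ..., B) within a major or minor key whose tonic is C.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def estimate_key(chroma):
    """Return the key whose rotated profile best correlates with a
    12-bin pitch-class histogram (e.g. note durations in a window)."""
    best_r, best_key = -2.0, None
    for tonic in range(12):
        for mode, profile in (('major', MAJOR), ('minor', MINOR)):
            # Rotate the profile so the candidate tonic sits at its root.
            r = np.corrcoef(chroma, np.roll(profile, tonic))[0, 1]
            if r > best_r:
                best_r, best_key = r, f'{NOTES[tonic]} {mode}'
    return best_key
```

Run over successive windows of a performance, the changing output gives a key track that a display could map to, for example, a gradual colour shift.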
A large number of parameters can be extracted from a music data stream, and each can be mapped to several different visual properties. The number of possible one-to-one mappings is too large to be explored fruitfully without some guiding principles. By analyzing mappings reported in the literature and considering the results of studies of human audiovisual perception, we identified several avenues for exploration. For example, a visual object could change shape from spiky to smooth to represent the sound gaining additional harmonies, its size could shrink as the fundamental frequency rises, and its brightness could rise and fall with the amplitude of the sound.
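To make the idea concrete, here is a minimal sketch of such a one-to-one mapping. The feature names and numeric ranges are illustrative assumptions, not the parameters of our actual system.

```python
import math
from dataclasses import dataclass

@dataclass
class VisualState:
    spikiness: float   # 1.0 = sharp spikes, 0.0 = smooth blob
    size: float        # on-screen radius, normalized to [0, 1]
    brightness: float  # 0.0 = black, 1.0 = full intensity

def map_frame(num_harmonies, fundamental_hz, amplitude):
    """Map one analysis frame of musical features to visual properties.
    All ranges below are assumed for illustration only."""
    # A bare tone is spiky; the shape smooths out as harmonies are added.
    spikiness = max(1.0 - num_harmonies / 8.0, 0.0)
    # Size shrinks as the fundamental rises (log scale, roughly A1 to A7).
    octaves_above_a1 = math.log2(max(fundamental_hz, 55.0) / 55.0)
    size = 1.0 - min(octaves_above_a1 / 6.0, 1.0)
    # Brightness tracks amplitude directly.
    brightness = min(max(amplitude, 0.0), 1.0)
    return VisualState(spikiness, size, brightness)
```

Each line of the function is one candidate mapping; swapping, inverting or rescaling these assignments generates the alternative mappings that our user studies compare.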
As a starting point, we have selected a set of fundamental musical features and explored ways of mapping them to visual properties. This process includes designing and conducting a series of user studies to assess the effectiveness of each representation. Figure 2 shows the interface of the system we developed to test different music-to-visual mappings. It gives users a great deal of flexibility to create different visual effects, and for a given musical feature it is possible to compare the effects chosen by different users. We believe the findings of such analyses will be important, as they can provide clues to which mappings are most intuitively meaningful and therefore potentially useful in conveying the musical experience.
The human central nervous system (CNS) is capable of tremendous feats of signal processing, cross-correlating multimodal data streams and ultimately creating meaningful, informed and memorable musical experiences. Humans do "biological signal processing", which is fast, adaptive, low-cost and energy-efficient compared with non-biological or "cold" signal processing, which requires considerable computational power because the data streams must be analyzed by computer to produce optimal output. The human CNS is still largely a "black box" in data-processing terms, so it would be unforgivably presumptuous to assume that we can create a computerized system to replace its many and varied abilities.
A recent study reported that deaf people sense vibrations in the part of the brain that is normally used for hearing (Dr Dean Shibata, IAMA Newsletter, Volume 16, Number 4, December 2001). This helps to explain how deaf musicians can sense music, and how deaf people can sometimes enjoy concerts and other musical events. Dame Evelyn Glennie is a world-renowned percussionist who has been profoundly deaf since she was 12 years old. She says that she "feels" the pitch of her concert drums and a piece of music through different parts of her body, from fingertips to feet. This and much other evidence suggests that the experience deaf people have when "feeling" music is more similar to the experience of hearing music than is generally believed. It leads us to consider introducing vibrational or "haptic" input to enhance the musical experience, alongside visual displays.
If vibrations caused by sound are processed by the auditory cortex, a deaf person's perception of those vibrations may be nearly as informative as sensing the same sounds through normal hearing channels. It follows that if vibrations caused by sound could be amplified and sensed by a deaf person, this might increase their enjoyment of music. We are developing a sensory input device to attempt this: a "haptic chair". The chair is designed primarily to enhance lower-frequency vibrations, which are likely to derive from the musical beat, but it also amplifies higher sound frequencies. Initial tests of the prototype suggest that a comfortably seated listener is enveloped in an enriched sensation of the received sound.
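As an illustration of the kind of processing involved, the sketch below emphasizes the beat-carrying low band of an audio signal before it drives vibrotactile transducers. The cutoff frequency and gain values are illustrative assumptions, not the actual tuning of the haptic chair prototype.

```python
import numpy as np
from scipy.signal import butter, lfilter

def haptic_drive(audio, sample_rate, cutoff_hz=200.0, low_boost=4.0):
    """Emphasize low frequencies in an audio signal before it drives
    the chair's transducers. All parameter values are illustrative
    assumptions, not the prototype's tuning."""
    # 4th-order Butterworth low-pass isolates frequencies below the cutoff.
    b, a = butter(4, cutoff_hz / (sample_rate / 2.0), btype='low')
    lows = lfilter(b, a, audio)
    # Boost the lows; pass the remaining (higher) content at unity gain,
    # so higher frequencies are still conveyed, as the chair is meant to do.
    drive = low_boost * lows + (audio - lows)
    # Normalize so the boosted signal cannot clip the amplifier.
    peak = np.max(np.abs(drive))
    return drive / peak if peak > 0 else drive
```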
We believe it is important to conduct further surveys to learn more about what kinds of information help people with hearing problems to enjoy a live concert or a recorded piece of music. As a bonus, this approach might also improve the musical experience of people with normal hearing. Moreover, the results need not be limited to music: this research also has the potential to help the hearing-impaired sense a general acoustic environment, by providing an appropriate visual display or haptic experience that communicates the surrounding sound.