Positional Audio
Introduction
The addition of sound can greatly enhance the level of immersion experienced by the audience of a visualisation. OpenAL is an open, cross-platform API for multi-channel, three-dimensional positional audio. In an environment such as the Data Arena, it allows the audience to be enveloped in a 360-degree soundscape, in which directional audio cues can be used to draw the viewers' attention to key aspects of a visualisation.
We have developed a simple framework which may be added to an Omegalib script to implement support for audio within a visualisation. The framework provides easy access to OpenAL via the underlying PyAL library. In this tutorial, we will examine a simple example which demonstrates how to make use of this framework.
daAudio
The daAudio Omegalib module is a thin wrapper around PyAL, which is a Python language binding for the OpenAL library. The module provides the following features and functionality:
- High level abstractions for OpenAL components, including an audio emitter, listener and player
- Utility methods for loading sound streams from a range of audio file formats
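To give a feel for these abstractions before walking through the full example, here is a minimal sketch of the basic pattern (the same calls appear in the example below; 'buzz.wav' is simply a placeholder filename):

from daAudio import *

player = AudioPlayer()                          # manages playback of all emitters
emitter = AudioEmitter('buzz')                  # a named, positional sound source
emitter.set_position([0, 2, -4])                # place the source within the scene
emitter.queue(load_wav_file('buzz.wav'))        # load and queue an audio stream
player.play(emitter)                            # start playback of the emitter
player.update()                                 # push the new state to OpenAL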
In this example, we will demonstrate how to use these features to implement a positional audio emitter which follows a 3D cursor as it is moved interactively around the scene. This behaviour, complex as it sounds, is simple to implement using daAudio, requiring only a few lines of code.
Example
The scene generated by this example is very simple: it contains a 3D cursor object lit by a single light source. An (invisible) audio emitter is attached to the cursor object, playing a short clip of a buzzing mosquito in a continuous loop. As the cursor is moved around the scene using the selected input device (in this example, a spacenav controller), the position of the audio source is updated to match. You will hear the source grow louder or quieter as it moves closer to or further from the listener, which in this example remains at its default location at the origin of the scene.
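The relationship between distance and volume is governed by OpenAL's distance attenuation model. By default, OpenAL uses an inverse-distance model, in which gain is 1.0 at a reference distance and falls away as the source recedes. A rough sketch of the calculation (the reference distance and rolloff factor shown here are illustrative defaults; daAudio may configure them differently):

def inverse_distance_gain(distance, reference_distance=1.0, rolloff=1.0):
    # Approximates OpenAL's default AL_INVERSE_DISTANCE_CLAMPED model:
    # gain is 1.0 at the reference distance and decreases beyond it.
    distance = max(distance, reference_distance)
    return reference_distance / (reference_distance + rolloff * (distance - reference_distance))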
Let’s take a look at the Omegalib script which was used to implement the example. You can find a copy of this script (mosquito.py) in the examples directory on the DAVM under /local/examples/audio:
import os

from cyclops import *
from daAudio import *
from daInput import *

if __name__ == '__main__':
    """
    This example demonstrates how to introduce audio into an omegalib visualisation. It shows
    how to use various features which are provided by the daAudio and daInput omegalib modules,
    including:

    - How to create an input cursor and use it to control the position of a sound source
    - How to load and play an audio effect in a continuous loop
    """
    path = os.path.dirname(__file__)
    if not path:
        path = os.getcwd()
    resources = os.path.join(path, 'resources')

    ui_context = UiContext()
    ui_context.add_cursor(SpaceNavControllerCursor('spacenav', 0, TriAxisCursorGeometryBuilder().set_position(0, 2, -4).build(), ui_context))

    getDefaultCamera().setControllerEnabled(False)

    light = Light.create()
    light.setEnabled(True)
    light.setPosition(Vector3(0, 0, 0))
    light.setColor(Color(1.0, 1.0, 1.0, 1.0))
    light.setAmbient(Color(0.1, 0.1, 0.1, 1.0))

    player = AudioPlayer()

    mosquito = AudioEmitter('mosquito')
    mosquito.set_position([0, 2, -4])
    mosquito.set_looping(True)
    mosquito.queue(load_wav_file(os.path.join(resources, 'mosquito.wav')))

    player.play(mosquito)
    player.update()

    def on_event():
        event = getEvent()
        ui_context.on_event(event)
        cursor = ui_context.get_cursor(event)
        if ControllerCursor.is_interested(event) and isinstance(cursor, ControllerCursor):
            position = cursor.get_position()
            mosquito.set_position([position.x, position.y, position.z])
            player.update()

    setEventFunction(on_event)
As in the previous examples, we start out by declaring a UiContext, which manages the custom cursor controller used in this scene. We then create a SpaceNavControllerCursor, with an associated geometry object so that the current position of the cursor is visible in the scene. Once again, we disable the navigation controls on the default camera, as we won’t be needing them in this example:
getDefaultCamera().setControllerEnabled(False)
Next, we create a light source to illuminate the cursor geometry and place it at the origin of the scene. This provides an additional position cue to the user – as the cursor moves further away from the origin, it will be less brightly lit:
light = Light.create()
light.setEnabled(True)
light.setPosition(Vector3(0, 0, 0))
light.setColor(Color(1.0, 1.0, 1.0, 1.0))
light.setAmbient(Color(0.1, 0.1, 0.1, 1.0))
Now, we’re ready to define our audio elements. The first step is to declare an AudioPlayer, which will manage playback of the emitters that make up the soundscape of the scene:
player = AudioPlayer()
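Although this example only needs one emitter, a single player can manage several at once; each emitter is configured, queued and handed to play() in the same way. A hypothetical scene with multiple sources might look something like this ('rain' and 'thunder' are placeholder clips, not part of the example's resources):

for name, position in [('rain', [0, 4, 0]), ('thunder', [-5, 3, 2])]:
    emitter = AudioEmitter(name)
    emitter.set_position(position)
    emitter.set_looping(True)
    emitter.queue(load_wav_file(os.path.join(resources, name + '.wav')))
    player.play(emitter)
player.update()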
In this example, we have a single AudioEmitter, which represents the sound of a buzzing mosquito. The emitter is configured with an initial starting position, and the audio stream is loaded from a file and queued up so that it is ready to play. We enable looping on the emitter so that once playback starts, the sound clip plays continuously, on repeat. Without this setting, the mosquito clip (which is only a couple of seconds long) would play once and then stop:
mosquito = AudioEmitter('mosquito')
mosquito.set_position([0, 2, -4])
mosquito.set_looping(True)
mosquito.queue(load_wav_file(os.path.join(resources, 'mosquito.wav')))
Once the emitter has been configured, we add it to the player and instruct the player to update its internal state. The update() method must be called on the player whenever a property changes on any of the associated audio emitters, so that it can apply the necessary modifications to the active soundscape:
player.play(mosquito)
player.update()
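A simple way to avoid forgetting this step is to pair each property change with the corresponding update() call in a small helper, along these lines (this helper is a sketch, not part of daAudio):

def move_emitter(emitter, position):
    # Re-position an emitter and immediately push the change to the soundscape.
    emitter.set_position(position)
    player.update()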
Lastly, we define an Omegalib event handler, which will be invoked whenever an event occurs within the Omegalib event loop. The handler checks whether the event was generated by our custom cursor. If so, the cursor's position is applied to the mosquito emitter, effectively re-positioning the audio source at the current location of the cursor. Since the emitter has been modified, we call update() on the player to notify it of the change:
def on_event():
    event = getEvent()
    ui_context.on_event(event)
    cursor = ui_context.get_cursor(event)
    if ControllerCursor.is_interested(event) and isinstance(cursor, ControllerCursor):
        position = cursor.get_position()
        mosquito.set_position([position.x, position.y, position.z])
        player.update()
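A spacenav generates a steady stream of motion events, so the handler above may call update() many times per second. If that proves costly, one variation is to re-position the emitter only once the cursor has moved an appreciable distance. A sketch of that approach (the 0.05 unit threshold is an arbitrary choice):

last_position = [0, 2, -4]                  # matches the emitter's starting position

def on_event():
    global last_position
    event = getEvent()
    ui_context.on_event(event)
    cursor = ui_context.get_cursor(event)
    if ControllerCursor.is_interested(event) and isinstance(cursor, ControllerCursor):
        p = cursor.get_position()
        position = [p.x, p.y, p.z]
        # Only touch the soundscape once the cursor has moved far enough.
        moved = sum((a - b) ** 2 for a, b in zip(position, last_position))
        if moved > 0.05 ** 2:
            mosquito.set_position(position)
            player.update()
            last_position = position

setEventFunction(on_event)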
Where to Next?
If you have a copy of the DAVM installed, you can try the example out for yourself by running the following commands in a terminal window (note: unless you modify the script to use a different input device, you’ll need to have access to a spacenav in order to move the cursor around the scene):
$ cd /local/examples/audio
$ orun mosquito.py
If you would like to dig deeper and learn more about how the daAudio Omegalib python module works, or contribute updates of your own, please refer to the repository on GitHub, where you will find all of the code.
You may like to try extending the example in this tutorial in order to add new functionality, or experiment further with the capabilities of OpenAL. Some suggested improvements include:
- Adding additional sound sources which move about the scene randomly, to simulate a larger swarm of noisy mosquitoes (a starting point is sketched after this list)
- Implementation of a “mute” feature, so that you can enable or disable audio playback at will
- Motion tracking support, to monitor the position of the user within the environment, and update the location of the AudioListener accordingly
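As a starting point for the first suggestion, the following sketch creates a handful of mosquito emitters and nudges their positions every frame using Omegalib's setUpdateFunction callback (the swarm size, bounds and step size are arbitrary choices; the callback signature assumes Omegalib's usual (frame, time, dt) convention):

import random

swarm = []                                  # (emitter, position) pairs
for i in range(5):
    position = [random.uniform(-4, 4), random.uniform(0, 4), random.uniform(-4, 4)]
    emitter = AudioEmitter('mosquito-%d' % i)
    emitter.set_position(position)
    emitter.set_looping(True)
    emitter.queue(load_wav_file(os.path.join(resources, 'mosquito.wav')))
    player.play(emitter)
    swarm.append((emitter, position))
player.update()

def on_update(frame, time, dt):
    # Random walk: move each emitter by a small random step each frame.
    for emitter, position in swarm:
        for axis in range(3):
            position[axis] += random.uniform(-0.5, 0.5) * dt
        emitter.set_position(position)
    player.update()

setUpdateFunction(on_update)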