
Claus Reinke wrote:
The thing that always confused me about OpenAL (www.openal.org) is the following part of the spec: [...]
The basic ideas behind OpenAL are quite easy and centered around three concepts:

 * Buffers containing (normally mono) audio data, which can be shared by
   sources (see below). This is where you put e.g. your WAV data.

 * Sources, which emit (perhaps directed) sound at a point in 3D space,
   playing the queue of buffers attached to them. These correspond to the
   sound-emitting entities in your virtual 3D world, e.g. enemies shooting
   at you. :-)

 * A single listener (= you), positioned at a point in 3D space.

OpenAL renders the sound to your audio device (stereo headphones, 5.1
speakers, etc.) via a lower device layer (ALSA, DirectSound, etc.). It
doesn't handle MIDI, but there are extensions for e.g. microphone input.
(A rough usage sketch is in the P.S. below.)

The "spirit" of OpenAL is very much like OpenGL, and the two fit quite
nicely together, although you can use each library on its own. It works on
all major platforms (WinDoze, Linux, MacOS X, ...) and is an
"industrial-strength" library, meaning that even people who make their
living from programming use it, see e.g. Unreal Tournament 2004 and quite a
few other commercial games.

A few months ago I started an OpenAL binding for Haskell, see the fptools
repository at fptools/libraries/OpenAL. The basic functionality is there,
but the binding is not complete yet. The main reason for this is that I
couldn't get my 4.1 speakers working under my previous SuSE Linux, but I've
just upgraded, so hopefully things have improved and the binding can be
finished. Apart from that, having a binding for SDL would be nice, too, and
somebody is already working on one, IIRC.

Cheers,
   S.
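P.S. For the curious, a rough and untested sketch of how using the binding
might look. The names are only illustrative guesses in the HOpenGL
StateVar style (with Vertex3 and ($=) shared with the OpenGL binding), so
they may well differ from what ends up in fptools:

   import Sound.OpenAL

   -- Play a (mono) buffer, assumed to be already filled with WAV data,
   -- as a shot fired slightly to the right of the listener.
   playShot :: Buffer -> IO ()
   playShot shotWav = do
      [src] <- genObjectNames 1           -- a new sound-emitting source
      sourcePosition src $= Vertex3 1 0 (-2)
      queueBuffers src [shotWav]          -- attach the shared buffer to this source
      listenerPosition $= Vertex3 0 0 0   -- the single listener, i.e. you
      play [src]                          -- OpenAL renders it to the audio device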