===== Reading Audio =====
Audio data is made available in real-time via an event-based API. As the audio is played, and therefore decoded, each frame is passed to content scripts for processing before being written to the audio layer. Playing, pausing, and stopping the audio all affect the streaming of this raw audio data as well.
<code>onaudiowritten="callback(event);"</code>
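To make the flow concrete, here is a minimal sketch of a page that processes each frame as it is decoded. It assumes the event exposes the decoded samples on a mozFrameBuffer property as an array of floats in the range [-1, 1]; only the onaudiowritten hook is confirmed by this section, so treat that property name as illustrative.

<pre>
<audio src="song.ogg" onaudiowritten="audioWritten(event);" controls></audio>

<script>
  // Called once per frame as the audio is decoded; pausing or
  // stopping playback stops these events as well.
  function audioWritten(event) {
    // Assumed: the frame's raw samples as floats in [-1, 1].
    var samples = event.mozFrameBuffer;
    var peak = 0;
    for (var i = 0; i < samples.length; i++) {
      peak = Math.max(peak, Math.abs(samples[i]));
    }
    document.title = "peak: " + peak.toFixed(3);
  }
</script>
</pre>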
===== Getting FFT Spectrum =====
Most data visualizations or other uses of raw audio data begin by calculating an FFT. A pre-calculated FFT is available for each frame of audio decoded.
<code>mozSpectrum</code>
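As a sketch of how the pre-calculated FFT might drive a visualization, the example below reads mozSpectrum from the same per-frame event and draws one bar per bin. That mozSpectrum hangs off the event, and that its values are magnitudes roughly in [0, 1], are assumptions made for illustration; this section only names the attribute.

<pre>
<canvas id="spectrum" width="512" height="100"></canvas>
<audio src="song.ogg" onaudiowritten="drawSpectrum(event);" controls></audio>

<script>
  var canvas = document.getElementById("spectrum");
  var ctx = canvas.getContext("2d");

  // Redraw the spectrum for every decoded frame.
  function drawSpectrum(event) {
    var fft = event.mozSpectrum;  // assumed: per-frame FFT magnitudes
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    for (var i = 0; i < fft.length && i < canvas.width; i++) {
      // Assumed magnitude scale of [0, 1]; adjust if the real
      // values use a different range.
      var h = Math.min(fft[i], 1) * canvas.height;
      ctx.fillRect(i, canvas.height - h, 1, h);
    }
  }
</script>
</pre>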
===== Writing Audio =====
It is also possible to set up an audio element for raw writing from script (i.e., without a src attribute). Content scripts can specify the audio stream's characteristics, then write audio frames using the following methods.
<code>mozSetup(channels, sampleRate, volume)</code>
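A minimal sketch of writing script-generated audio follows. It assumes a companion write method (called mozWriteAudio below) that accepts an array of samples; this section names only mozSetup, so the write call is illustrative rather than confirmed.

<pre>
<audio id="out"></audio> <!-- no src attribute: reserved for raw writing -->

<script>
  var audio = document.getElementById("out");

  // Describe the stream: 1 channel, 44100 Hz, full volume.
  audio.mozSetup(1, 44100, 1);

  // One second of a 440 Hz sine tone as floats in [-1, 1].
  var samples = [];
  for (var i = 0; i < 44100; i++) {
    samples[i] = Math.sin(2 * Math.PI * 440 * i / 44100);
  }

  // Assumed companion method for writing raw frames.
  audio.mozWriteAudio(samples);
</script>
</pre>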