== API Tutorial ==
This API extends the HTMLMediaElement and HTMLAudioElement (e.g., affecting <video> and <audio>), and implements the following basic API for reading and writing raw audio data:
===== Reading Audio =====
Audio data is made available via an event-based API. As the audio is played, and therefore decoded, sample data is passed to content scripts in a framebuffer for processing after becoming available to the audio layer; hence the name, '''MozAudioAvailable'''. These samples may or may not have been played yet at the time of the event. The audio samples returned in the event are raw, and have not been adjusted for mute/volume settings on the media element. Playing, pausing, and seeking the audio also affect the streaming of this raw audio data.
Users of this API can register two callbacks on the <audio> or <video> element in order to consume this data:
<pre>
<audio src="song.ogg"
       onloadedmetadata="audioInfo();">
</audio>
</pre>
The '''LoadedMetadata''' event exposes three new attributes on the media element:

* mozChannels
* mozSampleRate
* mozFrameBufferLength
Prior to the '''LoadedMetadata''' event, accessing these attributes will cause an exception to be thrown, indicating that they are not known, or there is no audio. These attributes indicate the '''number of channels''', audio '''sample rate per second''', and the '''default size of the framebuffer''' that will be used in '''MozAudioAvailable''' events. This event is fired once as the media resource is first loaded, and is useful for interpreting or writing the audio data.
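For example, the audioInfo() handler registered in the markup above might simply capture these values for later use. This is a minimal sketch; the global variable names are illustrative, not part of the API:

<pre>
var channels, rate, frameBufferLength;

function audioInfo() {
  var audio = document.querySelector('audio');

  // These attributes are only readable once LoadedMetadata has fired.
  channels          = audio.mozChannels;
  rate              = audio.mozSampleRate;
  frameBufferLength = audio.mozFrameBufferLength;
}
</pre>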
The '''MozAudioAvailable''' event provides two pieces of data. The first is a framebuffer (i.e., an array) containing decoded audio sample data (i.e., floats). The second is the time for these samples measured from the start in seconds. Web developers consume this event by registering an event listener in script like so:
<pre>
<audio id="audio" src="song.ogg"></audio>
<script>
  var audio = document.getElementById("audio");
  audio.addEventListener('MozAudioAvailable', someFunction, false);
</script>
</pre>
An audio or video element can also be created with script outside the DOM:
<pre>
var audio = new Audio();
audio.src = "song.ogg";
audio.addEventListener('MozAudioAvailable', someFunction, false);
audio.play();
</pre>
The following is an example of how both events might be used:
<pre>
function audioAvailable(event) {
  var samples = event.frameBuffer;
  var time = event.time;
  // frameBufferLength is captured from mozFrameBufferLength (see above).
  for (var i = 0; i < frameBufferLength; i++) {
    // Do something with the audio data as it is played.
  }
}
</pre>
Since the '''MozAudioAvailable''' event and the '''mozWriteAudio()''' method both use '''Float32Array''', it is possible to take the output of one audio stream and pass it directly (or process first and then pass) to a second:
<pre>
<audio id="a1"
       src="song.ogg"
       onloadedmetadata="loadedMetadata();"
       controls>
</audio>
<script>
var a1 = document.getElementById('a1'),
    a2 = new Audio(),
    buffers = [];

function loadedMetadata() {
  // Mute a1's own output, and set up a2 to match its format.
  a1.volume = 0;
  a2.mozSetup(a1.mozChannels, a1.mozSampleRate);
}

function audioAvailable(event) {
  var frameBuffer = event.frameBuffer;
  writeAudio(frameBuffer);
}
a1.addEventListener('MozAudioAvailable', audioAvailable, false);
function writeAudio(audio) {
  buffers.push(audio);

  // If there's buffered data, write that first.
  while (buffers.length > 0) {
    var buffer  = buffers.shift();
    var written = a2.mozWriteAudio(buffer);
    // If all the data wasn't written, keep what remains for later.
    if (written < buffer.length) {
      buffers.unshift(buffer.slice(written));
      break;
    }
  }
}
</script>
</pre>
Audio data written using the '''mozWriteAudio()''' method needs to be written at a regular interval in equal portions, in order to keep a little ahead of the current sample offset (the current sample offset of the hardware can be obtained with '''mozCurrentSampleOffset()'''), where a little means something on the order of 500ms of samples. For example, if working with 2 channels at 44100 samples per second, a writing interval of 100ms, and a pre-buffer equal to 500ms, one would write an array of (2 * 44100 / 10) = 8820 samples per interval, and stay ahead up to a total of (currentSampleOffset + 2 * 44100 / 2) = currentSampleOffset + 44100 samples.
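As a rough sketch of that schedule (the element and variable names here are illustrative assumptions, not part of the API), a timer can top up the hardware buffer on each tick:

<pre>
var audioOut = new Audio();
audioOut.mozSetup(2, 44100);           // 2 channels at 44100 samples/second

var samplesWritten = 0;
var prebufferSize  = 2 * 44100 / 2;    // 500ms of samples: 44100
var portionSize    = 2 * 44100 / 10;   // 100ms of samples: 8820

setInterval(function () {
  // Keep roughly 500ms of samples ahead of the hardware's offset.
  while (samplesWritten - audioOut.mozCurrentSampleOffset() < prebufferSize) {
    var samples = new Float32Array(portionSize);
    // ... fill samples with generated or processed audio here ...
    var written = audioOut.mozWriteAudio(samples);
    samplesWritten += written;
    if (written < samples.length) {
      break;  // the hardware buffer is full; try again on the next tick
    }
  }
}, 100);
</pre>

Writing less than a full portion is not an error; '''mozWriteAudio()''' reports how many samples it accepted, so the loop can simply retry on the next tick.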
===== Complete Example: Creating a Web Based Tone Generator =====