Media/WebRTC Audio Perf

#PulseAudio's pactl tool is used to find the right sink that plays out the audio produced by the remote Peer Connection.
#PulseAudio's parec tool records mono-channel audio played out at that sink in signed 16-bit little-endian (s16le) format at 16000 samples/sec.
#The output from the parec tool is piped into the SoX tool to generate a .WAV version of the recorded audio and to trim silence at the beginning and end of the recorded audio file. A scripted version of these steps is sketched below the command. <br> <br>
<code>
Command:
  parec -r -d <recording device> --format=s16le -c 1 -r 16000 \
    | sox -t raw -r 16000 -sLb 16 -c 1 - <output audio file> trim 0 <record-duration>
</code>
<br>
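Below is a rough Python sketch of how these three steps might be scripted end to end. It is only a sketch: the sink-name hint, the output file name, and the use of the sink's ".monitor" source as the parec recording device are assumptions, while the parec and sox arguments are taken verbatim from the command above.
<code>
# Sketch only: automates the pactl / parec / sox steps described above.
import subprocess

def find_monitor_source(name_hint):
    """Return the monitor source of the first sink whose name contains name_hint."""
    sinks = subprocess.check_output(["pactl", "list", "short", "sinks"], text=True)
    for line in sinks.splitlines():
        fields = line.split()                # index, name, module, sample spec, state
        if len(fields) > 1 and name_hint in fields[1]:
            return fields[1] + ".monitor"    # source carrying the sink's play-out
    raise RuntimeError("no sink matching %r" % name_hint)

def record_wav(recording_device, out_wav, duration_s):
    """Run the parec | sox pipeline shown in the command above."""
    parec = subprocess.Popen(
        ["parec", "-r", "-d", recording_device,
         "--format=s16le", "-c", "1", "-r", "16000"],
        stdout=subprocess.PIPE)
    sox = subprocess.Popen(
        ["sox", "-t", "raw", "-r", "16000", "-sLb", "16", "-c", "1", "-",
         out_wav, "trim", "0", str(duration_s)],
        stdin=parec.stdout)
    parec.stdout.close()                     # let parec get SIGPIPE once sox finishes
    sox.wait()
    parec.terminate()

if __name__ == "__main__":
    sink_hint = "auto_null"                  # placeholder: substring of the target sink's name
    record_wav(find_monitor_source(sink_hint), "remote_peer.wav", 30)
</code>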


=== TODO Feature List ===
These are near-term items which, if implemented, would improve the framework:
#Support different audio formats and lengths
#Provide tool support across platforms; currently only Linux is supported
#Discuss the results generated and integration with DataZilla and GraphServer
#Allow configuration options to specify sample rates, number of channels, and encoding (a sketch of possible options follows this list)
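Purely as an illustration of the last item, the options such a configuration layer might expose could look like the sketch below; the option names and defaults are assumptions and do not describe an existing interface.
<code>
import argparse

def parse_args(argv=None):
    # Illustrative only: nothing here exists in the framework yet.
    parser = argparse.ArgumentParser(description="WebRTC audio perf recording options")
    parser.add_argument("--sample-rate", type=int, default=16000,
                        help="recording sample rate in Hz")
    parser.add_argument("--channels", type=int, default=1,
                        help="number of recorded channels")
    parser.add_argument("--encoding", default="s16le",
                        help="sample encoding handed to parec/sox")
    parser.add_argument("--duration", type=float, default=30.0,
                        help="recording length in seconds")
    return parser.parse_args(argv)
</code>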


Analysis of tools and techniques for measuring WebRTC Audio Performance.
== Open Issues ==
This page is under construction. Please come back later for more complete information.
==Background==
=== Chrome Audio Perf ===
The following are the specific tests in Chrome that attempt to measure audio performance.
====Audio Processing Per 10ms Analysis====
This test instruments WebRTC's AudioProcessing Module (APM) under various configurations to measure mic-to-render audio processing time for 10 ms audio frames.
The APM can be configured along the following dimensions: Sample Rate, Input and Output Channels, Reverse Channels, Echo Cancellation, Gain Control, Noise Suppression, Voice Activity Detection, Level Metrics, Delay, Drift Compensation, and Echo Metrics.
Logic:
  For every Input AudioFrame
    time ProcessStream()
    also apply component configuration
  For every Output AudioFrame
    time AnalyzeReverseStream()
 
Execution time is calculated as the average over all the 10 ms frames processed and analyzed.
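A minimal Python sketch of this measurement pattern is shown below; process_stream is a stand-in for the APM's ProcessStream()/AnalyzeReverseStream() calls and is not the real WebRTC API.
<code>
import time

def average_execution_time_ms(frames, process_stream):
    """Time process_stream() over every 10 ms frame and return the mean cost in milliseconds."""
    total = 0.0
    for frame in frames:
        start = time.perf_counter()
        process_stream(frame)                # one 10 ms audio frame
        total += time.perf_counter() - start
    return (total / len(frames)) * 1000.0
</code>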
 
====Audio Quality Voice Engine - E2E====
Code: run_audio_test.py (third_party/webrtc/tools/e2e_quality)

This test uses PulseAudio to set up virtual audio devices, followed by a comparison tool to measure the resulting audio quality. It is based on a VoiceEngine loopback call.
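One way the virtual-device part can be done, sketched here under the assumption that a throwaway sink is enough, is PulseAudio's module-null-sink; the sink name is arbitrary and the actual run_audio_test.py setup may differ.
<code>
import subprocess

def load_null_sink(name="e2e_quality_sink"):
    """Create a virtual sink; pactl prints the module index, which is needed to remove it."""
    module_id = subprocess.check_output(
        ["pactl", "load-module", "module-null-sink", "sink_name=" + name],
        text=True).strip()
    return module_id

def unload_module(module_id):
    """Tear the virtual sink down again after the test."""
    subprocess.check_call(["pactl", "unload-module", module_id])
</code>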
 
 
====WebRTC Recording Time====
Code: webrtc_audio_device_unittest.cc

This test uses the VoEMediaProcess::Process() callback as an interceptor for audio frames on the recording path, in order to time the recording setup.
 
====WebRTC Playout Setup Time====
This test uses the VoEMediaProcess::Process() callback as an interceptor for audio frames before playback, in order to time the playout setup.
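The interceptor idea behind both timing tests can be sketched as follows; this is illustrative Python, not the C++ tests themselves, and on_frame() merely plays the role of VoEMediaProcess::Process(). Setup time is read as the gap between starting the pipeline and the first frame reaching the interceptor.
<code>
import time

class SetupTimer:
    """Measure the time from pipeline start until the first intercepted audio frame."""

    def __init__(self):
        self.started_at = time.perf_counter()
        self.first_frame_at = None

    def on_frame(self, frame):
        # Called for every intercepted frame; only the first arrival matters here.
        if self.first_frame_at is None:
            self.first_frame_at = time.perf_counter()

    def setup_time_ms(self):
        return (self.first_frame_at - self.started_at) * 1000.0
</code>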
 
====WebRTC Loopback With/Without Signal Processing====
Both tests use a loopback call, with and without the APM enabled; the loopback runs for 100 AudioFrames.
 
== Proposal==
===Using Talos Framework===
==Open Questions ==