This document focuses on B2G only; cross-platform issues are not covered.
- WebRTC B2G META issue
- bug 750011 - [meta] Support B2G for WebRTC
- WebRTC workweek minutes (3rd Jun ~ 7th Jun)
- Related wikis
In the bug tables, item descriptions in red indicate blocker issues.
On B2G, we focus on:
- Sandboxing: move the UDP socket into the chrome process on Firefox OS.
- Packet filtering in the chrome process.
- Network interface enumeration and prioritization: WLAN & 3G connection transitions.
- SDP parsing for video/audio frame parameters, e.g. maximum frame rate or maximum frame size.
- Requesting video/audio frame parameters based on HW capability.
- Error reporting: the application can receive ICE error/failure callbacks.
|bug 825708||We should use interface properties to determine ICE priorities||FIXED||Gecko 26||Patrick|
|bug 869869||e10s for UDP socket||Open||Gecko 26||SC|
|bug 870660||Packet filter for UDP e10s||Open||Gecko 26||Patrick|
|bug 881761||NSS for WebRTC in content process||Open||Gecko 26||Patrick|
|bug 881982||ICE: handle dynamic network interface change||Open||Gecko 26||Patrick|
|bug 884196||ICE: report error on network interface change||Open||Gecko 26||Shian-Yow|
|bug 881935||Support negotiation of video resolution||FIXED||Gecko 26||Shian-Yow|
Performance-wise, we need to do more optimization on B2G because of its weaker hardware: Firefox OS runs on ARM instead of x86, so we need to leverage the BSP/GPU to boost overall WebRTC performance. The options for optimization on B2G are listed below.
- H.264/AAC codec support: create a prototype first to evaluate the performance gain from using the HW codec.
- Pipeline optimization: make sure there are no useless or redundant operations in the encode/decode pipeline.
- Example: bug 873003 - duplicate video frames be process in encode thread.
- Use a reasonable timer interval in the Process thread on B2G: less statistics collection, no NACK.
- Test Opus with lower complexity and decide whether B2G uses Opus or G.711 as the default audio codec.
- H.264 coding module
- H.264 encoder/decoder with HW codec
- H.264 RTP transport.
|bug 884365||Audio realtime input clock mismatch||FIXED||Gecko 26||Randell|
|bug 861050||WebRTC performance issue on B2G||Open||Gecko 26||Steven|
|bug 896391||memcpy from camera preview's GraphicBuffer is slow||Open|| ||Steven|
|bug 877954||Adapt video encode resolution & framerate according to available bandwidth and CPU use||FIXED||Gecko 28||gpascutto|
|bug 853356||Display camera/microphone permission acquisition prompt by ContentPermissionRequest||FIXED||Gecko 26||Alfredo|
|bug 898949||[B2G getUserMedia] Display front/back camera list on permission prompt||FIXED||Gecko 26||S.C|
|bug 913896||Display audio (microphone) permission in permission acquisition prompt||FIXED||Gecko 26||Fred Lin|
Media Resource Management
The media resources on B2G (H/W codec, camera, and microphone) are limited and are accessed by multiple processes. We need a centralized manager to arbitrate how these resources are dispatched. We also need to define the media behavior when a process holding a media resource switches to the background.
- H/W codec management
- camera resource management
- microphone resource management
- user stories of media under multiple processes
WebRTC Threading Model
WebRTC is composed of a capture module, a coding module, and a streaming protocol module. To address performance bottlenecks, we need to be familiar with the WebRTC threading model, including the role of each thread and the relationships between threads.
Here are the threads in WebRTC (signaling threads are excluded):
- (MediaStreamGraph) Media stream graph run thread: audio/video coding. (MediaStreamGraphImpl::RunThread in MediaStreamGraph.cpp)
- (Network) Socket transport service: sends/receives packets. (Entry point of the user-space callback function??)
- (Capture) Camera capture thread: on FFOS, video frames are delivered through MediaEngineWebRTCVideoSource::OnNewFrame and the source is the Camera API. On other platforms the images come from MediaEngineWebRTCVideoSource::DeliverFrame, the callback interface of GIPS, and the source is implemented in GIPS. The MSG thread then keeps pulling the latest frames via MediaEngineWebRTCVideoSource::NotifyPull.
- (Capture) Audio capture thread: receives audio frames from the microphone. All audio streams are input through MediaEngineWebRTCAudioSource::Process, where the audio is saved to the media track. This mechanism may change since it has a clock drift problem (bug 884365).
- (Process) Process thread (worker threads in GIPS): handles many other tasks. The Process thread has a task queue into which clients can inject tasks.
In a nutshell, we can divide these threads into three categories.
- Encode path, starting from capture (getUserMedia):
  - MediaPipelineListener listens for update notifications (NotifyQueueTrackChanges) from the MSG run thread, and:
    - audio chunks are encoded on the MSG run thread;
    - video chunks are encoded on another thread (the ViECapture thread).
  - Encoded media data is handed to the socket transport service thread and sent to the network.
- TODO (Steven): please update the whole story from network/jitter buffer to renderer.
Process dispatcher threads
The Process thread is a dispatcher thread: a client registers a Module with a ProcessThread, and the ProcessThread calls back into Module::Process at a specific interval (>= 100 ms).
The implementation of ProcessThread is located in process_thread_impl.cc.
Here are modules that implement the Process function:
call_stats.cc, vie_remb.cc, vie_sync_module.cc, monitor_module.cc, audio_device_impl.cc, paced_sender.cc, video_capture_impl.cc, audio_conference_mixer_impl.cc, rtp_rtcp_impl.cc, audio_coding_module_impl.cc, video_coding_impl.cc
- RTCP: NACK/statistics
- Integrate WebRTC with MediaRecorder.
- How do we know the current resolution captured by the camera?
Use gdb and break at MediaEngineWebRTCVideoSource::OnNewFrame()
- Real-time communication with WebRTC: Google I/O 2013. http://www.youtube.com/watch?feature=player_embedded&v=p2HzZkd2A40