Many features in Gecko require some kind of buffering; for example, the camera, media playback, WebRTC, and animation. But there is no generic buffering framework in Gecko: every module implements its own buffering. The pro is better performance, since each implementation is fully customized for its task. The cons are that the code is hard to adapt, study, analyze, and port. So we need a buffering framework that is easy to customize for performance, adapts to different configurations and applications, is easy to analyze, ports to various platforms, and provides a common language among modules.



Several factors should be considered for buffering/queuing.

  • queue size
  • queue management
  • queuing discipline
  • buffer synchronization
    • ack
    • nak
  • threading
  • servers
    • the service function
    • the number of servers
  • Buffer management

Data and APIs

enum QueueManagementType {
    QMT_DEFAULT,              // Never drop buffers
    QMT_TAILDROP,             // Tail drop if this queue is full
    QMT_RED                   // Random early drop
};
enum QueueDisciplineType { QDT_FIFO, QDT_LIFO, QDT_PRIO };
enum BufferSyncType { BST_NONE, BST_ACK, BST_NAK };
enum QueueThreadingType {
    QTT_NONE,                 // This queue does not create any thread
    QTT_THREAD_BEGIN,         // This queue is the beginning of a new thread (incoming buffers are passed from other threads)
    QTT_PROC_BEGIN            // This queue is the beginning of a process (incoming buffers are passed from other processes; for example, a content process)
};
enum BufferManagementType {
    BMT_FORWARD,              // This queue just forwards buffers without any change.
    BMT_REUSE,                // This queue reuses (and modifies) buffers from upstream.
    BMT_UPSTREAM,             // This queue requests new buffers from an upstream queue.
    BMT_DOWNSTREAM,           // This queue requests new buffers from a downstream queue.
    BMT_PROVIDER              // This queue provides new buffers.
};
class BufferProcessor {
public:
    virtual int process(Buffer *buf) = 0;
    virtual int recvAck(Buffer *buf) { return -1; }
    virtual int recvNak(Buffer *buf) { return -1; }
    // Receive a new buffer for a call to createBuffer(), identified by the cookie.
    virtual void recvNewBuffer(int cookie, Buffer *buf) {}

    void doneBuffer(Buffer *buf);  // The processor has finished with this buffer.

    Queue *queue;
};
class BufferCreator {
public:
    // padno is the pad where the buffer is supposed to go, or -1 for no specification.
    virtual int createBuffer(int payload_size, int padno) = 0; // returns a cookie
    virtual void destroyBuffer(Buffer *buf) = 0;

    Queue *queue;
};
typedef BufferProcessor *(*BufferProcessorFactory)(Queue *queue);
typedef BufferCreator *(*BufferCreatorFactory)(Queue *queue);

struct QueueConfig {
    const char *qname;
    int qsize;
    QueueManagementType qman;
    QueueDisciplineType qdisc;
    BufferSyncType bsync;
    QueueThreadingType qthreading;
    BufferManagementType bman;   // this type must match the behavior of the service function
    BufferCreatorFactory *bcreatorfactory;  // valid only for bman == BMT_PROVIDER
    BufferProcessorFactory *bprocessorfactory;
    int numOutPads;  // Outgoing pads

    Queue *createQueue();
};

struct QueueHandlers {
    QueueManager      *qman;        // responsible for enqueue
    QueueDiscipline   *qdisc;       // responsible for dequeue
    BufferCreator     *bcreator;    // creates new buffers for the queue
    BufferProcessor   *bprocessor;  // processes incoming buffers
};

struct Queue {
    QueueConfig *config;
    QueueHandlers *handlers;
    vector<OutgoingPad*> pads;

    list<Buffer*> buffers;

    int connectTo(Queue *offstream);
    int sendTo(int padNo, Buffer *outbuf);

    Buffer *peek();
    Buffer *remove();
    int add(Buffer *);

    int isBusy(); // Is the BufferProcessor busy?  If so, it cannot consume any new buffer immediately.
    void busy();  // mark this queue busy
    void nobusy(); // mark this queue not busy
};

typedef int (*BufferReleaseFunc)(Buffer *buf);

struct Buffer {
    BufferCreator *creator;
    Queue *srcQueue;  // the queue which asked the creator to create this buffer.
    BufferReleaseFunc releasefunc;

    BufferPayloadType bufferType;
    int payloadSize;
    void *payload;    // payload can be separated from the Buffer itself, so Buffer can wrap GL buffers, etc.
};
struct QueueGraph {
    vector<Queue*> queues;

    /* Start running the QueueGraph.
     * The QueueGraph can be resumed from the paused state by calling start() again.
     */
    void start();
    void pause();
    void stop();
};

/* QueueGraphConfig is a template for QueueGraph instances.
 * createQueueGraph() is responsible for the creation of QueueGraphs.
 */
struct QueueGraphConfig {
    typedef pair<int,int> connection; // pair of indices of queues, from source to target.
    vector<QueueConfig*> queues;
    vector<connection> connections;

    QueueGraph *createQueueGraph();
};

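As a sketch of how QueueGraphConfig's index-pair connections could describe a pipeline, the following self-contained example uses simplified stand-ins (MiniQueueConfig, MiniGraphConfig, and the wire() helper are illustrative names, not part of the proposed API) to wire a small graph:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Simplified stand-ins for QueueConfig/QueueGraphConfig; the real structs
// also carry management, discipline, sync, and threading fields.
struct MiniQueueConfig {
    std::string qname;
    int qsize;
};

struct MiniGraphConfig {
    typedef std::pair<int, int> connection;  // source index -> target index
    std::vector<MiniQueueConfig> queues;
    std::vector<connection> connections;

    // "Instantiate" the template: return, per queue, the indices of its
    // downstream queues (what connectTo()/sendTo() would operate on).
    std::vector<std::vector<int>> wire() const {
        std::vector<std::vector<int>> pads(queues.size());
        for (const connection &c : connections)
            pads[c.first].push_back(c.second);
        return pads;
    }
};

// Example: camera -> encoder -> muxer, with the encoder also feeding a
// preview queue (one source queue with two outgoing pads).
MiniGraphConfig cameraPipeline() {
    MiniGraphConfig cfg;
    cfg.queues = {{"camera", 4}, {"encoder", 8}, {"muxer", 8}, {"preview", 2}};
    cfg.connections = {{0, 1}, {1, 2}, {1, 3}};
    return cfg;
}
```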
Life of Buffers

Buffers are always(?) created by a BufferCreator at the request of the associated queue. Once a buffer is created, it is supposed to be sent off-stream. The life of a buffer is managed by the queues the buffer passes through. The processors of all queues should call BufferProcessor::doneBuffer() for every buffer to indicate that they no longer need it. Buffers are freed when no one needs them.
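The doneBuffer() lifetime rule above amounts to reference counting: each queue a buffer passes through holds a reference, and the buffer is released when the last processor is done with it. A minimal sketch, with illustrative names (MiniBuffer, addRef) rather than the real API:

```cpp
#include <cassert>

// Minimal sketch of the doneBuffer() lifetime rule.
struct MiniBuffer {
    int refcnt = 0;
    bool released = false;
};

// Each queue the buffer passes through takes a reference.
void addRef(MiniBuffer *buf) { ++buf->refcnt; }

// The processor has finished with this buffer; free it when no one needs it.
void doneBuffer(MiniBuffer *buf) {
    if (--buf->refcnt == 0)
        buf->released = true;  // the real code would invoke releasefunc here
}
```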

Configuration of Queues

Create queues and connect them together. With the information in QueueConfig, it is possible to separate pipeline optimization from buffer processing; e.g., the service function in QueueConfig. Service functions are responsible for processing the incoming buffers of a queue and generating outgoing buffers; for example, a decoder or a mixer.

Pipeline optimizations are about buffer reuse, buffer passing, synchronization, etc. They should be extracted from the code of the processors. By implementing various types of buffer management, synchronization, and buffer-passing mechanisms, developers can try different combinations of mechanisms at configuration time to get better performance; for example, RED or tail drop, a separate thread or no new thread, GL buffers or normal buffers. Because these features are decided during the configuration stage of a stream pipeline, different combinations can be tried very easily.
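As one sketch of such a configurable mechanism, a tail-drop queue manager (QMT_TAILDROP) can live entirely in the enqueue path, so swapping it (e.g. for RED) never touches the processor code. The types below (TailDropQueue, integer buffer ids) are simplifications for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Hedged sketch of a tail-drop QueueManager: the drop policy is part of the
// queue configuration, not the buffer processor.
struct TailDropQueue {
    std::size_t qsize = 0;   // configured capacity (QueueConfig::qsize)
    std::deque<int> buffers; // integer ids stand in for Buffer*
    int dropped = 0;

    // Returns 0 on success, -1 if the incoming buffer was tail-dropped.
    int add(int buf) {
        if (buffers.size() >= qsize) {
            ++dropped;       // queue full: drop the incoming (tail) buffer
            return -1;
        }
        buffers.push_back(buf);
        return 0;
    }
};
```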


Scheduling

  • Based on queue size:
    • Predecessors of empty queues must go first.
    • Predecessors of nearly empty queues go second.
    • Predecessors of nearly full queues go third.
    • Predecessors of full queues should not go.
  • Every thread owns a scheduler that schedules the queues in that thread.
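The queue-size rule above can be sketched as ranking runnable queues by the fill level of their downstream queue, skipping predecessors of full queues. SchedEntry and schedule() are illustrative names under that assumption:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// One schedulable predecessor queue and the state of its downstream queue.
struct SchedEntry {
    int queueIndex;      // the predecessor queue to run
    int downstreamFill;  // buffers currently queued downstream
    int downstreamSize;  // downstream capacity
};

// Returns queue indices in run order: emptier downstream queues first;
// predecessors of full downstream queues are not scheduled at all.
std::vector<int> schedule(std::vector<SchedEntry> entries) {
    entries.erase(std::remove_if(entries.begin(), entries.end(),
                      [](const SchedEntry &e) {
                          return e.downstreamFill >= e.downstreamSize;
                      }),
                  entries.end());
    std::sort(entries.begin(), entries.end(),
              [](const SchedEntry &a, const SchedEntry &b) {
                  // Compare fill ratios without division:
                  // a.fill/a.size < b.fill/b.size
                  return a.downstreamFill * b.downstreamSize <
                         b.downstreamFill * a.downstreamSize;
              });
    std::vector<int> order;
    for (const SchedEntry &e : entries) order.push_back(e.queueIndex);
    return order;
}
```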

Use cases


  • Statistics
    • Memory size / queue size
    • Export statistics so developers can identify memory or performance bottlenecks
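The statistics export could be as simple as a per-queue snapshot that a developer tool dumps and scans. QueueStats and findBottleneck() below are hypothetical names sketching that idea:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical per-queue statistics snapshot for the export use case.
struct QueueStats {
    std::string qname;   // QueueConfig::qname
    int queuedBuffers;   // current queue occupancy
    int payloadBytes;    // memory held by queued payloads
};

// The queue holding the most payload memory is the likely bottleneck.
const QueueStats *findBottleneck(const std::vector<QueueStats> &stats) {
    const QueueStats *worst = nullptr;
    for (const QueueStats &s : stats)
        if (!worst || s.payloadBytes > worst->payloadBytes)
            worst = &s;
    return worst;
}
```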