Remote Debugging Protocol Stream Transport

From MozillaWiki

This transport is implemented in Firefox, and is the basis of Firefox's built-in JavaScript debugger. You can use the GitHub DebuggerDocs repo to draft and discuss revisions.

The Mozilla debugging protocol is specified in terms of packets exchanged between a client and server, where each packet is either a JSON text or a block of bytes (a "bulk data" packet). The protocol does not specify any particular mechanism for carrying packets from one party to the other. Implementations may choose whatever transport they like, as long as packets arrive reliably, undamaged, and in order.

This page describes the Mozilla Remote Debugging Protocol Stream Transport, a transport layer suitable for carrying Mozilla debugging protocol packets over a reliable, ordered byte stream, like a TCP/IP stream or a pipe. Debugger user interfaces can use it to exchange packets with debuggees in other processes (say, for debugging Firefox chrome code), or on other machines (say, for debugging Firefox OS apps running on a phone or tablet).

(The Stream Transport is not the only transport used by Mozilla. For example, when using Firefox's built-in script debugger, the client and server are in the same process, so for efficiency they use a transport that simply exchanges the JavaScript objects corresponding to the JSON texts specified by the protocol, and avoid serializing packets altogether.)


Once the underlying byte stream is established, transport participants may immediately begin sending packets, using the forms described here. The transport requires no initial handshake or setup, and no shutdown exchange: the first bytes on the stream in each direction are those of the first packet, if any; the last bytes on the stream in each direction are the final bytes of the last packet sent, if any.

The transport defines two types of packets: JSON and bulk data.

JSON Packets

A JSON packet has the form:

 length:JSON

where length is a series of decimal ASCII digits, JSON is a well-formed JSON text (as defined in RFC 4627) encoded in UTF-8, and length, interpreted as a number, is the length of JSON in bytes.
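As a sketch of the framing described above, the following Python functions encode and decode JSON packets. (Python is used here only for illustration; the function names and the example message fields are not part of the specification.)

```python
import json

def encode_json_packet(obj):
    """Frame a message as a JSON packet: length:JSON, where length is
    the decimal byte length of the UTF-8-encoded JSON text."""
    body = json.dumps(obj).encode("utf-8")
    return str(len(body)).encode("ascii") + b":" + body

def decode_json_packet(buf):
    """Split one JSON packet off the front of a byte buffer.
    Returns (message, remaining_bytes)."""
    # The header never contains a colon, so the first colon in the
    # buffer is the length/body separator.
    header, _, rest = buf.partition(b":")
    length = int(header)  # decimal ASCII digits
    body, remaining = rest[:length], rest[length:]
    return json.loads(body.decode("utf-8")), remaining
```

Because the length is counted in bytes of the UTF-8 encoding (not in characters), the decoder can slice the body out of the stream without parsing the JSON first.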

Bulk Data Packets

A bulk data packet has the form:

 bulk actor type length:data


  • The keyword bulk is encoded in ASCII, and the spaces are always exactly one ASCII space
  • actor is a sequence of Unicode characters, encoded in UTF-8, containing no spaces or colons
  • type is a sequence of Unicode characters, encoded in UTF-8, containing no spaces or colons
  • length is a sequence of decimal ASCII digits
  • data is a sequence of bytes whose length is length interpreted as a number

The actor field is the name of the actor sending or receiving the packet. (Actors are server-side entities, so if the packet was sent by the client, actor names the recipient; and if the packet was sent by the server, actor names the sender.) The protocol imposes the same syntactic restrictions on actor names that we require here.

Which actor names are valid at any given point in an exchange is established by the remote debugging protocol.

The type field defines the type of the packet, which may be used with the actor name to route the packet to its destination properly. The remote debugging protocol says more about packet types; those provisions apply to bulk data packets as well.

The content of a bulk data packet is exactly the sequence of bytes appearing as data. The data is an arbitrary byte sequence; it is not required to be well-formed UTF-8 text.
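A Python sketch of the bulk data framing described above, continuing the illustration style used for JSON packets (the actor name and type in the test are made up for the example):

```python
def parse_bulk_packet(buf):
    """Split one bulk data packet off the front of a byte buffer.
    Format: b'bulk actor type length:data'.
    Returns (actor, type, data, remaining_bytes)."""
    # Actor names and types contain no colons and length is all digits,
    # so the first colon in the buffer terminates the header.
    header, _, rest = buf.partition(b":")
    keyword, actor, type_, length = header.split(b" ")
    if keyword != b"bulk":
        raise ValueError("not a bulk data packet")
    n = int(length)  # decimal ASCII digits
    return actor.decode("utf-8"), type_.decode("utf-8"), rest[:n], rest[n:]
```

Note that the data portion is returned as raw bytes; unlike the header fields, it is never decoded as text.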

Stream Requirements

The Stream Transport requires the underlying stream to have the following properties:

  • It must be transparent: each transmitted byte is carried to the recipient without modification. Bytes whose values are ASCII control characters, or that fall outside the range of ASCII altogether, must be carried unchanged; in particular, the stream must not translate line terminators.
  • It must be reliable: every transmitted byte makes it to the recipient, or else the connection is dropped altogether. Errors introduced by hardware, say, must be detected and corrected, or at least reported (and the connection dropped). The Stream Transport includes no checksums of its own; those are the stream's responsibility. (So, for example, a plain serial line is not suitable for use as an underlying stream.)
  • It must be ordered: bytes are received in the same order they are transmitted, and bytes are not duplicated. (UDP packets, for example, may be duplicated or arrive out of order.)

TCP/IP streams and USB streams meet these requirements.

Implementation Notes

Constant-Overhead Bulk Data

Mozilla added bulk data packets to the protocol to let devices with limited memory upload performance profiling and other large data sets more efficiently. Profiling data sets need to be as large as possible, as larger data sets can cover a longer period of time or more frequent samples. However, converting a large data set to a JavaScript object, converting that object to a JSON text, and sending the text over the connection entails making several temporary complete copies of the data; on small devices, this limits how much data the profiler can collect. Avoiding these temporary copies would allow small devices to collect and transmit larger profile data sets. Since it seemed likely that other sorts of tools would need to exchange large binary blocks efficiently as well, we wanted a solution usable by all protocol participants, rather than one tailored to the profiler's specific case.

In our implementation of this Stream Transport, when a participant wishes to transmit a bulk data packet, it provides the actor name, the type, the data's length in bytes, and a callback function. When the underlying stream is ready to send more data, the transport writes the packet's bulk actor type length: header, and then passes the underlying nsIOutputStream to the callback, which then writes the packet's data portion directly to the stream. Similarly, when a participant receives a bulk data packet, the transport parses the header, and then passes the actor name, type, and the transport's underlying nsIInputStream to a callback function, which consumes the data directly. Thus, while the callback functions may well use fixed-size buffers to send and receive data, the transport imposes no overhead proportional to the full size of the data.