DOM/Engineering


Disclaimer: This document is still in its infancy.

This page attempts to serve as a cheat sheet for the things DOM engineers may need to do, linking to the documentation on how to do each thing or calling out where we need more documentation. Rather than being a bare list of links, we try to contextualize each link, inlining and deep-linking when the target is more than a page long. For example, while there are other pages that capture the many test frameworks, readers should be able to ctrl-f for "reftest", get a brief definition of what a reftest is, and from there follow a link to more details.

Please feel free to edit to make the document more usable.


Building Firefox

Install Bootstrapping Dependencies

MDN has excellent per-platform guides on how to set up your first build. There's some overlap between those guides and the next section here about checking out the code. The core thing to know is that you want to check out "mozilla-unified".

Checking Out the Code

You can use hg (Mercurial) or git, but it's strongly advised that you at least try using Mercurial with bookmarks and the unified repo to start, even if you're a git expert.

Everything you could want to know about using Mercurial with Firefox can be found on the Mercurial for Mozillians page, but a good first checkout looks roughly like the sketch below.
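
A rough sketch of that first checkout and build, assuming Mercurial is already installed (./mach bootstrap will offer to install the remaining build dependencies):

    # Clone the unified repository (this takes a while).
    hg clone https://hg.mozilla.org/mozilla-unified mozilla-unified
    cd mozilla-unified

    # Install/update build dependencies and pick the flavor of build you want.
    ./mach bootstrap

    # Build, then run the freshly built Firefox.
    ./mach build
    ./mach run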

Configuring the Build

The MDN Configuring Build Options page has all the details you'd want.

The core configuration options you are likely to care about are:

  • ac_add_options --enable-debug - Enable assertions and other DEBUG-conditional code. These massively slow down the browser and increase console spam, but are quite useful when you're making low-level changes and you want to make sure invariants are checked by assertions.
  • ac_add_options --enable-optimize="-Og" - Set a specific optimization level so that the debugger has an easier time inspecting the state of the program. "-Og" was gcc-specific last time I checked; if you're building with clang you might need something different.
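
For example, a debug-friendly mozconfig (a file named "mozconfig" in the root of your checkout, or pointed to by the MOZCONFIG environment variable) might look like the following; the object directory name is just an illustrative choice:

    # A debug build with debugger-friendly optimization.
    ac_add_options --enable-debug
    ac_add_options --enable-optimize="-Og"

    # Optional: keep build output in its own directory (the name is up to you).
    mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-debug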

Making Builds Go Faster

Spec-Work: Implementing for Content

Communication: Participate in Spec Development

The WHATWG Working Mode provides a helpful introduction to web standards development.

For DOM, the main standards are the WHATWG specifications, most notably the DOM Standard and the HTML Standard.

You can participate in their development through the corresponding GitHub repositories (e.g., watching an entire repository, subscribing to relevant issues, filing new issues, etc.). Feel free to chime in and share your views, but confer with colleagues before expressing opinions on behalf of Mozilla.

Mechanics: Web IDL (WebIDL)

Web IDL defines the APIs that Firefox exposes to JavaScript content.

MDN's WebIDL bindings has detailed information on the setup in Firefox.
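
For a flavor of the syntax, here is a tiny, hypothetical interface definition; the interface and its members are made up, while real ones live in dom/webidl/ and are backed by a C++ implementation:

    // A made-up interface, using current Web IDL syntax.
    [Exposed=Window]
    interface BatteryMascot {
      // A string attribute visible to content JS.
      readonly attribute DOMString name;
      // A method with an optional argument and no return value.
      undefined cheer(optional unsigned long times = 1);
    };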

Mechanics: Testing

There are two broad categories of tests you'll deal with.

  • Web Platform Tests are cross-browser tests. The contents of testing/web-platform-tests are periodically and automatically synchronized to/from the GitHub repo at https://github.com/web-platform-tests/wpt.
    • How:
    • When to write this kind of test:
  • Firefox-specific tests. Sometimes you may be testing Gecko-specific behavior that is explicitly not part of a standard, testing lower-level details of a system (ex: verifying low-level error codes that are not exposed to content), or simply unable to reproduce the necessary test conditions using the mechanisms available to web platform tests (ex: simulating failure modes or controlling e10s process allocation). There are a number of test types/frameworks we use:
    • mochitests: The most common type of Gecko-specific test, appropriate when your test does not need to orchestrate complicated e10s behavior (use "browser" tests for that).
      • How: Your test is an HTML file that loads the test-framework support JS and any JS files you author. It runs in its own tab as a content page, but it can request privileged operations via the SpecialPowers API exposed to your content page, letting you flip preferences or even access privileged XPCOM objects via wrapper magic that grants you (wrapped) system principal access. Your tab will be in a content process if e10s is enabled, or in the parent process if it is not. There is a desire to get rid of the non-e10s test variants once Fennec is replaced by an (e10s-capable) GeckoView solution.
      • Helpers:
        • BrowserTestUtils provides async, e10s-aware helpers to open and close tabs and windows, wait for or listen for events, generate events, etc. This is the first place to look for helpers (see the example test after this list).
        • ContentTask provides ContentTask.spawn, an e10s-aware mechanism to run a (potentially) async function in a system-privileged frame-script whose "content" variable lets you reach into the page directly via wrapper magic. If you want to run code in the page's global with its principal, you will want to use "content.eval".
        • ContentTaskUtils provides helpers to spawned content tasks; methods in there are automatically loaded into the scope used by ContentTask.spawn.
    • "browser" (chrome) tests: Your test is a JS file that is loaded in the parent process with system principal access. Convenience mechanisms
    • "chrome" tests: Deprecated pre-"browser" mechanism for when your test needed the system principal and the ability to manipulate the browser. Ideally you should never write new tests of this type and any changes to existing tests are minimal. If you need to make extensive changes to a test, consider just re-writing it as a "browser" test.
    • marionette-based tests: Marionette is a Mozilla-specific protocol used to implement WebDriver (Selenium) support for Firefox, so the browser can be remotely scripted at a high level in a browser-agnostic fashion and websites can be tested across browsers without writing a custom test for each one. Because this framework runs outside the browser, we also use it for various high-level QA tests and for tests that require the browser to be restarted or shut down. Marionette also does most of the legwork of getting the other test frameworks to set up their in-browser tests, and sometimes helps get the results and any errors back out.
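
To make the helper descriptions above concrete, here is a minimal sketch of a browser-chrome-style test. The file name, page URL, and assertion are illustrative; add_task, ok(), BrowserTestUtils, and ContentTask are provided by the test harness.

    // browser_example_title.js -- an illustrative sketch, not a real test in the tree.
    add_task(async function test_page_title() {
      // BrowserTestUtils opens the tab, waits for it to load, and closes it when done.
      await BrowserTestUtils.withNewTab("https://example.com/", async browser => {
        // ContentTask.spawn runs this function in the content process; `content`
        // is the page's global, reachable via wrapper magic.
        let title = await ContentTask.spawn(browser, null, async function() {
          return content.document.title;
        });
        ok(title.length > 0, "the page should have a non-empty title");
      });
    });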

Browser Chrome-Work: Implementing for Privileged Browser UI

"Browser chrome" is the browser UI. (This is the etymology of Google's "Chrome" browser, which will forever require you to make clarifications whenever you say the word "chrome".)

XPConnect versus WebIDL

Some inaccurate but workable definitions:

  • XPIDL ".idl" files in the tree define XPCOM interfaces. These interfaces define methods, constants, and getters and setters, as well as meta-data about them. These are conceptually language agnostic but in practice you'll see a lot of C++-specific details included in the files.
  • XPCOM covers both a bunch of important system glue code as well as the mechanics that make the interfaces and their language-specific calling conventions work.
  • XPConnect is a binding layer implemented in C++ that allows JavaScript to interact with XPCOM; JS doesn't have the low-level ability to do this on its own (see the sketch after this list). XPConnect is single-threaded and only available on the main thread of each process. Any JS code running on other threads is running in either a privileged ChromeWorker or a regular DOM Worker, SharedWorker, ServiceWorker, or Worklet.
  • Firefox's WebIDL bindings are a high-performance binding layer created to expose APIs to web page content with low overhead; they trade increased code size for speed. XPConnect instead dynamically interprets memory-mapped type definitions every time calls are made, which is slower but avoids bloating Firefox's on-disk or in-memory size (because the type definitions are more compact than the comparable machine code generated for the WebIDL bindings).
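
As a small illustration of XPConnect from privileged JS: the contract ID and nsIObserverService below are real, but the notification topic is made up for the example.

    // Privileged (chrome) JS reaching an XPCOM service through XPConnect.
    const obsService = Components.classes["@mozilla.org/observer-service;1"]
                                 .getService(Components.interfaces.nsIObserverService);
    // Notify any observers registered for this (made-up) topic.
    obsService.notifyObservers(null, "example-topic", null);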

As covered above, we have a mechanism to expose WebIDL only to system-privileged contexts by marking WebIDL interfaces or methods Exposed=System. This raises the question of when you should use XPIDL/XPCOM and when you should use WebIDL for APIs that are only exposed to privileged execution.

  • The shortest answer is that you should use XPConnect unless your API is going to be called so frequently from system-privileged JS code that you are certain the XPConnect overhead would show up in a profile. Or, better, you already tried XPCOM and it did show up in profiles.
  • If the API needs to be exposed to C++ and/or Rust in addition to JS, you should consider XPCOM anyway, because the WebIDL bindings are designed for consumption by JS, not C++ and Rust code.

Bugs

Handling Reported Bugs

  • Be aware of what's going on:
    • Watching: Bugzilla has a number of "watching" mechanisms to help you track what's going on via email.
      • You can opt to receive bugmail for all activity in a given component without being CC'ed on the bug via the Component Watching preferences page. While this is useful, it can be a bit much. Messages you receive because you're watching a component will have an "X-Bugzilla-Reason" header value of "None" that you can filter on to differentiate them from reasons like "CC". "X-Bugzilla-Watch-Reason" will also include "Component-Watcher" in that case, among its other space-delimited terms.
      • You can also watch what your teammates are doing by using the "User Watching" functionality on the Email Preferences page. "X-Bugzilla-Reason" will be "None" in this case, just like for component watching, but you can filter using "X-Bugzilla-Who", which will be the email address of the watched person, as well as "X-Bugzilla-Watch-Reason", which will include their email address and space-separated terms that identify their relation to the bug, such as "AssignedTo" and "CC".
    • Triage: However, all of that can get a bit overwhelming, and you don't need to read every bug that comes into your mailbox, which is why we have a triage process for components. Triagers go through un-triaged bugs in a component, evaluate them, and set a needinfo flag or an assignee to take the next steps to deal with the bug. The triage process is documented in the mozilla/bug-handling GitHub repo.

Security Bugs

Crash reports

Firefox binaries are instrumented with crash-reporting handlers. These crash reports are sent to https://crash-stats.mozilla.com/ where they are processed and surfaced. Users can see a list of crashes they have personally reported by going to about:crashes in their browser.

There's a variety of information on how to understand a crash report:

You can also be granted privileges, after agreeing to privacy guidelines, to download the Windows-style "minidump" files created by Breakpad, which contain more details than can be found on the public crash report page. That information includes details like the contents of stack memory (which may include data that has privacy implications, hence the privacy agreement).

Writing Patches and asking for Code Reviews

Pushing to Try

Updating Tests

Writing Tests

Preparing Your Patch For Review

Submitting A Patch For Review

We use Phabricator for code reviews. Check out the user guide.

Landing A Patch

Doing Code Reviews

Debugging

General links:

Using rr to record and replay (on linux)

rr is a magic super tool that can record the execution of Firefox, logging all the non-determinism so that you can replay the exact same execution later, with support for reverse execution. Once you reproduce a bug, you can revisit that execution endlessly. Note that it only runs on Linux.

Cheat sheet:

  • ./mach test --debugger=rr ... to record a test run of interest.
  • rr ps to get a list of all the processes that existed during the most recent run.
  • rr replay -p PID to replay the execution of the given process of interest and launch a gdb instance against it. If you want to debug multiple processes at the same time (and have enough system memory), use multiple invocations of this command concurrently.
  • Then use gdb like you normally would.
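
Putting that together, a session might look roughly like this; the test path and breakpoint symbol are made up for illustration, and the reverse-execution commands come from rr's gdb integration:

    # Record a (hypothetical) intermittently-failing test.
    ./mach test --debugger=rr dom/indexedDB/test/test_example.html

    # List the recorded processes, then replay the one you care about under gdb.
    rr ps
    rr replay -p 12345

    # Inside gdb: normal debugging, plus reverse execution.
    (gdb) break mozilla::dom::SomeClass::SomeMethod
    (gdb) continue
    (gdb) reverse-continue
    (gdb) reverse-step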

Remote Debugging of Firefox

Firefox's local debugging APIs can also be accessed remotely. This can be useful for debugging mobile versions of Firefox.

Logging

MOZ_LOG

Improving upon the prior NSPR logging mechanism exposed via NSPR_LOG_MODULES, MOZ_LOG lets C++ code perform conditional logging on a granular, per-"module" basis. It can be enabled in several ways:

  • When launching Firefox, via the MOZ_LOG environment variable, with output going to stdout by default or to a file if MOZ_LOG_FILE is specified.
    • ex: MOZ_LOG=IndexedDB:5 enables verbose IndexedDB logging.
  • While Firefox is running, via the about:networking page.
  • Using preferences by modifying the preferences file when Firefox is not running, or at runtime using "about:config".
    • An int pref "logging.IndexedDB" with a value of 5 enables verbose IndexedDB logging.
    • A string pref "logging.IndexedDB" with a value of "verbose" enables verbose IndexedDB logging. Valid values are "error", "warning", "info", "debug", and "verbose" in order of increasing detail/spamminess.
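
On the C++ side, emitting log messages looks roughly like this; the module name and function are illustrative, while LazyLogModule and the MOZ_LOG macro come from mozilla/Logging.h:

    #include "mozilla/Logging.h"

    // Declare (or share) a log module; "MyModule" is an illustrative name.
    static mozilla::LazyLogModule gMyModuleLog("MyModule");

    void DoInterestingThing(int aCount) {
      // Only emitted when the module is enabled at Debug (4) or higher,
      // e.g. MOZ_LOG=MyModule:4 or the pref logging.MyModule set to 4.
      MOZ_LOG(gMyModuleLog, mozilla::LogLevel::Debug,
              ("DoInterestingThing called with count=%d", aCount));
    }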

More info:

Performance / Validation

Telemetry

Talos Performance Tests

Focused Tracking Sites

Starting with the famous https://arewefastyet.com/ site that tracked and compared JS engine performance, there's been a history of "are we BLANK yet" sites created by teams to focus on specific initiatives. http://arewemetayet.com/ tracks all of these sites including feature-completeness dashboards that conform to the naming scheme. We call out some of these below.

Are We Fast Yet (AWFY)

Are We Slim Yet (AWSY)