

  • Actions
    • Get rid of the MVP bug in favour of the dashboard (whimboo)
    • Get in contact with people from the Firefox Headlines newsletter for our addition (whimboo)
    • Find out if there’s a way to enable logging of system observer notifications (ato)
  • Triaged all the meta bugs for enable/disable methods
    • Nearly all are already fixed, except Fetch (please look out for more meta bugs to close)
    • How to handle intermittent failures for alpha related bugs? Should those also end-up in puppeteer-alpha?
    • Updates from mmucci
    • The first Puppeteer example (screenshot.js) is working now (7 more to do)
  • New remote agent review policy (ato)
    • Ditching r=#remote, see bug 1593700
    • Docs need to be updated
  • Security review update (ato)
    • about:support (bug 1594351)
    • Visual cue, and plans ahead
    • Rust port of startup subsystem (bug 1543115) is a dependency requirement for visual cue


mmucci, whimboo, ato, maja_zf, AutomatedTester



Not done my action item. Will attempt to do it by next week.
I got rid of the MVP bug by marking it as incomplete.
The second action item was about the Firefox newsletter. I got in contact with Mike Conley and he is a bit behind on emails, but will try to get back to me soon.
I will come back to you when I have more information.

Status of milestone 1

I have put in some bug statistics tracking progress on a week-by-week basis. It would be useful to keep this in the weekly update going into the future.
Yes, it sounds like a good thing to include regularly.

Meta bug triage

I went through all the *.enable and *.disable meta bugs, and it turns out nearly all of them have been implemented already.
So I did a triage and closed them out after verification, except for the Fetch domain, where we don’t implement Fetch.enable or Fetch.disable yet.
So in the future when you work on something and everything related to the meta bug has been fixed, make sure you close it.
We also now have the milestone and the Firefox tracking bugs, and this will make it easier to keep track of regressions.
We can see which things have been fixed for which release of Firefox, and so on.
I just had a look at the remote protocol dashboard, and following these updates it is looking a lot better.
I notice there are probably some errors with the way these graphs are being created, but it’s at least an indication that things are improving.

Prioritising intermittents

I’m doing most of the work on intermittents currently, and I have a question about how these should relate to the alpha release.
I think it depends on how important or frequent they are, or how easy they are to fix.
If there’s an intermittent that negatively impacts the usage experience of the remote agent in a significant way, then we obviously need to prioritise that.
But if it’s something that triggers one in 10,000 times, then maybe not?
I would count those intermittents marked by OrangeFactor as important regardless, or when the sheriffs come to us with a permafailure.
Permafails sounds reasonable, they jump straight to the head of the queue usually.

Update on velocity tracking

First of all, thank you all very much for tagging bugs properly, marking the P1s and P2s. It’s been working great.
The backlog is increasing nicely, and this is good.
Currently there are five bugs in development, there are 15 bugs waiting in the backlog to be selected, we have completed six, and we have one reserved.
On the dashboard you can see the status of each of these, for example whether a bug is in review, whether it has landed, &c.
Right now we should just keep increasing the backlog with the bugs you feel are going to be necessary for the alpha.
My impression, given the pace of development we are having right now, is that it won’t be a problem to set the forecast on the 22nd.
When you showed us the tracking dashboard of the other team, there were a couple of different tables?
Will you add those so we can see how we progress?
Yes, right now what you’re seeing is the list of bugs with the changes in status.
None of the tables above are active: they are all blank.
Plus there’s a tab missing, which is supposed to hold all the performance graphs.
All that will become active after the meeting on the 22nd.
It’s really interesting that you’re saying this, because recently we’ve seen an influx of unexpected work coming out of the security review.

Security review

David, Thyla [sec team manager], Dan [sec team, and doing the security review for us], and I have been having an email conversation about what remaining things are expected of us.
Last few days I’ve filed a couple of new bugs based on the suggestions that have come out of that.
One of the follow-ups is to have a new section on about:support that says whether the feature is enabled, whether it is listening, and where it is listening.
I submitted patches for that before the meeting, and I think it’s a reasonable thing to want: if the remote agent is listening and it exposes browser internals, then we need some way of making that discoverable to users.
The second thing they wanted is a visual UX cue that gets triggered when the remote agent is listening.
We have a similar feature in Marionette, where the address bar turns orange and it adds a robot icon to it.
When I implemented that three or four years ago, it was never meant as a security feature. It was meant as a sort of visual indication to the user that a particular browser window that you have open on your screen is being remotely controlled, potentially by a malicious user, and that what happens in that window gets transmitted across a wire protocol that is unencrypted. All this to indicate to the user that you should probably not use that window for your online banking for example.
It was meant as a hint to the user that there’s something special with this particular browser session.
The reason it’s not a security feature is that, with both Marionette and the remote agent, you have the ability to evaluate script in the chrome context. And any script evaluated there has the potential of removing this visual hint from the browser UI.
On the other hand, we already have the feature implemented because we added it for Marionette a number of years ago.
Another problem I have with the visual cue is that it’s kind of annoying and not very pretty. However, I think it would be acceptable if it’s only activated when the remote agent is serving a browser-external connection.
One reason we need to think about this is that we may in the future want to allow browser-internal use of the remote agent.
For example, imagine a DevTools panel that lets you record and replay tests; that would be quite useful.
But if the address bar turns orange when you open DevTools, that would not look very good.
So the way I’m going to resolve this sec-review request is to implement the visual cue only for browser-external connections.

Rust port of startup subsystem

I also discovered that triggering the visual cue means that we first have to land the Rust port of the startup subsystem because it changes the API and introduces the necessary system observer notification, which I need in order to implement the visual cue.
However, I discovered yesterday that Nika, who was doing the last piece of the review, has now gone on PTO for three weeks.
I wanted to ask if you all think it’s fine for me to remove her as a blocking reviewer for that change?
There are a number of other things that are depending on this changeset landing.
My perception is that it would be fine because the things highlighted in the first pass on the review weren’t super-critical.
I think that’s very reasonable.
Do we have someone else who can look at this code?
I’m sure we could find someone. Maybe Nathan?
But it is quite a substantial changeset, so I just wonder if it’s worth including a new reviewer when at least two or three other things are blocked on it, and we didn’t discover any major design concerns in the first pass of the code.
The only thing I’m a bit worried about is, if [Nika] is away for three weeks she will presumably have a sizable backlog so I wouldn’t know when a review might be possible to do.
We might get a review early December, and then we have only two weeks remaining before we’re supposed to be done with the alpha.
Perhaps [Nika] has designated a go-to person while she’s away?
Yes, is there someone else on the team we can approach, and is it worthwhile?
What is the gap that Nika would fill? Like, why do we need Nika specifically?
My suggestion is that we don’t ask a second person because I felt that the first pass didn’t uncover any major concerns.
But if we need to get someone else to look at this—if we think that is important—my suggestion would be ask Nathan, because he’s already been involved in the XPCOM/Rust stuff.
I’d say go with our review and see how it goes. Nika is already flagged and we could follow up when she comes back.
Let’s not block anything unless there are concerns we are going to run into.
I suppose one mitigating factor is that none of this is shipping in Firefox, so until we get the security review done, none of these changes will be exposed to users.
So if we’re happy with that approach, I’m going to go with our rubber-stamp and land it later today.
I discovered earlier today it may have caused some leak check regressions, but I will look at this before landing it.
If you land it, [Nika] might want to add some comments when she comes back from PTO, but if you remove it from the review queue that won’t happen. You could set a needinfo just so that it stays on her radar in case she wants to.
That way it doesn’t look like we’re cutting her off from the conversation.
No, I agree. I was going to do something like that anyway. It is likely there will be some need for follow-up.

New review policy

I want to propose a new review policy, because I found out last week that it is possible to set up a Phabricator Herald rule that could automatically add the remote-protocol-reviewers group as a blocking reviewer for any change under a designated sub-directory.
And I first thought of this idea in a different context… there’s a discussion on dev-platform about how we can make sure incoming review requests from contributors who haven’t set a reviewer don’t get lost.
There is an issue that unless you set a reviewer, the change request can get lost and the contributor won’t hear anything back.
Unlike Splinter previously, where each component had at least one person in charge of bug triage, Phabricator has no such thing.
There have been a couple of examples where patches have gone unnoticed and unreviewed, and that’s not a good first contribution experience.
I think maybe we have had one or two examples of this in geckodriver previously.
Regardless, it seems to me like a good idea from a usability point of view, because it means we don’t have to remember to add r=#remote to the commit messages.
You would just write a patch, submit it to Phabricator, and then Phabricator would figure out who is the right reviewer for the change.
If it works the way I think it will work, it would be an improvement.
I will let you know when the Herald rule gets added.
It still works the same way as setting a group reviewer manually? So if one of us reviews on behalf of the group, that is sufficient? It doesn’t add us individually?
So it adds the group.
You can be added individually as well, but you need a private rule for yourself.
But I can’t set up global rules myself, so I need someone with [Phabricator] admin access to do it for me.
I submitted another patch earlier today for the about:support thing, and I noticed there is already a global rule for localisation files. So it seems to be working that way.
And when someone from that review group reviews it, Lando will then do its usual thing and rewrite the commit message accordingly before [pushing it] to autoland.
There’s one thing I’m a bit worried about.
I sometimes submit patches that are work-in-progress and I don’t want to have a review yet. That’s why I don’t add r=#remote.
That’s something I don’t want to lose.
So I’m not sure how others do this, but I want to retain the ability to do this, given the recent problems I had when upgrading macOS, where I lost all my work. So I want to push these patches up to Phabricator instead of just having them on my local disk.
I don’t want to trigger any review request for this.
I think there’s something where, if you add wip to your commit message, the patch will be set as a draft, so it’s uploaded but won’t show up in anyone’s queue.
But I feel like that’s an outlier development process… because even if you upload a patch and it gets automatically flagged for review, you can still go in and edit the revision to remove the reviewer.
But you have to do it each time you update the patch, I assume?
Yes, but I feel that’s maybe a worthwhile tradeoff, considering that if we don’t have such a rule we risk not paying attention to patches from contributors who forget to set reviewers.
I only wonder if there’s a way to exclude those patches that maybe have wip as a prefix.
I think that may already be the case. Maybe double-check that and we can circle back and figure out the details.
Maybe you can add a comment on this bug, Andreas? Perhaps dkl knows if that works.
So I don’t think the wip thing is something that is built into Phabricator.
I think it’s something done by the tool you use to upload the patch.
Yeah, maybe it’s a moz-phab thing.
Yeah. I don’t think Phlay has this, but again, it’s not really a use case of mine.
But I also don’t want us to optimise for a particular workflow that can’t be resolved by two clicks in the UI.
Regardless, whatever tool you’re using to upload your patch, there might be an option to set that as a draft.
Setting a patch as a draft will not cause it to appear in other people’s queues.
Maybe look into that, and if that isn’t working for you, then we should discuss again.
There are two different states here, right? There’s ‘you upload patch but it doesn’t have reviewers’ and ‘you upload a patch in a draft state’.
I’m using moz-phab and it doesn’t have such an option.
But it seems like a reasonable thing to file a feature request about?
If you have something in a draft state, those patches will also not be visible to anyone else.
It also will not add a link to the bug.
And that is not what I want.
I want to have a link to the bug so people can see there was some activity, and they can have access to this patch.
Then we need to consider the bus factor: someone may have to pick up a patch and continue the work, but if it’s a draft they can’t, and that’s not how it should be.
One second. Regardless, we have a lot of other things to discuss… so could we set this particular conversation aside to the end, if we have time and go on with the rest of the agenda for now? Thanks.
So the next item would be the test suite progress?

Puppeteer test suite progress

You will notice from https://treeherder.mozilla.org/logviewer.html#?job_id=275254586&repo=try that there are a lot more tests in the Puppeteer test suite passing now.
I don’t know if it’s specifically because of your patch, but that’s where I looked at the test run log from, or if it’s just a result of the last couple of weeks’ work.
I meant to take a closer look at that, because I noticed a lot more tests are timing out.
This isn’t in the agenda, but I want to give everyone an update on the browser selection work.
I had a look and nothing major has really changed. We have two more tests passing.
The main thing that helped us here was that we’re no longer hanging during the test run, and that was mostly fixed by me with my patch for the NetworkObserver.
This is not a change that is related to maja_zf’s change.
Right, and I think I said as much, that I had only looked at the test run linked from a random recent bug.
All the stuff you see here I have already filed bugs for.
And none of this is blocking our alpha release.
Right, OK.
The point I was trying to make is that since the last time I—at least—looked at this list, we went from no tests passing and everything timing out to a lot more passing, in five or six weeks’ worth of work.
However if you look at the list of where the remote agent returns UnknownMethodErrors, that might be another data point which we can use to prioritise what to include in our alpha.
The alpha is not 100% Gutenberg-focussed: it contains things that we think would be possible to put into the hands of users before the end of the year, so maybe I’ll take an action to summarise these errors from the test log.
To see if there’s any low-hanging fruit in that list we can easily support.
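As a rough sketch of what that summary could look like, assuming the log contains error lines of the form `UnknownMethodError: Domain.method` (the sample log content below is invented for illustration; a real log would come from the Treeherder run):

```shell
# Sketch only: the error lines here are made up to illustrate the assumed
# format; substitute a real log downloaded from Treeherder.
log=$(mktemp)
cat > "$log" <<'EOF'
[stderr] UnknownMethodError: Network.setRequestInterception
[stderr] UnknownMethodError: Fetch.enable
[stderr] UnknownMethodError: Network.setRequestInterception
EOF
# Pull out the Domain.method after each error, then rank by frequency.
grep -o 'UnknownMethodError: [A-Za-z]*\.[A-Za-z]*' "$log" \
  | awk '{print $2}' | sort | uniq -c | sort -rn
rm -f "$log"
```

A frequency-ranked list like this would directly support prioritising by usage count.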
You have a lot on your plate, I feel; how about I take that action?
There’s some stuff on this list we already have support for in some capacity in geckodriver or Marionette that we could borrow.
There’s a couple of things here that to me seems low-hanging fruit we could quickly support.
Getting an overview of what things we need to implement to move the Puppeteer test suite forward seems like a good thing to do next.
I’m not so happy we’re spending time on this.
We have put Gutenberg and the Puppeteer examples in for the alpha.
The Puppeteer unit tests cover all the domains, all the methods, and almost all of the events.
This is way too much, and this is not something we want to do this quarter.
So what I think we should do is get an overview, file any of those bugs, and mark them as mentored.
We should not work on those: they are not used in Gutenberg, they are not used in Puppeteer examples, and we don’t want to reach 100% Puppeteer support.
So I think what you just said is simply not true.
The things we have included in the alpha are beyond just Gutenberg and the examples.
That’s not what I got.
The thing that we need to be focussing on, is what is doable by end of this year.
If we think we can get all the domains done by the end of this year, and all the tests passing, then fine.
I don’t think that’s realistic, but if you think we can, then fine.
But we can get them documented and I’m happy for us to do that, but I don’t think we need to get everything in now before the end of the year.
So I did not say that we are in a position to do everything by the end of the year.
That’s not what I said.
What we have for the end of the year is the alpha.
Alpha is set for the end of this year.
What I’m concerned about is that we work on things that we know users are interested in.
That we work on stuff we know users are going to be using.
We have prior experience that [informs us] what users are interested in.
One example I used the other day was [the ability to ignore] TLS certificates.
My point is that when we have something that we either have existing experience with, or better, existing code in Juggler/Marionette that we can re-use—which was the case for TLS certs—then that is a low-hanging thing that requires little investment for us to support.
On the other hand, there are other things in Puppeteer that require significant work.
Network events is one example.
Based on this test log, there seem to be two or three methods that are cropping up a lot that we know are easy to fix.
Then it might make sense to put these things on the backlog because they would move our test results significantly forward.
Putting them on the backlog is definitely worthwhile, as long as we only use it as a discussion point.
We mustn’t get into the habit of “this is good” and “people might use it”, because we know we will be doing releases at the beginning of next year and it’s OK for some things to just have to wait.
Let’s get it all documented up, and then make a pragmatic choice based on what we document.
It doesn’t mean we have to do this now, when we’re trying to support the WordPress people and get them using it.
If we get good feedback on a real-world test suite, that’s awesome.
On the other hand if they’re not using it [but] we think it will be high value to people, then let’s discuss it and go from there, but only as long as that’s coming from something that’s documented rather than from just throwing everything in.
If we get everything documented, it creates the next discussion that we need.
What if we do a little bit of outreach, for people using Puppeteer?
We have the DEBUG environment variable which can print all the used methods and events.
We could have a small script which could parse such a log file and extract what we want to know, then we can get people to run it for us so we know what their test suites are really using.
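A minimal sketch of such a parser, assuming the DEBUG output embeds `"method":"Domain.name"` fragments in the protocol traffic it prints (the sample lines below are invented; the real format should be checked against an actual DEBUG run before shipping anything):

```shell
# Sketch under an assumed log format: real DEBUG output may differ, so
# treat the grep pattern as a starting point, not a finished parser.
log=$(mktemp)
cat > "$log" <<'EOF'
puppeteer:protocol SEND {"id":1,"method":"Target.setDiscoverTargets","params":{}}
puppeteer:protocol SEND {"id":2,"method":"Page.navigate","params":{"url":"about:blank"}}
puppeteer:protocol SEND {"id":3,"method":"Page.navigate","params":{"url":"about:blank"}}
EOF
# Extract every "method" field and count how often each one appears.
grep -o '"method":"[A-Za-z.]*"' "$log" \
  | sed 's/"method":"\(.*\)"/\1/' | sort | uniq -c | sort -rn
rm -f "$log"
```

People could then send us either the raw log or just this summary, which is enough to see what their suites really use.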
I think this could be useful, and we could even make a dashboard for it.
I could also figure out what [some test suite?] is using, and maybe we can find other people in the community or at other companies who are interested in using Firefox with Puppeteer.
Asking them for a log file, we don’t even need to supply a script.
I’ve been saying all along that the more data points we have for determining what features are used in real-world tests, the better.
This is one of the reasons we chose not to focus initially on the Puppeteer test suite, because granted, it does consist of a lot of edge cases.
That was the right choice [at the time].
Anyway, it seems like we have a couple of actions here. Maja will go through a Puppeteer test log from a recent run, maybe prioritised by a usage count.
Based on this test log from the Puppeteer test suite, it seems like there are one or two methods that would unblock a bunch of other things.
[talks about how the Puppeteer test suite enables a lot of features before each test]
Sounds good.

Browser selection in Puppeteer

There’s a PR open on the Google Puppeteer repo.
I still have to figure out the Google CLA.
It enables an environment variable for specifying a product, right now Firefox or Chrome.
In the tests there’s a hidden variable to configure Juggler, because in the Puppeteer CI they regularly test against the Juggler implementation, and so I wanted to distinguish that from Firefox Nightly.
And I have to file a bunch of bugs for follow-up work.
What works now is launching the browser: Puppeteer takes care of profile creation [where it sets] some basic preferences.
One open question is how to specify “automation friendly” preferences?
I see there’s a RecommendedPrefs.jsm file under remote/, should I put way more stuff under there?
One of the reasons a lot of tests are timing out is, for example, the Firefox “welcome” tab that opens when the browser is started, so the tests can’t even run properly.
We’ll get more tests passing just by setting the correct default preferences.
Another thing to follow up on is automatically selecting the default Firefox binary to use.
Right now I’m relying on the user providing a path to Firefox, but we should either download Firefox or figure out the default.
Another [method?] in the launcher is connecting to an existing browser instance. We should maybe follow up on this, but it would be lower priority.
The PR has not been accepted by Google yet, but it is vendored in mozilla-central, and the ./mach puppeteer-test command uses it already.
There are additional options to better control how you run Puppeteer against Firefox, and you can also use the same command to run the tests against Chrome.
Regarding the CLA, you need to wait for a lawyer to put your name in a Google Spreadsheet.
Then you may have to close and reopen the PR for the CLA bot to run again.
I think maybe you can just tell the bot to run again.
Yes that may work, depending on whether [your commits are authored] with your LDAP email address.
Do you have any thoughts on where recommended preferences should go?
The recurring problem here is that we have some preferences that take effect before the browser is started, and some prefs that can be set at runtime after the browser has started.
Which is why we have a prefs list in geckodriver that gets written to the profile before Firefox is started, and a separate prefs list for stuff we can set at runtime once Marionette has been initialised.
The mistake we should not repeat with the remote agent is to set too many preferences when the remote agent interface itself gets enabled.
The reason for that is you want to be able to initialise the remote agent and use it, from say DevTools or for some other browser-internal reason, without [hazardous side-effects].
Setting preferences at runtime that might interfere with other features would in this case be dangerous.
A thing I noticed when I did the Rust port of the nsICommandLineHandler is that—for some reason I can’t explain—it gets run a lot [earlier] in the startup of Firefox than any JavaScript implementation.
So when we had nsICommandLineHandler implemented in JavaScript, it ran much later [during startup] than it did when I changed the implementation to be in Rust.
And this is good news for where we can set preferences.
My suspicion is that we could set more preferences in the startup code once the bootstrapping patch lands, instead of having to split too many preferences between remote agent initialisation and [Puppeteer].
We should at least avoid setting preferences during RemoteAgent.listen().
Currently we do this and I should perhaps file a bug to move them to Rust too.
The safest thing to do would be to write the prefs to profile, however.
If you have some particular problem you need to sort out right now, writing them to the profile first seems like the right option.
Then moving them over to the Rust startup code later.
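A sketch of the “write the prefs to the profile first” option; the two prefs shown are illustrative examples that address the welcome-tab problem mentioned earlier, not the authoritative list (that lives in geckodriver’s prefs.rs and the remote agent’s recommended preferences):

```shell
# Illustrative only: these two prefs silence first-run UI; the full set of
# automation-friendly prefs should come from prefs.rs / RecommendedPrefs.jsm.
profile=$(mktemp -d)
cat > "$profile/user.js" <<'EOF'
user_pref("browser.startup.homepage", "about:blank");
user_pref("startup.homepage_welcome_url", "about:blank");
EOF
# Firefox would then be launched against this profile, e.g.:
#   firefox -profile "$profile"
cat "$profile/user.js"
rm -rf "$profile"
```

Because user.js is read before startup, this covers the class of prefs that must take effect before the browser launches; runtime-settable prefs can move to the Rust startup code later.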
I noticed from the experimental Firefox subfolder in Puppeteer that they have collected a bunch of preferences.
They have a file that they copy in before they start up the Juggler-enabled Firefox.
I guess we have a decision to make preference-by-preference about what the responsibilities of Puppeteer vs. [Firefox] should be.
You mentioned the browser startup welcome page. I think it would make sense to set this in the profile.
The client that starts Firefox needs to make sure Firefox ends up in a state where it can be used.
The remote agent is a bit different from Marionette in the sense that it is not purely a component for browser automation.
There are other use cases where you might want to use the remote agent outside browser automation.
If you have VSCode and you want to use the remote agent for getting some browser-specific information, you would want to launch Firefox and have it connect to VSCode, [and in this case] you don’t want to set preferences that might interfere with [the browser], because you want to keep interacting with it as a normal user.
So I think for the things you mentioned, it probably makes a lot of sense for Puppeteer to maintain that list of things that it needs to set in order for Puppeteer to work as expected.
That’s a good way of thinking about it.
I suppose if Puppeteer already has a list of preferences it knows it needs to set, that’s good.
Another source you could use to find out what is relevant would be testing/geckodriver/src/prefs.rs.
I wouldn’t use geckoinstance.py because it contains stuff that’s related to earlier release channels.
Well, this is true also for geckodriver, so you should look at it with a critical eye.
Fair enough.
prefs.rs should contain those preferences that are used before we launch Firefox.
All the other ones are inside marionette.js.
Another next step is removing the experimental directory altogether.
So keeping the puppeteer-firefox [npm] package around for people to use, but the actual implementation is not under development anymore.
[explains next steps after PR]
Just as a slightly tangential thing, I was having a thought about the puppeteer-firefox stuff: we’ll probably take it over at some point to do the browser installation code, because my understanding is that Puppeteer is made up of puppeteer-core and puppeteer-chrome, and that installs the browser, so we might need puppeteer-firefox for installing the browser.
And the work you’ve done in puppeteer-core for browser launching would then just be able to use that stuff.
I thought the agreement we had was that we would not be doing anything to the puppeteer-firefox package.
Yes, for now.
I mean, not ever. Because the puppeteer-firefox package is Juggler specific, and the whole reason we’re doing this work in the core package is that we want the normal Puppeteer package to be able to select which browser to use so that you don’t need separate npm packages.
Yeah, the way I’ve done it, is that depending on which browser you select Puppeteer will download the correct stuff.
There’s no puppeteer-chrome.
I think the important point here is that we don’t want to break existing users [of Juggler].
Yes, that is the argument for keeping the puppeteer-firefox package around, yes.
Right, OK!


  • Find out if there’s a way to enable logging of system observer notifications (ato)
  • Collect list of methods failing with UnknownMethodError from a recent Puppeteer test run (maja_zf)

Status of Milestone 1

  • Last week: 44 Total; 43 Open (97.73%); 1 Resolved (2.27%); 0 Verified (0%)
  • This week: 51 Total; 45 Open (88.24%); 6 Resolved (11.76%); 0 Verified (0%)


% git log --date=iso --pretty=format:'%ad%x09%H' -- remote/ \
      | awk '$0 >= "2019-11-01" && $0 <= "2019-11-08"' \
      | awk -F $'\t' '{print $2}' \
      | xargs git show -s --format='%h%x09%an%x09%s'
23ec3d5f2cee    Ryan VanderMeulen       Bug 1594871 - Disable the racy sub-test. r=whimboo
121f19b7eb9b    Henrik Skupin   Bug 1587846 - [remote] Add "quality" option to Page.captureScreenshot. r=remote-protocol-reviewers,ato,maja_zf
b589179642b0    Henrik Skupin   Bug 1587846 - [remote] Add "format" option to Page.captureScreenshot. r=remote-protocol-reviewers,ato
d9081e2ef7da    Henrik Skupin   Bug 1587846 - [remote] Fix payload of return value for Page.captureScreenshot. r=remote-protocol-reviewers,ato
ed3629de9e8c    Henrik Skupin   Bug 1592643 - [remote] Implement Target.activateTarget. r=remote-protocol-reviewers,maja_zf,ato
c689ba0e46c8    Henrik Skupin   Bug 1591922 - [remote] Page.bringToFront has to wait for activate and focus events. r=remote-protocol-reviewers,ato
d06169c5aa69    Henrik Skupin   Bug 1592643 - [remote] Methods in Target domain have to raise for invalid "targetId" argument. r=remote-protocol-reviewers,ato
c50026801957    Henrik Skupin   Bug 1592643 - [remote] Refactor and improve browser chrome tests for Target domain. r=remote-protocol-reviewers,ato
15245002ac1f    Andreas Tolfsen bug 1549708: remote: implement Page.reload's ignoreCache argument; r=remote-protocol-reviewers,whimboo
a6683fe9a01c    Thomas  Bug 1591982 - Removed 'timestamp' property from Page.navigatedWithinDocument and Page.frameStoppedLoading r=whimboo,ato
5eef6dba8cc2    Andreas Tolfsen bug 1591927: remote: implement Security.setIgnoreCertificateErrors; r=remote-protocol-reviewers,maja_zf


Development status of Puppeteer alpha
Puppeteer alpha dashboard
Bugzilla queries
All project work currently in development
Available MVP work
Completed MVP work
Bug overviews
Gutenberg dependency tree
Puppeteer examples dependency tree
Complete Puppeteer dependency tree
All ze boogs

PTO (🍂)