Compatibility/Meetings/Sync w Honza

April 16

New dashboard system (Honza)

Honza: I've shared the link; I'm curious whether you can log in. It takes some time to load. This is the first version. The second link shows a graph: the submission count, i.e. reports submitted per day, roughly 2000 per day. As you can see, the numbers are low at first, during the experiment period, then jump once more users received the new reporting system.

As you can see, a lot of data is present. From the overview dashboard, we can see the number of reports received by site, by day, with comment, without comment, based on OS, etc.

We can also see whether an issue is already known or not.

Paul: How does QA fit into this new dashboard? We will have a shorter list that QA can test and try to reproduce. I'm not sure how we should tackle this or where to start. Triaging thousands of issues per day is not really feasible.

Honza: Right. So we need features that will help QA in their triaging process, so that they can focus on the valid, reproducible issues. The first approach is to see which domain has the most reports. From the data shown, the first indication is that YouTube has the most issues, so we should focus there. Sorting those issues somehow is another priority.
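The "which domain has the most reports" step can be sketched in a few lines. This is an illustration only; the `reports` list-of-dicts shape and the `"domain"` field name are assumptions, not the real dashboard schema:

```python
from collections import Counter

def top_domains(reports, n=5):
    """Rank domains by report volume, most-reported first.

    `reports` is a hypothetical list of dicts with a "domain" key;
    the actual data shape used by the dashboard is not specified here.
    """
    counts = Counter(r["domain"] for r in reports)
    return counts.most_common(n)

sample = [
    {"domain": "youtube.com"},
    {"domain": "youtube.com"},
    {"domain": "example.com"},
]
print(top_domains(sample))  # youtube.com ranks first with 2 reports
```

Sorting descending by count gives triage a natural starting point: the head of the list is where reproduction effort pays off most.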

Paul: Maybe a keyword in comments, a root cause, would help.

Honza: Right, we should look at the most promising comments. You mentioned the kanban board with top most issues.

Paul: We could filter so that many people reporting the same incomplete issue raises the urgency of checking that issue.

Honza: That is the future: lower quality, but a high number of reports. So that is our focus now, to implement features that let us make small progress.

Paul: Maybe the ability to search by comments and be ranked based on the information provided.

Honza: I agree, but this is more for statistics, to figure out where there could be more potential issues.

Paul: We could schedule a meeting and brainstorm ideas on how to make it better, to extract the most info from the reports we are receiving.

Raul: Will the ML bot still be active?

Honza: Sure.

Paul: We will have as mentioned, an internal meeting, and we will present the brainstorming ideas to the team regarding the features that would be useful.

Honza: Sure, both triaging and work tracking should be supported. How much time do you think you need?

Paul: Most likely this week.

Raul: Let's say we have everything in place. What if we get 200 valid reports a day? Most likely we won't be able to keep up.

Honza: We don't know exactly yet; we will have to see how it goes and decide then. We will identify the bottlenecks. Maybe we will see that we are receiving fewer valid reports, and upper management will need to take action on this.

March 19

QA help needed for some Meta bugs (Honza)

  • QA help needed for:
      * Bug 1778322 - [meta] Bad, removed or changed file extension after download
      * Bug 939897 - [meta] Support shifting / splitting flex items between continuations of a fragmented flex container (flexbox content truncated when printing)
  • Go over all linked bugs and test
  • Leave one comment in each meta summarizing the status of the dependencies

Honza: Go over these 2 meta bugs and test the dependencies and blockers.

Raul: Should we look at the closed or the open bugs?

Honza: Basically the open bugs, but if you see something worth testing in the closed ones, test them as well.

Quick update on the new Dashboard (Honza)

It is not deployed yet, but it should be ready for the next meeting. Maybe Dennis will do a quick demo in the meantime.

Paul: Doing queries inside the dashboard to export metrics and numbers would be nice.

Honza: As soon as you have the dashboard in your hands, we will look for and encourage feedback. This will be taken into consideration so that your work is more easily done.

Internal updates (SV)

Calin: Just making sure you are aware of my long PTO.

Paul: We will also have help while Calin is on PTO, to take some of the load off Radu and Raul.

March 5

Bugzilla triage (SV)

We have completed the Bugzilla backlog triage. Progress and data here.

Firefox for Android 126 will be moved to mozilla-central (SV)

Is this something that might affect our project?

Honza: I do not foresee any trouble. Maybe sync with Tom regarding the Automation Interventions run, just to be on the safe side. I will talk with Dennis.

New dashboard System (Honza)

Honza: I will share my screen, just to get an idea. We shipped the new reporting tool, available in Firefox - report broken sites.

Looking at the graph, we can see the number of reports received during the experimental phase, roughly 200 per day. Then we can see the numbers once the tool shipped in 123: about 10x more, which matches what we were expecting. A lot of data.

Tyler, from the Data Team, put together a dashboard to view what we are getting from users (broken site summary). The blue column shows the reports with comments, the red one those without. In parallel, we are trying to group them so that we can see the most useful comments. Ksenia is using ML to filter out reports without comments and surface those with a high probability of being helpful. We can also see that most Firefox users are running Windows.
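The with/without-comment split, and the "probably helpful" filter, can be illustrated with a toy heuristic. To be clear, the real filter is a trained ML model; this stand-in, and the `"comment"` field name, are assumptions for illustration only:

```python
def split_by_comment(reports):
    """Mimic the dashboard's split: reports with vs. without comments."""
    with_comment = [r for r in reports if r.get("comment")]
    without_comment = [r for r in reports if not r.get("comment")]
    return with_comment, without_comment

def looks_helpful(comment, min_words=4):
    """Toy stand-in for the ML 'probably helpful' filter: a comment
    must exist and carry at least a few words of signal."""
    return bool(comment) and len(comment.split()) >= min_words

reports = [
    {"comment": "video player shows a black screen"},
    {"comment": ""},
    {"comment": "broken"},
]
with_c, without_c = split_by_comment(reports)
helpful = [r for r in with_c if looks_helpful(r["comment"])]
print(len(with_c), len(without_c), len(helpful))  # 2 1 1
```

Even this crude word-count cutoff shows the shape of the pipeline: partition first, then rank what remains by expected usefulness.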

We can see the reports by OS, and by reports in the last 24hr, also per site. We can see the Trend and what is important to look at. Something similar might be available in the QA Dashboard.

The bottom piece allows filtering by URL and OS. Some things will be improved, this is just to get a general idea. It would be great to see the actual comment in the table.

Based on this information, I would like to hear feedback, e.g. what you would like to see on the dashboard: useful data, only reports with comments, only useful comments (high probability of being useful), automatic translation of comments, etc.

We will try to sort the issues automatically so the important ones show up on top, so we can connect the dots more easily.

The next step might be that we need screenshots to better understand the issue.

Dennis is working on the QA Dashboard; the only blocker is that it is not yet deployed. Soon we will have something for you to look over. Feedback will be most welcome; we need to figure out what features you need to understand and do your work.

Raul: Can the links be shared?

Honza: Sure, I will send a DM with them.

Honza: It will be quite different work. For example, on GitHub we get more valuable reports; the new reporter will get many more reports per day, 2k+, but with less info in each. We need to figure out how to be useful and get the most from the data. Our goal is to locate the major problems Firefox has when browsing the web. As soon as the QA Dashboard is deployed, we will let you know.

February 20

Welcome Radu to the team 🎉 (SV)

We would like to welcome our new team member, Radu. Yesterday he got the LDAP setup email; we're still configuring things, and he has started looking through the ramp-up documentation.

Mozilla email: Github account: Slack: railioaie

Let's add him to our GitHub repo with all the permissions, hand out calendar invites for meetings, and add him to the Slack channels.

Increase in the number of incomplete reports received (SV)

At the start of the week, an increase in incomplete reports (mostly with no description) was observed. Did we release the new reporting flow?

Honza: In the new reporting flow that users see when creating a report, there is a link, "send more info"; that link goes to , so I imagine people are using it to report issues as well.

Raul: That makes sense.

Honza: Yes, the new pop-up is just simple. Some people might get confused and quickly send a report on without filling in any data.

Bugzilla triage updates (SV)

Currently, we are at a 70% coverage rate. Progress can be seen here. So far, things are going smoothly.

Honza: Awesome. We are also doing a triage session today.

The meeting today will be shorter, with not too many agenda items. We will use the rest of the time from the meeting for triage.

February 6

Reporting tool updates (Honza)

Getting around 100 reports a day for a 10% user base; we will probably get 10x that on the full population.

Release of the reporting tool around 23rd of Feb. Once it is shipped the next step is to go for mobile as well. We gathered a lot of data but it's not deployed yet. We are making progress.

The Dashboard will be done in Q1 and we'll probably start using it in Q2. We expect Q2 will mostly be about identifying any issues with it and adjusting the flow to suit our needs. As soon as it gets deployed, we will share the link and start working together.

After the deployment we'll need to work together on:

  • identifying the candidates for QA
  • confirming them by finding STR
  • prioritizing the issues and diagnosing the problems.

We will have to be smart about how we put our time into it.

As soon as we have the triage dashboard, we will need some kind of metric to track whether we are going in the right direction. We have to come up with a reasonable number we can influence. The time spent on a website can indicate whether the site is broken or not. How popular is a website on Firefox vs Chrome?

Tomorrow we have an important kick-off meeting to try to come up with that metric. Many of the key data people will attend.

Tracking data will be available in Q2, and we will start using everything in H2.

Sometime in Q2, we will run a study on users via Zoom on how they perceive Webcompat in Firefox and in general.

January 23

Bugzilla Backlog triage (Honza)

Honza: We have an additional task now, we need to triage the list of opened bugs in our Webcompat repo.


We append the webcompat keyword. We are looking at the latest ones, recently updated. We will need help triaging this, starting from the end.

If they are reproducible, we need to see what label we are adding. It might be an OKR.

Paul: We will create a document to split the assignments between QA.

Honza: There is no deadline, it depends on your estimates.

Paul: Is this more important than Top 100? This might affect the estimate for the Top 100.

Honza: It depends what impact it might have. I am not sure of that, but we should definitely keep doing our Top 100 OKR. We can play with the definition, to see how quickly it can be done, and base our estimations on that. This will be further discussed in the other meeting. We should just burn down that backlog as we can, as there is no deadline.

Raul: How are these issues going to be split between us?

Honza: Yes, we will start from the latest, and QA will start from the bottom.

Paul: Will the labels be decided in the next meeting?

Honza: Yes, hopefully we can 100% be done with the labeling system. We are already using some keywords. Any report with the "webcompat:*" keyword is excluded from the list.

Raul: We had a plan previously for a similar triaging process. Is this something we should talk about in the next meeting with the team?

Honza: Yes, that would be a good idea.

January 9

2024 Plan for QA (SV)

Since we are unsure how our work style will change/be after the transition to the Dashboard, we have no new proposals for 2024. We will stick to the 2023 plan until then.

For 2024, here is the updated plan containing our OKRs.

Should we still split the OKRs into H1 and H2, meaning we would create a new project for H2 in our GitHub repo and import the tasks from H1? Or can we keep this one for the whole year?

Honza: I would wait; for Q1 nothing is changing too much.

We should come up with a metric to measure the state of webcompat, and in the next quarter move it in the direction we want.

Let's wait out Q1, and in the second quarter we will have a solid base for OKRs.

QA Dashboard

Do we have any updates on this?

  • The Dashboard is not ready yet; I assume we will need about 2 more months for it to be done.
  • There is a lot of data gathered through the Reporting tool from the Experiment that is live (see link from meeting).
  • So far, we receive on average 200 reports per day, and it's growing. Mostly noise for now, but we managed to gather some data and categorize some reports.
  • The experiment is set at 10% of the population with the new Reporting tool. We are now around 7m, and we might get to 10m. That will mean an increase in the daily reports received.
  • The experiment will run for a couple more weeks; it's due on February 14th. When it's done, we need to decide whether to keep the Categories drop-down, and ship to all users, likely in Firefox 122, sometime in February. By then, we expect the Dashboard to be at least usable.

Paul: Firefox 122 is set to be released on the 23rd of January, it will be hard to release in 122 if the code still needs to land.

Honza: Good point, that might be for version 123.

Paul: That gives us more time to decide what to deploy.

Honza: I'll ask Karen.

Paul: You can also use a Nimbus rollout to release it if it's only about flipping a pref.

Honza: Sure, most of the stuff is enabled by prefs. Good point

Labeling (Honza)

Honza: For the next 2 months, we still foresee GitHub triage. However, the data we produce with the Dashboard should be trusted info. Eventually, we will come to the platform team with data showing the most encountered and important issues, and how fixing them would impact our browser. We need to be able to explain this data; this is one of the reasons we are switching and collecting data. The big picture, the real state of Firefox webcompat, must be trustworthy, with minimal human judgment involved. Each category should have simple steps to reproduce, and it should be as clear as possible. This might be easier in the new system, where people can choose the category.

Paul: Do we want to automate this somehow, or will we still rely on QA to confirm the categories?

Honza: I'm not sure we can automate this, although it would be nice.

Paul: I think it would be hard to automate, as you'll need something specific like a keyword in the report to categorize it automatically. And each problem is specific.

Paul: Also, some users might not select the correct category.

Honza: There are 2 things: picking the category, for which we know the limitations, and creating labels. For now, we have 4 clear categories: Unsupported, ETP, Private Browsing, and Performance.

Paul: I think we should at least offer an "Other" category especially if it's mandatory to select one.

If nothing applies, there is no label. Whatever we have in February, feedback on it will be taken into consideration.

Honza: As a first update/enhancement for this flow, we are aiming to ship it for Firefox mobile as well. Also, we are thinking about screenshots provided by users, but that might be harder due to storage and other limitations. So: far fewer labels, but trustworthy ones.

Raul: Should we edit the labeling descriptions on GitHub?

Honza: I think for now we shouldn't edit anything since in around 2 months we are migrating to the new dashboard.

Raul: Should we create a trend label for private browsing issues? We normally have a label for private browsing, but not a trend label.

Honza: No action for now. Q2 will shed some light, as we will have more data then.

Google is going to kill cookies soon

What is Mozilla's position on this? Any plans to do the same?

Honza: I do not know what will happen on Firefox's side, but that will be interesting from the webcompat point of view.

December 12 2023

Top 100 Testing and Blockers (SV)

The number of discovered issues is lower compared to when we started (most are known issues - discovered in previous runs). Also, for Google and other sites where a valid mobile number is needed, we are blocked due to our test account mobile numbers being used for other test accounts. Some blockers have been observed for Financial sites where a valid banking account is needed, or where a mobile app is needed to perform various operations (Netflix, Google-related apps, etc)

Upcoming Holidays and Top 100 (SV)

Just a heads-up that this month we might not finish the whole Top 100 Test Suite for Q4 - Batch 1 due to upcoming holidays and PTO

Honza: Sure, please add the PTOs to the Calendar.

Work Week in Berlin (Honza)

Honza: I'll share the DevTools folder. This is a summary of planning for 2024. Our top-level goal is to improve web compatibility.

  • The first part is Webdriver BiDi

WebDriver BiDi will involve more browsers, like Chrome and Safari, as it is a protocol for cross-browser interoperability, for automation, and for Puppeteer. That would be the first use case: automation.

Raul: Will that involve a programming language?

Honza: Puppeteer is a library for JavaScript. It is in the experimental phase. We will be collecting feedback.

Raul: Could QA be involved in running and creating automated tests?

Honza: Sure, let's think about that for the next year.

Honza: The main goal is to build the BiDi protocol and build an ecosystem on top of it which could be used for running automated tests and debugging browsers.

  • The second Part of the Work Week is the Reactive Webcompat

Most people using the reporting tool are on Nightly now; partially this is on purpose, to limit the number of reports. We want to see the big picture: the state of WebCompat. The idea is to get more reports. This will be achieved with the new reporting tool, shipping in Firefox 121; the plan is to cover mobile as well, not just desktop. We also need to triage the incoming reports, and since the number is high (thousands), we are looking to automate some parts. That is addressed by Reactive Webcompat, composed of 3 parts: the Reporting tool, the Triage dashboard (likely just for QA), and the Knowledge Base.

Based on the data collected about sites where Firefox is not supported, we will know what actions to take to understand web compatibility around Firefox.

Collecting engagement data will show some clarity as well. How much time users are spending on certain sites, top sites popular in Firefox, etc. This might show us if Firefox is gaining popularity or losing popularity. For example, some sites might not be popular in Firefox but are popular in Chrome, and we need to see why.

Private browsing mode will be looked into as well.

We want to understand the state of Firefox from all of this. We can then recommend to the Platform teams what to work on.

  • The third part is Dev Tools

The same goal - we want to contribute here as well. Provide tools that enable developers to debug and optimize pages to work well in Firefox with a focus on performance and reliability.

We want to help them diagnose webcompat problems when they happen, quickly and efficiently. We do not want to complicate the existing system. Whatever we build should be quick and efficient, with small features that make a bigger impact.

Many issues happen on the production version of a site. Features will help developers diagnose bugs in production using Firefox DevTools.

Baseline: features that are well supported across all major browsers belong here.

We are looking to run an audit of pages regarding this.

Raul: For remote debugging, we have some feedback.

Honza: Great, let's talk about it in the next meeting.

Honza: Also, the profiler will be more user-friendly.

All of this culminates in making Firefox more compatible, and thus better.

November 17 2023


Paul: I've set up the metrics for the TREND OKR. We can go through each label in the list used by QA, to see which are relevant and which are not, and to provide more clarity on some of them and the process in which they are used.

Honza: What happens when you can use more than 1 label for an issue?

Raul: There are cases where we do that, and we have reports that we have moved to `needsdiagnosis`, as QA was unable to pinpoint exactly the label behind the report. For example, the video is not playing because the play button is not responding. That is a case where 2 labels can be used.

Honza: Is there a way we can improve this?

Raul: We could use just 1 label where more than 1 is needed. This means QA should stick to the single label they see as the best fit for the issue.

Honza: Can we create more labels?

Paul: If the need arises, sure.

Honza: I can see the `graphic glitch` label. What exactly does that mean?

Raul: Elements not being rendered properly, broken items from a graphic point of view, text overflowing, elements overlapping other elements, cut text, etc.

Honza: So, would it be better to rename this label? Something that is more related to the issue.

Paul: We could try using layout instead of the graphic glitch title.

Honza: Sure, that sounds better.

Honza: What tools are you using for the `performance` trend label? What helps you identify the issue as being related to `performance`?

Raul: We are using the Task Manager and the `about:performance` page to see and compare whether Firefox is using more resources than another browser.

Paul: Regarding the `other` or `unknown` label, should we keep it?

Honza: We could keep it and use it in some cases when we are not sure how to classify the issue, for example, if we are trying to pick the best fit between 3 or more labels.

Honza: We can amend the label list after this meeting. We will certainly use this system in the new dashboard, and most likely it will evolve.

Honza: We will discuss with the team regarding the labels used for TRENDS, to make further clarifications. Thanks for that.

October 31 2023

QA Triage Trends

Since our last meeting, we've started using the new format on how we submit the QA Triage Trends.

We are looking for feedback:

Regarding the trend metrics, we are still working on the document.

We were wondering if the total number of issues per label received each month, regardless of whether they are reproducible, would be enough for the metrics.

Paul: Should we go this deep, or is it enough to mention a link and a total issue count per milestone, instead of copying the link for each issue?

Honza: I am also thinking about the new system. We can search for individual reports by label. An important point: if we are not sure about a label, it is best not to label the issue. We should use categories only where we are certain. To answer your question, the numbers are more important: is the trend growing or not? Higher management wants some kind of metric to see the impact the platform is making. We are trying to understand all the user reports, estimate trends, see which issues have the most impact, and prioritize them so that we can give all the relevant info to the platform team. Whatever numbers we have should help higher management see whether the platform team is going in the right direction.

Paul: How do we measure the impact?

Honza: That is the big question. They need to know whether the actions we are taking make things better or worse. We cannot base this metric on the raw number of reports, more vs fewer, because the goal of the system is to have as much data as possible; it is in our interest to get more reports. We might want to identify things like Firefox not being supported and follow that trend, counting the affected domains rather than the reports. We can measure how quickly the platform fixes issues. We can do issue scoring, like the State of Webcompat report: the top 20 issues we think the platform should fix. Or the popularity of sites, using Firefox Telemetry and comparing it with Chrome Telemetry: are there sites used by a lot of people only in Chrome? We should come up with data like that to see if we are going in the right direction. The trends and the numbers are part of that goal. Can we spot trends and trust the numbers? The overall numbers indicate the trend, which is our main focus.

Paul: So we will show the numbers based on links, grouped by month.

Honza: Yes.

Gathering info for the New Reporting System (Honza)

Honza: For the new system, I am interested in QA's triage process. That means you would assign labels to issues. Right now you are using GitHub labels, which would likely remain the same.

Paul: Every bug tracker has a way to separate things, and labels are the way to go, free to create and easier to use.

Honza: When you split the work, as per your email that Dennis asked for info, what if there was one more person?

Raul: We would have a batch each day and split the issues in 3.

Paul: Or we could assign the issues directly to the QA member.

Raul: It would help us if we could mass-assign a batch directly to a member without assigning each issue manually.

Paul: Right now we are using keywords for specific OKRs. I'm not sure it's feasible to add a label for each issue. For example, to count the number of issues triaged in a week we currently use "[qa_44/2023]", or "[inv_44/2023]" for investigated, where 44 stands for week 44. The new dashboard should have something built in to count the issues received each week.
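The week-keyword scheme Paul describes lends itself to automatic counting. A sketch, assuming the "[qa_44/2023]" / "[inv_44/2023]" tags appear in issue titles (the function name and input shape are illustrative):

```python
import re
from collections import Counter

# Matches the triage keywords quoted in the meeting, e.g. "[qa_44/2023]"
# ("qa" = triaged, "inv" = investigated, 44 = week number).
KEYWORD_RE = re.compile(r"\[(qa|inv)_(\d{1,2})/(\d{4})\]")

def weekly_counts(titles):
    """Tally issues per (kind, year, week) from embedded keyword tags."""
    counts = Counter()
    for title in titles:
        m = KEYWORD_RE.search(title)
        if m:
            kind, week, year = m.groups()
            counts[(kind, int(year), int(week))] += 1
    return counts

titles = ["Video broken [qa_44/2023]", "Login fails [inv_44/2023]",
          "Layout bug [qa_44/2023]", "untagged report"]
print(weekly_counts(titles))  # qa week 44: 2, inv week 44: 1
```

A dashboard feature of the kind Paul asks for would make this bookkeeping unnecessary, but the same tally logic applies either way.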

Raul: These keywords help us a lot when counting our issues in different metrics/reports.

Honza: The current system is based on the reports received on GitHub. In the new system it will be harder to filter that out, because more reports will be received. Maybe we will have to mass-close issues that have the same domain.

Paul: The bot also closes issues that it finds inappropriate or irrelevant.

Honza: Right, we want to measure only the work that has been done by the triage team.

The issues will be grouped by domain. There might be some tooling that groups issues with similar descriptions. Once we have more data, we will start learning how to make the triage process much better.
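Grouping incoming reports by domain is straightforward to sketch. The report shape and helper name here are illustrative assumptions; description-similarity clustering would be a further step on top of this:

```python
from collections import defaultdict
from urllib.parse import urlparse

def group_by_domain(reports):
    """Bucket reports by host so duplicates for one site can be
    triaged (or mass-closed) together."""
    groups = defaultdict(list)
    for report in reports:
        host = urlparse(report["url"]).hostname or "unknown"
        if host.startswith("www."):  # fold www.example.com into example.com
            host = host[len("www."):]
        groups[host].append(report)
    return dict(groups)

reports = [
    {"url": "https://www.youtube.com/watch?v=abc"},
    {"url": "https://youtube.com/"},
    {"url": "https://example.com/page"},
]
grouped = group_by_domain(reports)
print(sorted(grouped))  # ['example.com', 'youtube.com']
```

Folding the `www.` prefix is a simplification; a production version would use a proper registrable-domain (public suffix) lookup.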

October 17 2023

QA Triage Trends OKR (SV)

We were wondering if this OKR has been helpful so far, and whether it would make sense to change a bit how we report these trends.

New format idea for reporting the QA triage trends:

- Adding a time frame for the trends (e.g. QA Triage - week 42)

- Highlighting the reproducible issues as usual. Or we could do a query for them if you don't need them individually written

- For other not reproducible/duplicate/incomplete issues, providing a link with a query


QA Triage Trends - week 41 (or 09-13.10.2023)

Issues where the browser is not supported


  • issue 1
  • issue 2
  • issue 3

Not reproducible/duplicate/incomplete: issues

Honza: We introduced the OKR to spot trends and patterns in the user reports; one of the best known was Firefox not being supported (still under investigation). That made us decide to watch for similar trends and repeating patterns, and investigate further to identify them. I agree we can improve on this. It is clear that the weekly reports are hard to use for spotting trends; it is a long-term OKR, and it might only become relevant after a month or two. The idea is to spot anything useful there. A different approach might be in order here; looking at the weekly report alone is not enough.

Paul: Maybe we could make a table, gather data each week, and compare later to see if there is an actual pattern.

Honza: When I think about trends, I think about labels, using them to identify issues. As soon as we have data, we can see how Firefox is affected, and whether there is a tendency toward a certain trend (for example, Firefox investing more in video codecs). Labels are the way to go.

Calin: We can put them in a table each month based on a specific trend, highlighting the reproducible ones, the non-reproducible ones, etc.

Paul: By labels you mean adding one for each issue, and then do a query?

Honza: Something like that.

Calin: We can compare by month.

Honza: Exactly, we can compare by month and see if the numbers are increasing or not.

Paul: A query will only show the number of issues; we can go further, gather the data in a table, and compare the numbers. Right now we have some labels, but new ones might arise in the future.

Raul: Can we use the labels that we currently have, or should we create new ones?

Paul: I think we could build onto it.

Raul: I've seen James has a list of focus areas from Interop, maybe we could use that list for naming the trends.

Honza: Do you have an example for naming?

Raul: For example there's CSS, maybe we could use that as well.

Honza: How do you make sure you don't mislabel them? Some issues require diagnosis to be sure.

Raul: That's right; what we meant is to use it just for inspiration for our trend labels. Normally we categorize them only by observation.

Honza: The weekly OKR regarding trends could be used as a journal, not necessarily mentioning the issues themselves but documenting what we have observed every week, like "we've added a new trend label", etc.

Also, the experiment is set to be in place by version 120 for 10% of users, so we are expecting a lot of reports once the new reporter rolls out. January next year is planned for the switch. The reports will be a lot less detailed. The Trend OKR will come in handy there. The triage will change a bit: we won't have a feedback channel, it will be one-directional. Pro: a lot of data; con: fewer details. We will see how to integrate the Trend OKR there. will still be there, but we should focus on the user reports and not get overloaded with work.

We are not going to triage everything; we will use a dashboard which will prioritize which ones should be triaged.

Paul: How should we prioritize the issues, based on the info received?

Honza: Yes and also how good the descriptions are.

Raul: Dennis said the issues will be completely private. Normally we have the bot closing automatically some issues that might be spam.

Honza: Yes, for the new dashboard, the bot will learn as you close incomplete issues. It will be a lot of learning.

Flooded with SeaMonkey issues (SV)

We have received a large number of issues that reproduce only on SeaMonkey. We've moved them to Bugzilla accordingly and instructed the users to file new bugs there.

Honza: Patricia is working on a guideline document for this specific situation, so that will come in handy. This might take some time, estimated by the end of the year.

Calin: Is this change going to affect webcompat or Bugzilla?

Honza: It will be a global document within Mozilla, and it should be up on . We can point to it at any time.

Changes to our current OKRs and workflow in Github when the new reporter and dashboard will be up and running (SV)

What changes can we expect once the new process is in place? Are we keeping the same processes and OKRs, or are we going to let some of them go and introduce new ones? Will GitHub be kept strictly for user reports?

Honza: We will not need GitHub anymore. When we have the new system, we will gather all the data there, and the dashboard will prioritize the issues where we have the biggest chance of triage success.

Different features will be implemented in the new dashboard (e.g. labeling issues).

So no more GitHub triage; everything will be done on the new dashboard. There might be an overlap period with the new dashboard, though.

October 3 2023

Higher number of incomplete issues than usual (SV)

Could this be caused by the new reporter?

Honza: We did an experiment a while back, but it was just for a week, and the new reporter is not ready yet. There is no simplified reporter yet.

Calin: We had a flood of random issues as well: unusual operating systems, sites, etc.

New dashboard and reporter (Honza)

Dennis mentioned some updates, but we are waiting for a sync meeting.

Honza: He needs feedback on the dashboard: how you split the work, how bugs are marked, etc. Hopefully we will set up a meeting this week when Dennis comes back and discuss in detail. I will reschedule the meeting for tomorrow at 3:30 (4:30 your time).

Top 100 testing - updates

Raul: For the top 100 we haven't found that many bugs lately. Firefox might be improving.

Honza: Great.

September 19 2023

Moving any video playback issues reported on to Audio/Video-Audio/Video playback Component on Bugzilla (SV)

We have been advised on Slack, after several reports came in that YouTube playback is failing (see: ), to move reported video playback issues to Bugzilla even if we can reproduce them.


Is this something that we should amend in our current process, as we did not move issues that we could not reproduce?

Honza: I think it is case by case. Not for all sites, but for sites where the potential is high, user usage is high, and the browser score is high. It is hard to give an exact rule, but if it is a popular site, that is a strong indicator that it should be reported.

Paul: As they've said, sometimes it is worth investigating, bugs are cheap.

Honza: Sure.

Raul: Great. So our focus will be on top sites for such issues.

Honza: That is correct.

Finding issues when performing Top 100 Exploratory that are already reported in GitHub (SV)

When performing Top 100 Exploratory, we have found issues that have already been reported in GitHub but are not investigated. Should we submit a new Bugzilla report and close the old GitHub report in favor of the newly created Bugzilla report?

Honza: Are they marked as needsdiagnosis?

Raul: Yes.

Honza: Then I would wait. One of our goals in H2 is to diagnose them. It is best to wait for now. There is no need to file a new Bugzilla bug. You can update with a comment if something useful comes up (new info).

Webcompat Reporter

When the new tool variant lands in Nightly, it will probably need QA testing. - Do we want this to be tested by the WebCompat QA or by the Desktop QA team that is testing and validating Firefox builds?

   - In theory, we should probably make a combined effort in this case, as the WebCompat QA has the product knowledge, while the Desktop QA team has the browser testing knowledge.

- For QA to happen, [a request in the Quality Assurance project]( needs to be filed as "Request" type, containing any kind of documentation available, for QA to understand the level of testing needed and the UX flow behind.

   - [Here]( you have the documentation that explains each field in a request, the importance, and what it needs to contain.
   - For any additional questions you can ping me (:poiegas) for help.

- For the Experiment part we'll need a second ticket filed in the same Quality Assurance project, but it needs to be filed as an "Experiment" type.

   - You also have the details of each field explained in the [same documentation](
   - Shell Escalante is a good point of contact if you need additional details about the whole process.

Honza: Update on the reporter: Tom is working on it, and the plan is to ship that feature in version 120, so that should give us a bit of time to set everything in order.

September 5 2023

NSFW #needsdiagnosis reports (SV)

We've agreed to no longer report any NSFW issues and to close the incoming NSFW reports from the users.

Should we close as #wontfix the existing NSFW reports that are under the #needsdiagnosis milestone as well?

Issues [list](

Honza: Yes, we should close them. Agreed.

Paul: We also amended the issues from Bugzilla, where we had a similar situation, with users asking why we are closing NSFW issues.

All hands update (Honza)

I will provide a link to a folder with all the important material that was discussed at the All-Hands.

August 8 2023

Updates (Honza)

Honza: Any updates since we last spoke?

Raul: Yes, we are back on track with the number of reports. The numbers are back to normal, both when starting the work week and when ending the work week (Triaged).

Honza: I see, so the numbers of reports are back to normal. What about the Top 100 websites testing?

Raul: Yes, the number of issues has returned to normal. Regarding the Top 100 testing, we are on track; last month we finished the batch a few days early.

Honza: Any changes in the trends, some special observations?

Raul: Not really, everything is back to normal again regarding Trends and the number of observed issues. There was a small hiccup with Firefox failing to play DRM content (video streaming), but that was resolved in the next Firefox Nightly update. We have received webcompat issues regarding: [1845917](

Honza: Anything related to Top 100?

Raul: Yes, so far we have completed one full batch, we are currently running the batch for August, and based on our observations we will complete this batch as well. So far, we have found mostly UI bugs, nothing major. Except for Reddit, where I think we might have a point of contact, if that is of interest. See: [1847541](, [1847702](

Raul: Any updates regarding the contact we have at Reddit? Maybe they could help us with these issues.

Honza: I have no updates about the Reddit contact at this time.

Paul: I have promised to come up with a list of issues and search queries regarding the Top 100 websites, this is just a draft but take a look and let me know if you have any questions:

Honza: Perfect, thanks.

Honza: Regarding the credit card, I have good news. I've managed to get it. I will share the credentials; please keep in mind they are sensitive and cannot be shared with other teams.

July 11 2023

Low number of opened issues for this Monday (SV)

We have seen a significantly lower number of new issues this Monday, at least 50 issues below the average, since the dual form was deployed this Wednesday for . When can we expect to triage them?

Honza: Yeah, most users reported the bugs with the simplified form, which ends up in our database for now. The next step is to provide a spreadsheet for you to look over. There might be an option to sort reports based on whether they have a description or not. Dennis might come up with something quickly. If it turns out it needs more time, we might roll back the change on . We will see. From your side, any feedback on how we can improve the process is welcome.

Expected high numbers of incomplete issues with the new report form (SV)

With the new report form, we may end up with a high number of incomplete issues. We are thinking of reducing the time frame after which we close issues as incomplete, from 10 days (12 days for user replies) since our last comment on the issue to 5 days (7 days for user replies). Should we keep the same approach we used in our current triage process, where we cannot extract any relevant details from incomplete issues, or should we try to dig deeper than usual?

Honza: We are looking to group the newly received issues and sort them.

Paul: For the newly received reports, we might end up with them piling up in our repository at the end of the week (opened issues waiting for info).

Honza: I think the goal won't be to go over everything; there will be some grouping by domain. And we will prioritize the reports with the most info provided. It all depends on how it goes.

Paul: That's why we were asking; we are hoping for a simpler triage process from the QA point of view.

Honza: Hopefully we will have the spreadsheet soon.
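
The grouping Honza describes can be sketched roughly like this (a hypothetical illustration, not actual dashboard code; the report fields and domains below are made up):

```python
from collections import Counter

# Hypothetical sketch of the grouping idea: bucket incoming simplified
# reports by domain, rank domains by report count, and prefer reports
# that actually carry a description.
reports = [
    {"domain": "youtube.com", "description": "video stalls"},
    {"domain": "youtube.com", "description": ""},
    {"domain": "example.org", "description": "layout broken"},
    {"domain": "youtube.com", "description": "player error"},
]

by_domain = Counter(r["domain"] for r in reports)
with_info = [r for r in reports if r["description"]]

print(by_domain.most_common(1))  # [('youtube.com', 3)]
print(len(with_info))            # 3
```

The same counting could be done in the promised spreadsheet; the point is only that domain frequency plus "has a description" is enough to build a first triage queue.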

New simplified report form - Operating System detection (SV)

Since the form is simplified, how will we figure out on which operating system the issue happens? Will it be autodetected and included in the bug report once it is submitted?

Honza: Yes, I think it is included by looking at the patch on:

Honza: The report will also include the prefs.

Paul: We had something similar in Bugzilla, where data is extracted regarding user ID and Operating System.

Paul: Our concern was to see the exact operating system where the issue is reproducible, not just the operating system from which the issue is reported.

Honza: I see, that is a good point, I'll raise this concern.

Raul: Wouldn't it be better to include an optional screenshot field too? If we end up doing manual triage on the issues coming from the simplified form, the screenshot is usually an important part of figuring out the issue.

Paul: As an optional field, it would be useful to have the screenshot part at the beginning of the report, not at the end like we currently have.

Honza: There is an ongoing project where this option might be added to the new reporting tool in the Firefox browser. The simplified report is a way to start building data for the new reporting tool in Firefox.

Smoke testing for (SV)

With the new updated form, should we perform a smoke test on once all the data needed is extracted and the final conclusions are made?

Honza: Good point. The only question now is to see if we roll back or not. Once we have everything we need, that would be great.

Paul: What would cause a rollback?

Honza: That's only if Dennis needs more time to put it in place (e.g. 2 weeks).

Paul: So the simplified version is here to stay, but a rollback would give the dev team time to finalize the form.

June 27 2023

Express VPN Subscription renewal (SV)

If possible, we would like to renew the subscription to the VPN provider, as the number of devices that can connect simultaneously is far better than other competitors' (sometimes we need 2 devices per person connected at the same time) and the list of available VPN locations is extensive.

Paul: I'll check if the VPN was bought by Mozilla or Softvision and I'll update you on that subject.

Honza: Ok, let me know.

Top 100 websites testing (SV)

Paul: Calin and Raul have been on PTO this month and the progress doesn't look that good so far. We will have to see how it goes in a month where they are fully available, most likely July. If it's not doable we will have to adjust the workload accordingly, maybe extend the time allocated for a batch.

Honza: That sounds fair, it is probably because of the PTO but we will see.

Credit Card update (Honza)

Quick update on the card issue. We are currently blocked on getting a signature and a beneficiary for the credit card. At the moment, we are looking for someone based in the US to pass the relevant checks.

Raul: We are not looking for a card with actual funds on it, just to pass the credit card check on certain websites.

Honza: So for now we are looking to have a card with some funds on it that all the other teams might use, as others are interested.

Update on the simplified Webcompat Report (Honza)

We are actively working on this, to simplify the process so that more users finish the reporting process. This will be implemented in the future reporting tool of Firefox, using BigQuery data and by analysing the current reports. We might see an impact on the TRENDS OKR.

Paul: Simplified reports will end up with QA?

Honza: The current reports will end up on GitHub, and the simplified reports will be stored in BigQuery. For now, QA will look at the complex (current) reports and the dev team will look at the simplified reports. We will provide a spreadsheet to which the simplified reports will be added for analysis.

Paul: I was curious because if the reports are more simplified we will probably have to ask for more information when triaging.

Honza: Yes, we will probably have to investigate more in order to figure things out exactly. Currently, for the simplified reports we might just get the URL and that is it, but we can extract other data from them.

Paul: Maybe there is a way to see the prefs the user changed when they submit a simplified report.

Honza: I'll check some more and give you an update.

Honza: We will have more reports but with less details obviously.

June 13 2023

Testrail walkthrough (Honza)

Honza: Could you show me how testrail works and what's the workflow?

Calin: Sure, I'll share my screen. Here's the link for the WebCompat project:

Honza: Great, thanks.

May 30 2023

Top 100 testing (SV)

Note: Log in to TestRail before accessing the link

We have started testing for top 100 sites using Test Rail. Test Run and Test Suites are available here:

Currently we are running Batch 3 after talking with the Desktop Team to avoid overlapping, which is due by the end of May. Since we have started later, Batch 3 will not be completed.

Honza: So there are 3 batches?

Raul: Yes, to avoid overlapping with QA Desktop.

Honza: So 1 batch per month? What about the OSes tested?

Raul: Yes, 1 batch per month, and we are testing on Windows 10 and Android, but if we have more time we will test on macOS as well.

Honza: What is the time gap between starting new batches?

Paul: So 1 batch per month but the gap between us and the QA Desktop is 2 months.

Paul: Should we log the issues in Bugzilla or GitHub?

Honza: We want to move everything in Bugzilla, so probably Bugzilla.

Honza: After all this testing is done, where will the reports go?

Paul: The webcompat bugs found by the Desktop QA team would end up on bugzilla.

Honza: I see. We are working on a knowledge base, so any reports from webcompat would end up in the Bugzilla database in the future anyway.

Raul: I have a question. We've already found 2 bugs based on our testing, submitted via reporter. Should we move them to Bugzilla from now on, under the Webcompatibility Product?

Honza: So we have the Knowledge base component, where we gather a list of known compatibility issues, to help us understand issues better. This will be used to gather data for Telemetry and for the future release of our new reporter in Firefox. For now, if you find any issues when doing the OKR - Exploratory Testing for Top 100 sites, you can submit them in Bugzilla under the Desktop or Mobile component, as we will keep for now just for users.

Honza: We should have all the issues logged into Bugzilla, as we are actively moving issues from webcompat to Bugzilla.

Paul: Do we need to help with QA on the new simplified report flow?

Honza: Yes, that's a good idea. We will keep you in the loop regarding the new Firefox Reporter, regarding updates and future tasks.

Paul: Will this increase our workload when the reporter is launched in the release version?

Honza: If that is the case, we will make a plan. I will share some documents with you to get a better idea of the new reporter. document_1 document_2

Honza: The reports received now are from developers and advanced users, as most of the reports are received from the Nightly version, so they are pretty biased. We know that user reports are the best source of data.

Paul: The key is that with so many users, we also have many different configurations and setups.

Honza: Only 2-3% finalize the webcompat reporting process due to the number of steps required to submit an issue. We will simplify the process for them, but it will be harder for us to replicate the issue.

Calin: We could also add checkboxes to see if the user has add-ons/prefs active.

Honza: Exactly. It will not be just a URL and description, but more, such as OS, graphics card, etc.: the context required for environment setup. Something very easy for users to check, to provide any useful data. If we make it more complex, people will not do it.

Raul: This might also be helpful for the other teams, like Add-ons team.

Honza: Exactly. The Performance team might be interested, the ETP team as well, based on what we can see in the data. If we are able to identify a signal in the reports, we can highlight it to the relevant team.

Honza: Getting the reporting tool into the release version will take time, maybe months. We are thinking of introducing the simplified version in , like submitting the report immediately instead of going through the 7-step process. It will go straight to the database.

Paul: Is help needed from the QA to test the changes?

Honza: That would be great.

Paul: Will you start to simplify the process now and push the changes?

Honza: We will keep the actual reporter and introduce new features, giving the reporter 2 options: the simplified tool or the old complex reporter. The simplifications will happen just on

Paul: And the button will be added in the browser?

Honza: Correct.

Raul: Will all the versions benefit from the new reporter?

Honza: Yes. Android is still under discussion.

Raul: We can also test the changes that will be applied in

Honza: Yes. Ksenia is actively working on this.

Paul: Will the 2 variants, complex and simplified, still exist together?

Honza: That will depend on the outcome.

Paul: We should at least offer users the simplified reporting first, and then the complex one if they want to add more data.

Honza: Agree.

Honza: At some point, the new reporting tool will be the final path. Not sure what will happen to

Honza: The biggest challenge will be to adapt and interpret the data.

Paul: Will we have a bot, like on GitHub, that will cut down the noise from the new influx of reports?

Honza: We are attempting to reuse the principle behind the bots we use on GitHub: a way to identify the top X pages, based on whether the page is standard or not supported by Firefox, to filter the noise, like we did for GitHub.

Increase in issue engagement and number of issues (SV)

There has been a noticeable increase in the number of issues received this week. Most likely, as seen around the web, Dark Reader is causing issues with Firefox.

Also, some users have started to engage with our reports, and sometimes to interfere with our diagnosis process.


Honza: What are they commenting about? Are they providing useful info?

Raul: Some yes, but most of them just clone our responses.

Honza: The team will take care of the ones that are irrelevant. You might want to communicate with the ones providing useful information.

Paul: So we already received bugs regarding Dark Reader? I observed that it started to mess up many websites (I'm using it, by the way).

Raul: Calin observed that some issues are fixed if Dark Reader is disabled.

May 16 2023

WebRTC issues (SV)

Since WebRTC is getting more visibility lately, is there a way to see in Firefox (maybe in about:webrtc) if a website is using it? Would it be useful to add a Trend label for specific WebRTC issues, for future reference, if we can differentiate issues specific to WebRTC?

We have documented around it, and it seems to be used by web apps for voice/video chatting that use your webcam, headphones, and microphone, or for P2P file transfer. We have an idea of what related issues should look like, but maybe there's a better way to find out if WebRTC is used.

Paul: The mobile and desktop team are doing dedicated tests for webrtc.

Paul: We have an idea of what WebRTC is and what to look for, but we were wondering if there's a way to check exactly whether an issue is WebRTC or not.

Honza: I don't have immediate input regarding WebRTC, but let's put a topic in the meeting we have later with the team.

Top 100 testing (SV)

Paul: QA Desktop wants to reduce their workload regarding this topic. They will go down from their current number of sites tested.

Paul: We will allocate 1h per day for testing the top 100 sites because the main task is to triage bugs and that's more important.

Honza: I understand that you would spend roughly 1h per day on this OKR, meaning that in one month you will cover a batch.

Paul: Testing is organized by us, since we took over this OKR from other teams, to ensure that the available testing time is used to its maximum.

Honza: Something to take into consideration is whether the triage process time will increase.

Paul: As previously discussed we will test mainly on Android, Desktop-Windows and if the time allows it we will test on Mac as well.

Honza: Every month we should be able to do one subset from the document.

Paul: Initially it was planned to test this twice a year.

Honza: So that means roughly one site per working day.

Paul: We are aiming to verify one website in one hour per day.

Honza: Is that doable?

Raul/Calin: It looks doable. We will concentrate our efforts first on mobile and desktop Windows. If time permits, we will extend the desktop environments for testing.

Honza: I have seen that is not included, just in the list.

Paul: We have a list for both desktop and mobile, and if the site has the mobile version, when testing in mobile, we will cover the mobile version as well.

Honza: What are the next steps?

Paul: We will test the list accordingly, to avoid overlapping. It would be better to have a pause between lists, to ensure we avoid overlapping.

May 2 2023

Interventions testing possible restrictions (SV)

After the last run of intervention tests, both manual and automated, we have made notes in [this document]( regarding which websites might not be testable using automation, due to restrictions (2FA, geolocation restrictions, environment).

These possible restrictions were observed mainly when running manual tests.

Honza: That looks good. Can you run the list by Tom as well, and keep him in the loop?

Raul: Will do.

Honza: How many are automated and how many are manual tests?

Raul: All the entries in the list are run manually. When running the automated tests, about 50+ are run.

Honza: Is there a reason for that?

Raul: Tom is working on setting up the mobile automated tests, so for now we are only running them on a desktop enviroment.

Honza: I see. Are the manual tests run per environment?

Raul: Some are run on all platforms, some just on mobile, others just on desktop.

State of Webcompat report (Honza)

Honza: There will be a summary there. James mentions QA, mainly regarding trends. We are thinking about a summary/conclusion for the mentioned subjects.

Do you think we should mention other issues?

Raul: Firefox not being supported, reports for a specific site like we have seen for Reddit, where we have a communication channel with them. We can also extract issues from the Trends OKRs by using the assigned labels.

Honza: That sounds great. Do you think we can do a top of issues based on Trends?

Raul: We can see which labels from Trends received the most issues and make a top 3.

Honza: Could you bring this up in our upcoming team meeting?

Raul: Will have it ready by then.

Updates - Top 100 (Honza)

Paul: We are still waiting for feedback from the team regarding what would be important in the webcompat area and what we should focus on. Maybe we can use the same list as the desktop team, or do another round of checks to see if the list entries have changed. We will take a look to see what the list looks like now vs. how it should look with the current top sites.

Honza: I like the categories as well.

April 21 2023

1. Top 100 testing(SV)

It seems that the Desktop QA team is also running tests checking build compatibility with websites. - I've discussed with them to see if they want to continue doing this, so we won't overlap. - We'll have a meeting on Wednesday to discuss the coverage.

Paul: Initially I discussed this with the mobile team and we decided to move the task to the WebCompat team. We are still waiting for a response from the Desktop QA team on whether they would let us do the task instead.

Honza: So there's FF desktop QA doing similar things like testing Firefox builds and there's also the mobile team doing web compatibility testing.

Paul: The mobile team won't be doing that anymore since they are switching...

Honza: So mobile would give up testing compatibility on those websites? From my understanding, different builds are available for different OSes (mobile and desktop).

Paul: When this OKR was created, we did not consider that other teams were testing this, since it seems more like a webcompat team thing. Now we have to decide if they will continue testing this, or if we should take over this OKR from the other teams inside the organization.

Honza: Given the fact that 5 platforms would be tested, how do we see this from the workload point of view?

Paul: Other teams run the tests in one quarter for the top 100 sites, 30 sites per round. Their estimations show it takes them around 20 minutes per website, 11h for 32 websites. For about 100 websites that would be around the 33-hour mark, but maybe we should test more extensively, going deeper than they do, so that would mean double. The plan will look more doable after we talk with the desktop team.

Honza: How much time would be left for other things, since this is not the only OKR? How much time should we spend on this? Unless they are testing different features.

Paul: I think it is general. I think we could test the top 100 websites twice a year.

Raul: I've seen from their previous tests that they test features such as saved logins, but from our point of view that's not webcompat.

Honza: We will lay out a plan to see what is covered from our point of view.

Honza: Is mobile covered?

Paul: Both mobile and desktop.

Honza: How much time is needed for mobile?

Paul: About 20 minutes per website. We need to cover mobile as well. We plan to run it twice a year. We will see what the stakeholders have to say.

UX Research (Honza)

Honza: We have the results that summarize all the responses given by people. James was looking at that document and provided a small summary. Since the feedback was related to social sites, I was curious if there was something overlapping, some common ground, to identify the same set of priorities/conclusions. I did not see any overlaps there, but maybe there is something I am missing.

Paul: From what websites did the reports come?

Honza: It was user research; no specific sites were targeted. The feedback given in the survey is related to webcompat. I was curious if there are any overlaps. But maybe we could coordinate a little more, sync with these efforts, and see if people are saying the same things as our findings from our testing.

Paul: We might not reach the same conclusions due to hardware availability, as users have a ton of different configurations.

Honza: Maybe we can get the results once the survey is completed and adjust our testing accordingly, maybe concentrating and narrowing our efforts to the sites we pick and the testing we are doing.

Paul: That sounds like a plan, as we can concentrate on the areas flagged in the reports. We would know where to stress the application more.

Honza: I will try to keep you informed about the results.

Raul: As Paul said, we can either run tests for the top 10 or test a specific feature that fails on certain websites.

Honza: [This is the output]( from James, and this is the original [document](

Interventions testing (Honza)

Honza: Are there any differences between the automated tests and the ones that are run manually?

Raul: The automated tests are the same as the manual ones. We have runs for manual and automated tests. Some test runs require a 2FA authenticator, and these will fail when running the automation suite. Geographical restrictions, environmental restrictions, and incomplete login credentials are also taken into consideration for automated runs, as these will fail if the correct setup is not available.

Paul: Could we mark which tests could be run manually and which ones could be run automatically?

Raul: We run the automated tests at the end of the manual run for interventions. Usually on the first automated run there is a high number of failed tests, which is lower on the second run.

Some tests need to be run manually because they require authentication and/or VPN.

At the end of the runs, we have a clear view of why some automated tests fail.

Honza: Then as Paul says, we should make a list of which ones can be run manually and which ones can be automated.

Honza: Could you make a new column in the doc and classify which ones are which?

Raul: Sure, we can try to do that. Usually Tom knows better which ones can be automated and which ones have to be tested manually.

Honza: Yes, Tom knows more about this, so please feel free to contact him and sync on this subject.
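
The manual-vs-automated split discussed above could be tracked with something like the following (a hypothetical Python sketch; the site names and restriction tags are illustrative, not from the real intervention suite):

```python
# Hypothetical sketch: tag each site check with the restrictions that
# block unattended runs (2FA, geo-blocking, missing credentials), then
# split the list into an automated batch and a manual batch.
RESTRICTIONS = {
    "example-bank.com": {"2fa"},           # needs an authenticator code
    "example-stream.com": {"geo"},         # geo-blocked in the CI region
    "example-portal.com": {"credentials"}, # no complete test account
}

def automatable(site: str) -> bool:
    """A site can go in the automated run only if nothing blocks it."""
    return not RESTRICTIONS.get(site, set())

sites = ["example-bank.com", "example-news.com", "example-stream.com"]
auto_batch = [s for s in sites if automatable(s)]
manual_batch = [s for s in sites if not automatable(s)]
print(auto_batch)    # ['example-news.com']
print(manual_batch)  # ['example-bank.com', 'example-stream.com']
```

A column like this in the shared document would give exactly the manual/automated classification Honza asks for.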

April 4 2023

Google offline feature (Honza)

Honza: This seems done. I've shared this with others, and now I am waiting for feedback from Joe and Maire. They are interested in a bigger analysis, to help us better understand what is broken from the webcompat point of view, like we've seen from the UI point of view, or whether we can reveal more from future testing.

They want to know how expensive it will be to fix these features. Maybe the next step falls to the diagnosis backend. Could further testing help the diagnosis process, or maybe even find the root cause?

Raul: We can provide more testing, but we do not have the knowledge to pinpoint the root cause.

Paul: We'll probably need Engineering help to figure that out. Even for the initial testing, Dennis helped us a lot to figure out the problems.

Honza: Fair, then we shall consider this done from the QA perspective.

Testing Methodology (Honza)

Honza: I've created this folder so we can put other relevant documents here besides the ones we already have, so they will be easier to find. I was looking at the document with the top 10 social media websites and I think we can improve the structure. I've created a guideline doc with the info that would be useful to have in our reports.

Raul: Should this document help us with new OKRs when we are testing different websites?

Honza: Yes. Those kinds of documents should describe the methodology of how those features/websites were tested, so it is easier for others to understand what's happening there and what the situation is.

Honza: Looking at the results, I can see there are a lot of links redirecting to different issues, but that's not too insightful. It would be more helpful if you made a summary for each site: which kinds of issues are most concerning for that specific website. We could make this summary better by focusing only on the P1s after the Engineering team has triaged the issues.

Raul: There's one issue here: we don't really assign priorities ourselves. That's normally done by the dev team. Not all the issues are truly webcompat; some are ETP issues. Should we categorise them?

Paul: No, just focus on the P1 issues after they are triaged by the team and the priority is added.

Honza: This document is not exclusively for the webcompat team; it is higher level than that. The goal is not to help with triage. It is for people outside the project to understand the problems going on with those websites. What I was editing the most in the document is the context about those websites.

Paul: We'll provide a link with all the issues we are referring to and do a summary of them, maybe even categorize them if we see a pattern.

Q2 Proposal Review - Top 100 Sites (Beaucoup list) (SV)

As discussed, we are planning the following [OKR](

Is there an up-to-date list that we can use? Also, we are thinking of running the tests in TestRail, for each domain, where we can group sites (News, E-learning, Shopping, etc.)

We have this test suite used in the past, which we can tailor to be up-to-date and relevant: [link](

Note: The link should be accessed after logging in. If not, the link has to be accessed again after performing the login in order to see the test cases.

Testing will be conducted on desktop and mobile. Should we include iOS + Android on the mobile side, and Mac + Windows on the desktop side?

[Paul] 1. I've discussed with the Mobile QA team and they only covered Android, so we can scratch iOS. 2. Regarding the desktop side, I think it's the WebCompat team's role to decide what desktop platforms to include in our testing, based on markers like user numbers, platforms with the most issues, etc. 3. The Mobile QA team was testing websites by region, but that is not mandatory either; they only did that because the Alexa top list they were using split websites by region. Also, I think it would only add complexity for us, so I don't think it's worth it.

Raul: In the past we've used Alexa for the top 100 sites; last time, the current Beaucoup list had only around 20.

Honza: We actually have 100 websites now from the Beaucoup list, but those 20 were curated.

Honza: We can use the current list from Beaucoup, or we can make our own list.

Paul: The Alexa top sites list was based on how often the pages were accessed, but that is not available anymore. Maybe webcompat has another list that we might use?

Honza: We can also use the list made by Tranco and the spreadsheet from Beaucoup, and compile a list that reflects the top 100 sites today. There is also the HTTP Archive.


Honza: Best would be to talk to Ksenia about this as well.

Paul: We will look over it and see if we can figure out an overlap; if we don't, we will ask for Ksenia's help. What would also be helpful is to find out which Desktop platforms we should focus our testing on. Maybe we could pick the most used ones, or the ones most reported issues come from, or some other classification.

Honza: Next week maybe we should open a new topic with the team regarding this subject.

Honza: So we are analyzing which sites are not supported, and lately we are seeing an increase in them. This should also be one of our main goals: checking whether certain pages from the top 100 list are supported by Firefox.

Paul: By doing this regularly we can also pinpoint regressions. If possible, we are planning to do this twice a year.

Honza: Maybe we can also look at a trend in this case, to see how the webcompat situation evolved from one run to another.

Paul: Yes, we can have a summary in the report, and we can see the differences between runs.

Honza: That sounds good; it sounds like a good reason to do this OKR.

March 21 2023

Google drive offline mode (SV)

Google Drive requires an `add-on` that is specific to Chromium browsers in order to be used offline, and since Docs/Sheets etc. rely on Google Drive, we are not able to test those either.

Raul: Chrome does not need the `add-on`; it works without it. Edge needs it for the feature, which becomes available once it is installed. However, Firefox with the default UA does not show the "Offline" feature for Google Drive when following the above link. Changing the UA in Firefox to either Edge or Chrome (it is important to be signed in prior to changing the UA) shows the `feature` in the account settings. Trying to enable it opens a pop-up that redirects to the installation page of the add-on if you click the `Install` button, but there is no option to install the add-on, as it is specific to Chromium-based browsers.

Honza: I see, so accessing Gdocs and Gsheets is not possible.

Honza: It would be nice to have a separate document recording all the findings on this.

Raul: When should this document be ready?

Honza: If you can have it ready for the next meeting, that will be great.

Raul: Will do.

Social Media Top sites Exploratory Testing (SV)

Testing is on track and will be completed on time. We have substituted some pages that cannot be tested on Mobile and Desktop (Quora instead of Snapchat).


Raul: We've replaced 2 websites we couldn't test from the list with and

Honza: You can not test Snapchat?

Calin: Mobile requires the app, and desktop is not supported (known issue):

Honza: I see. Is this the final document?

Raul: Once we are done with the testing, we will export all the data we found in a new document.

Honza: Alright, a summary of the findings would help explain the situation. You could insert different screenshots, different issues related for each webpage, any other data that might help.

Raul: What about the issues that were previously reported before the test was done?

Honza: You could add/mention them in the report as well, especially for. The reason why I am asking is that we now have a contact for, and a separate communication channel with, them, so all the known issues will come in handy for this.

Raul: We will have the report ready in the next quarter. Basically, we finish the testing on the 31st of March. We can have a draft of the report ready by the first Monday of Q2, for the first team meeting of Q2.

Honza: Ok, sounds good.

Honza: I could create a structure for the report, to make sure we cover everything. Basically, what we are looking for is: issues that are reproducible on all browsers, issues that are reproducible on Firefox but not on other browsers, and mobile issues where the page requires installing an app to use a feature. Like where we had `Can't test`.

February 21 2023

Q2 Proposal: Top 100 most visited sites

Could we test this in the upcoming Q2 period?

Things we had in mind for testing:

  • The page is rendered correctly: no visible artifacts or layout problems, no glitches or interruptions.
  • The user can:
sign in without issues,
use the search bar,
scroll the page and open different links (no visible artifacts or layout problems),
open a menu and select submenus,
navigate back and forward between pages without issues.
  • Share an article/information.

Raul: We haven't tested the top 100 most visited websites for a while, and we were thinking of proposing this for the next Quarter.

Honza: Do you have specific strategies to test those websites?

Raul: Yes, we had in mind features like sign-in, search bar, menus/submenus, articles etc.

Honza: I see, how do you find the top 100 websites?

Raul: Well, before we had Alexa top 500 sites, and now at the moment we have an internal list, and the Beaucoup list.

Honza: Okay, we will have to look into it. Is this OKR doable? How long did it take last time when doing such a task?

Raul: We might finish the whole OKR in one Quarter depending on the workload. In the past, we took this OKR from one Quarter and continued with it in the upcoming Quarter.

Honza: So when doing this OKR, if you find bugs, do you file them as usual on GitHub, or on Bugzilla?

Raul: We normally file the bugs on Github but we can do it also on Bugzilla, if needed.

Honza: I'll ask around for a list, and we can see what the plan is for the next Quarter.

Analysis

Honza: This is the most used data source that we have. There are discussions about how we could get more, keeping in mind that we have limited resources. The action to report sites is available just for non-release channels. Enabling this for release versions might cause a high number of issues. We are searching for a specific way to get more reports. That would be a different channel, not linked to. It would be telemetry; it would end up in a database. This is just an FYI.

Exploratory testing of Top 10 Social Media

Raul: Most websites we tried on the mobile version need the app to be installed.

Calin: The issue is with the websites where the core feature is messaging.

Honza: Is it impossible to test?

Calin: Main page loads, other features need the app. For some you can switch to the desktop version.

Honza: So desktop works, but mobile needs an app.

SV: Yes. That is a current workaround.

Honza: So how is that going so far? Anything interesting regarding the reports?

Raul: So far, just small UI defects.

Honza: The important part is that everything we find has a corresponding report; if not, a new report should be filed. Where testing is not possible, please make a note of that.

Raul: We can make notes of the currently known issues for each site.

Honza: If testing is not possible, leave the N/A page in the list, and introduce a new relevant page to test in the list.

PayPal account (Honza)

Honza: I was also asking around for PayPal; it is not easy to get an account. I ended up talking with Dave Hunt. There is a PayPal email list, but we have not gotten a response there yet. This is still pending. We can use this pending email list as a reply for the current issue.

Calin: The main issue for PayPal is that we do not have a linked credit card or a valid ID verification.

Raul: In the past we used a temporary credit card that was generated online but now it no longer works.

Move to Bugzilla Add-on(SV)

We have started to use this add-on, for issues that are non-compat, but could use an investigation on Bugzilla:

We had user retention in mind when using this: instead of closing the issues as Non-Compat or Incomplete (where account creation is not possible), if users (even anonymous ones) can see that we are trying to help them by moving the issues to other relevant projects, that might help us with user retention for 2023.

Raul: For user retention, we used the add-on where we couldn't test the issue ourselves, such as special accounts/banking, or for non-compat issues (another app involved, special set-up, pref changes).


Honza: That sounds great; we want users and reporters to see that we are attempting to solve issues, not just closing the door on valid non-compat or incomplete issues.

Duplicates & repositories moved from Github to Bugzilla

The Fenix repository has moved from GitHub to Bugzilla, and all the bugs are closed and archived now, but not every issue has a corresponding Core bug report on Bugzilla.

If we receive a reproducible bug report that is a duplicate of one from the Fenix repository, how should we proceed? Should we just mark them as duplicates of the closed Fenix reports, or file a new report on Bugzilla?


Honza: I would not close them as duplicates of the GitHub Fenix issues; I would open a Bugzilla issue. The repository is read-only on GitHub, so it would be good practice to link an archived report to an active report on Bugzilla.

Calin: Should we file a new bug if there isn't one already?

Honza: Yeah, we could use the See Also option, and link the closed Fenix report there along with the current reproducible GitHub issue.

Calin: Is there someone handling the current archived Fenix Git issues?

Honza: We do some in our current webcompat triage meeting.

Honza: How do you search for core bugs in Bugzilla?

Calin: We perform different search queries, either with the current title, or we use relevant keywords.

Raul: We also search by domain or by duplicates, based on history.

Honza: Why don't you send the issue to needsdiagnosis?

Raul: Well, we use the current knowledge base for duplicate or similar issues reported in. If we see that a similar report received a resolution stating that the issue is either a Firefox issue or a Bugzilla issue, we act based on history, if we have encountered similar bugs in the past. When we are not sure and nothing relevant has been found related to the reported bug, we will just move it to needsdiagnosis or ping someone to look into it and give a resolution for it, which we might use in the future for similar reports.

Honza: That sounds like a good approach.

Firefox "Unsupported" banner

Is there a way to identify unsupported sites and see how widespread this problem is?

Honza: Do you have any idea how we could identify how many websites do not support Firefox?

Raul: At the moment, just the OKR Trend will help.

Calin: I think we should first identify the common issue across websites when it comes to the browser being unsupported, or, for example, why websites from a specific geographical region are more prone to treating Firefox as an unsupported browser.

Honza: That makes sense. If we knew what the Core problem is, that might help. Any ideas how we should do that?

Calin: Maybe a user poll would help.

Raul: And maybe we can draw on previous experience, as we saw with version 110 breaking sites via the UA string; freezing the version number at 100 solved the issue of Firefox being unsupported on certain pages.
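The version-freezing trick Raul mentions can be sketched as a simple UA-string rewrite. This is only an illustrative sketch: the UA string, the regex, and the cap value are assumptions for the example, not Firefox's actual override implementation.

```python
import re

# Illustrative UA string; not taken from a real Firefox build.
UA = ("Mozilla/5.0 (X11; Linux x86_64; rv:110.0) "
      "Gecko/20100101 Firefox/110.0")

def cap_firefox_version(ua: str, cap: int = 100) -> str:
    # Rewrite both the rv: token and the Firefox/ token whenever
    # the major version exceeds the cap.
    def repl(match):
        prefix, major = match.group(1), int(match.group(2))
        return f"{prefix}{cap}.0" if major > cap else match.group(0)
    return re.sub(r"(rv:|Firefox/)(\d+)\.0", repl, ua)
```

A site sniffing for an "unsupported" major version would then see the capped value in both tokens, which is the effect the UA freeze relied on.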

February 7 2023

MS Teams

Honza: For this issue, we have mixed results. Now, the page fails in Nightly but works in Release. It seems that there are 2 versions of MS Teams, for regular users and for Enterprise users.

Raul: We might have one for corporate and one for regular users; the free one is the one we kept receiving reports for and which we investigated.

Honza: We are not sure if they blocked only Nightly or other versions as well such as Release and Beta.

Raul: Normally only the meetings were not supported, but now it seems the whole website is not supported on Linux. We could investigate it more, since we have had no new issues reporting this.

Honza: That will help, how do I try? Do I need a test account?

Raul: You can test with a regular account.

Honza: Great, could you have a look into it?

Raul: Will do.

  • Notes:

After the Team Meeting, Dennis concluded that is also affected. The reports came for as well.

Analysis

Analysis from Kate Hudson:


  • Does it make sense to optimize and get more reports?
  • We have limited resources to test all reports, but perhaps we could get more quality reports? What could be better?

Honza: We looked into some data about how users file bugs, and most of them give up after completing the first 3-4 steps (out of 7 total). Maybe we could make reporting easier by reducing the steps.

Raul: We've noticed in reports that when they get to the description part, users get frustrated and just fill the form with random words until they've met the minimum required to submit the report.

Honza: Is the screenshot optional?

Raul: Yes it is.

Honza: How many people skip the screenshot part? How important is it for you?

Raul: It is optional, but it is very important for us. It makes it easier for us to identify the issue.

Honza: If we somehow make reporting much easier for users, how will you handle the situation if we see an increase in reports?

Raul: We are not sure that will cause an increase in the reports we receive, but in the past we received far more and had a hard time triaging while also doing OKRs, as we saw when the "Report an issue" button was added and ready for usage.

Testing framework for shipped interventions

Honza: Tom was working on the infrastructure on how we test/manage interventions. How does that work?

Raul: At the moment we do Manual and Automatic testing and we do our testing on real devices. For now, these tests don't take that much time. The only issue is when we have to test payments or websites that require special accounts such as banking.

Honza: So you run those tests with interventions on/off; what do you do if they fail?

Raul: We keep them open and mark them as reproducible; if the issue is no longer reproducible, we create a task on Bugzilla to remove the intervention because it's no longer needed.

Honza: Are there any factors to take into consideration when running the Interventions Tests - Automated?

Raul: Besides having the environment set up: for example, if a test requires a Mac device but the tests are run on a Windows machine, it will show as failed.

Raul: Other factors to take into consideration are VPN Connections, UI update of the page, and a stable Internet connection.

Honza: Do you feel that these runs of the Interventions are still needed?

Raul: Yes. We can keep track of the issues, we can see if other issues are related to the Interventions, and running them once a month does not take extra resources.

Honza: What is the flow of creating the Run for each cycle?

Raul: We have a predefined list of Interventions that are in place, and following that list, we create the Run based on the environment and if the issues are CSS Injections or Overrides.

Honza: Is the list stable regarding the number of issues?

Raul: Yes, pretty much. The number of Interventions is roughly around 90 at each cycle. Some are new, some are old. It gets updated regularly.

Honza: Glad to hear that.

January 10 2023

OKRs for 2023 (SV)

Should we make new OKRs for Trends:, or can we transfer the old one, as we did not close it at the end of 2022? The same question applies to:

We are thinking of some OKRs to be added in Q1 for 2023. But given the roadmap for 2023, are there any possible OKRs that we should focus on first?

Raul: Should we open a new OKR for the Untriaged/Triaged issues from 2022?

Honza: The ones that are in progress from the last year should be closed. I think a new project should be created as well.

Raul: Dennis created the project for 2023.

Honza: What about new OKRs? I've seen that Top streaming websites OKRs is already done. Maybe we should move to a different area now.

Raul: We should probably test Top social media websites.

Honza: Have you tested them already in the past?

Raul: We had some testing before.

Honza: Nice, if you have documents to review please send them to me.

Raul: If we have any ideas for new OKRs we will let you know; and maybe you have suggestions based on the roadmap.

Honza: First, any other ideas for Q1 OKRS?

Raul: We had a cleanup on bugzilla before with bugs related to webcompat, some that were really old and kept piling up in the backlog.

Honza: That sounds like a candidate for a new OKR for Q1.

Raul: We could do that again.

Honza: I am wondering, when you are performing triage, do you find any other issues besides the reported issue? If you find any issues that reproduce, how do you proceed?

Raul: We ping Ksenia or Tom to look into it. There are issues that are Firefox issues but not necessarily WebCompat issues, such as features of the browser.

Honza: What is the number of bugs in our product (WebCompat)?

Raul: We don't know exactly... We can perform a search query in the Webcompat Mobile and Desktop components, based on the NEW or UNASSIGNED status.
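The kind of query Raul describes could be sketched against the Bugzilla REST search endpoint. Note the hedges: the product/component names and the status values below are taken from the conversation and assumed; the real values should be checked on bugzilla.mozilla.org before use.

```python
from urllib.parse import urlencode

# Assumed product/component names and statuses -- verify against the
# actual Web Compatibility product on bugzilla.mozilla.org.
params = [
    ("product", "Web Compatibility"),
    ("component", "Desktop"),
    ("component", "Mobile"),
    ("bug_status", "NEW"),
    ("bug_status", "UNASSIGNED"),
]
# Bugzilla's REST search endpoint accepts repeated query parameters.
url = "https://bugzilla.mozilla.org/rest/bug?" + urlencode(params)
```

Fetching that URL would return the matching bugs as JSON, giving a rough count of open bugs in the product.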

Honza: Cleanup on Bugzilla could be a great OKR as well. We should talk more about that at the upcoming meeting with the team.

Raul: For the top sites Social Media OKR, should we use the Beaucoup list?

Honza: We have thousands of sites there; I don't think the top 10/20 social media category is there. I will send you the sheet: Also, Dennis might assist with another list.

Issues that are reproducible on Release version, but not on Nightly version (SV)

Usually we close such issues as WontFix, but we were wondering if we could discuss it with other team members before adding a resolution, depending on the importance of the page (users, popularity, etc.) or the number of reports received for a website. As seen here:, an uplift to Beta/Release helps in these kinds of situations, especially if we are looking at user retention as well.

Raul: Based on the issues we've received, we've noticed a trend with Azure: a bug that reproduced on Firefox Release but not on Nightly.

Raul: Using the methodology of the trends, we managed to identify the issue regarding Azure. This also shows how important it is to identify the trends that are happening. And although the issue is a classic WontFix, Ksenia pitched in and we were able to solve it without waiting for the next 110 release, thus retaining users.

Honza: I saw the thread, good job there. This is what the Trends OKR should be about. Also, yes, based on that issue, it is a good idea to prioritize websites that could use an uplift/patch.

Honza: While we are on the subject: since Trends seems to help us a lot, we are thinking of integrating Trends into the Knowledge Base, a real database which can be easily integrated with BigQuery. HTTP Archive and telemetry data are stored there, so they can be merged together. The knowledge base should be built in the same system: read the data and learn from it. For example, for webcompat issues where a CSS property fails, we can query the archive and see how much that property is used across the web, which helps identify the impact of the issue. It would help produce a state-of-webcompat report. Having Trends integrated there would be something. My point is that the OKR is on GitHub, and we could somehow store it in BigQuery. Maybe we can do something similar for trends: fetching data from web-bugs, fetching the trends, and processing them. This is something to think about for a Q2 OKR.
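The HTTP Archive query Honza describes, estimating how widely a failing CSS property is used across the web, could look roughly like the sketch below. The dataset/table name and schema here are hypothetical; the real HTTP Archive tables on BigQuery should be looked up in their documentation.

```python
# Illustrative CSS property; any property from a webcompat report works.
CSS_PROPERTY = "backdrop-filter"

# Build the SQL text only; actually running it needs a BigQuery client
# and the real HTTP Archive table name (the one below is made up).
query = f"""
SELECT COUNT(DISTINCT page) AS pages_using_property
FROM `httparchive.sample.parsed_css`  -- hypothetical table
WHERE css LIKE '%{CSS_PROPERTY}%'
"""
```

The resulting count gives a crude usage signal for the property, which is the impact estimate the paragraph above is after.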

Verified Paypal account (SV)

We were unable to add PayPal as a payment option when ordering on Uber Eats (non-compat), and we believe it's because we do not have a verified account.

Context: Other:

Honza: When you say verified account, who should verify the account, and how?

Calin: The account needs to be verified by PayPal, with a valid ID.

Honza: I will ask around.

December 13 2022

Needsdiagnosis re-check issues Q4 - Done (SV)

We have finished the re-check of Needsdiagnosis issues. Below are the results:

Honza: Awesome, thanks for that.

Raul: Exploratory testing is on track, and will be finished in the coming week.

Honza: Great news.

DevTools Workweek (Honza)

Honza: Most of next year will be about these 3 subjects regarding webcompat (Webcompat Knowledge Base, Webcompat Issue Impact, Webcompat User Retention). We are aiming for a dashboard that shows us the top 20 webcompat issues, based on scoring. That is addressed in Webcompat Issue Impact: collecting data. Our current scoring logic is based on assumptions. All the data collected is stored in BigQuery. The canonical list of webcompat issues is stored in Bugzilla; for every webcompat issue, there will be a corresponding core Bugzilla report. Webcompat User Retention is about finding the relation between fixing webcompat issues and user retention. In other words, could we say that fixing webcompat issues helps us retain users? Can we prove it? It is about producing a document on how we can do it, based on behavioral differences.

So our focus for next year is to build the webcompat knowledge base, collect data, and work on user retention.

Calin: Whenever a bug is reported on Webcompat, there should be a corresponding Bugzilla bug?

Honza: If there already is a Bugzilla core bug for that, there is no need. If not, there should be a Bugzilla bug for that (core issue). We will talk about this flow with the whole team as well, in the near future.

Honza: There will be some demos as well.

Honza: I was also thinking about how to improve our triage, based on the data received. Could we simplify the way we produce this data? The work you do is visible in the data, but are people looking at it? Can we make it simpler for people outside the project to understand this data? So there are 2 things we should be working on: 1. making the work more visible, 2. learning from the data. This could be a goal for next year.

Honza: There is a company that works with Firefox (Bocoup) that concentrates on platform gaps, like features available on Chrome but not on Firefox. We had to come up with the top 20 sites, based on the top 100 sites that they are working on. So that fits together with the current OKRs we have done, as we used some of the data from there to justify our picks.

Raul: Should we also think about future OKRs for 2023?

Honza: We will discuss more on this at our meeting happening in January, next year. I will be on PTO when our next meeting is due this month.

November 15 2022

Issues for pages where content is streamed (movies, tv shows) where we are not sure if the streaming is legal or not (pirating) (SV)

How should we check such pages, to see if the streaming is legal? And if the streaming is not legal, but the issue is reproducible, how should we address them?


Raul: How do we check if those pages are legal or not? Some are legal in certain parts of the world and illegal in others.

Honza: Good question. Are those sites popular?

Raul: Some of them are popular.

Honza: We mostly care about web compatibility between browsers; we tend not to focus on the content a website provides, but we do focus on how the browser behaves. Some content might not be available in our browser, but users will still access that website in other browsers, regardless of whether it is illegal or not. So we want to retain them on Firefox.

Calin: There are some sites that present pop-ups, suspicious content, redirects to spam sites, etc., close to malware content. Should we look into them?

Honza: This sounds like something that `NSFW` pages might present. What are you doing with adult sites?

Calin: We test them, and act accordingly. But some sites are trying to trick the users into clicking on some suspicious links/pop-ups.

Honza: That case is simple. We cannot learn from that; the whole goal of our team is to identify webcompat issues and recommend that the platform team fix them. If there is nothing we can learn from a site, we can ignore it.

Calin: Some users think it is a browser problem, which is not true.

Honza: It is very time-consuming to figure out if it is a webcompat issue when diagnosing. I think this is the same for testing.

Honza: Regarding whether a stream is illegal or not and how we can act on it, I'll get back to you once I have a clear answer.

NeedsDiagnosis Re-check issues (SV)

We have finished the list with reports submitted by users:

This week we have started on the rest of the issues.

Honza: Is there a pattern among the issues that get fixed, or where nobody responds?

Calin: Those issues are very mixed, so a pattern can not be clearly pinpointed.

Honza: Are people responsive?

Calin: Some of them are.

Honza: How long do you wait for a response?

Calin: About 12-14 days. After this, we will start looking into anonymous user reports.

Honza: Since there you have nobody to ask for info, do you just close them?

Calin: We will treat them depending on the outcome (reproducible, worksforme, non-compat, etc).

Disney account (SV)

Disney account is up and running, we have checked some older reported issues, but the page loads as expected, and streaming presents no issues.

Honza: I think I found a way to get a paid account for pages behind a paywall, so if you need something similar in the future, let me know.

Work week in Berlin (Honza)

Honza: Regarding this, you can join the meeting via Zoom if that is something you are interested in.

SV: If the schedule is aligned, sure.

Honza: As you know, not every topic is of importance to you. Here is a link to the document:

Honza: I think Tuesday would be the most interesting for you, to see how your work fits into the big puzzle.

Raul: If the policy allows us, sure. If so, we could join and do this instead of the meeting.

Honza: I'll double-check as well.

Honza: Could you also send me something to highlight 2022, OKRs, numbers, charts, etc, whatever you are proud of? If you can send it to me by the end of the week.

Raul: Sure thing.


29th of November meeting will be canceled

November 1 2022

WebRender (SV)

We know that the WebRender functionality is no longer a thing, but we are curious if there are any updates on that topic, because we still receive bug reports on webcompat with the label "type-webrenderer-enabled".

E.g :,

Honza: No news regarding that. That label is appended by the bot; let me talk with Ksenia about the bot integration with bugs in the GitHub repo. They seem obsolete for now, but maybe there is something I am missing.

Honza: Is there anything relevant in the reports about webrenderer?

Calin: Usually WebRender was enabled in about:config, but now the label seems to be applied even for issues where about:config cannot be accessed (Firefox Release for Android).

Honza: Just ignore it, for now, I will talk to Ksenia to see if there is any reason to keep this label in the future.

Issues where a paid account is needed (SV)

We have seen an increase in issues for Disney Plus, mostly about the video not playing, but we are unable to test the issue because a paid account is needed:

Link for current test accounts:

Honza: Who gave us access to the paid accounts in the past? I can see both paid and unpaid accounts in the list.

Raul: The "Media Top Sites" tab contains the paid accounts needed at the moment. Recently we have not made such a query for accounts, but from my knowledge, Oana would highlight the need for a paid account to Karl, and Karl would pass that on to the relevant team.

Needsdiagnosis issues (SV)

We have made a document with the provided open issues in the Needsdiagnosis milestone that need to be rechecked:

As requested, we have also made the 2nd list with issues that are reported by users:

Raul: The first list is with the general issues reported by anonymous users and regular users and the second one is with the issues reported by users.

Honza: We should go with the list of issues reported by users first.

Raul: Sure. We can make an OKR task. After the task is made, when should we start?

Honza: Sure, send me a link. We can start the task right away.

Honza: What's the strategy when testing those issues?

Raul: For reports received from `anonymous users`, if we cannot reproduce the issue, we will close them as FIXED. If more info is needed, we will ping the assignee of the report. For reports received from `users`, if we cannot reproduce the issue, we will confirm this with the reporter. If the user confirms, we can close the issue as FIXED. If the user says the issue is not fixed from their point of view, we will ask the assignee and/or the reporter for info. In both cases, if the issue is still reproducible, we will leave a comment to highlight this.

Raul: If no answers are received from the users after 12-14 days, we will close the issue accordingly.

Honza: Sounds like a plan then.
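The re-check strategy discussed above is essentially a small decision tree, which can be sketched as code. The return labels and the 12-day cutoff mirror what was said in the meeting; the function name and argument names are illustrative, not part of any real tool.

```python
def triage_action(reporter_type, reproducible,
                  user_confirms_fixed=None, days_without_answer=0):
    """Return the next step for a re-checked report (labels illustrative)."""
    if reproducible:
        return "comment that the issue still reproduces"
    if days_without_answer >= 12:
        return "close accordingly (no response)"
    if reporter_type == "anonymous":
        return "close as FIXED (ping the assignee if more info is needed)"
    # Reports from signed-in users: confirm the fix with the reporter first.
    if user_confirms_fixed is None:
        return "ask the reporter to confirm the fix"
    if user_confirms_fixed:
        return "close as FIXED"
    return "ask the assignee and/or reporter for more info"
```

For example, an anonymous report that no longer reproduces is closed as FIXED, while a signed-in user's report first goes back to the reporter for confirmation.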

October 17 2022

repository edit rights (SV)

Oana: It seems that both Raul and Calin have no rights to close/reopen or add labels to the issues on the repository.

Oana: Also, it seems that you (Honza) are missing from the OKRs as an assignee:

Honza: It seems that I do not have privileges for that repository, I'll resolve this later.

Also, for Calin, could you invite him to the Webcompat internal Slack channel?

Slack ID: ctanase

Honza: All done, you should be there.

We've gathered a list of QA experimental labels for trends (SV)

 Link to the list:

Honza: It looks good. But the prefix for the label seems to be used for something else.

Raul: A lot of prefixes are already used, so maybe we can come up with a more relevant prefix.

Honza: We'll use this for now, and later we can edit it. I was looking at your notes, so all of these seem like good candidates. Do you want to add them immediately?

Oana: We can use them for now, and we can add issues to the Trends OKR based on these labels. Also, we should keep the same color for all trends-related labels.

Honza: That looks ok. We can start using them. The list should be up to date.

Oana: Yes, we update the list accordingly.

Oana: the experimental labels were added:

[Beta] State of WebCompat Report (SV)

We have made a draft for: regarding the Trends OKR on what we've learned and what we can use from Trends.

Honza: Can we highlight the most important issues from this list?

Raul: Based on the data gathered from Trends and from the number of issues related to a certain site, Youtube seems to be on top of the list.

Honza: It seems that the Print Preview feature can be added to the list, being reproducible on other pages.

Honza: Our next plan is to proactively test the site. Do you think we can test another category of sites as we have for Top Streaming Sites OKRs?

Oana: We can test sites that allow online meetings (Zoom, Google Meet, Microsoft Teams, etc).

Honza: Can we also test relating to Print Preview?

Oana: If the page presents this feature, sure. Or if it has content that is mainly used with Print Preview (PDF files, for example).

Honza: Are there any Trends in the mobile area?

Oana: Mostly for the unsupported features, which are present on the desktop.

Honza: Let me put together some text based on this document. The report is getting more reach now, so this is a way for you to have more visibility. We can also combine this OKR with the current OKR for Exploratory Testing on Top Streaming Sites; maybe there is something we can learn from both OKRs, or we can use them to our advantage to highlight something important. We can also use them as reference for a future OKR.

Honza: I'll summarize the table, and then you can review it and add your ideas as well.

* YouTube - an important site, and many reports were related to it (recently an issue with short videos not being able to play). This is a major site, and reported issues tend to have a rather big impact. Mostly desktop.
* PDF Printing - issues found in the produced PDF; Chrome works fine. Mostly desktop. Honza: double-check with Tom why this isn't a webcompat issue.
* OKR - test top movies and streaming sites
* Next OKR - Testing online meetings sites


Oana: 24th-31st October
Oana: Maternity leave starting from November

October 4 2022

[FYI] OKR tasks added to the dashboard (SV)

Honza: I see 2 planned and 3 in progress. As for these planned OKRs, will you be working on that list?

Oana: Calin gathered the list, and we will discuss the topic.

Honza: The in-progress ones make sense.

Oana: They are OKRs in progress, e.g. "End every week with 0 untriaged/unmoderated issues".

[FYI] Site Intervention/UA Override re-check (SV)

This task is being performed today so it can be part of this month's new OKR task.

Worldwide Streaming & Video Stream sites (SV)

A list of 10 Streaming sites and 10 Video Stream sites was gathered to perform exploratory testing on them in Q4:

Which approach would be the best to tackle?

1. Test 5 Streaming sites and 5 Video Stream sites in 2022 Q4
2. Test 10 Streaming sites in 2022 Q4 and 10 Video Stream sites in 2023 Q1 or vice versa

Calin: We have broken down this list into 2 categories, mainly movie streaming and gaming streaming. Can we merge this list or should we make 2 separate OKRs?

Oana: What would be the best approach? Having mixed content, or having 2 OKRs?

Honza: Do we have a link?

Calin: Only SV accounts have access, but I will update the permission.

Calin: Link added.

Honza: Interesting, you have gathered all the issues related to certain pages?

Calin: We have used the Incompat add-on.

Honza: How does that work?

Calin: The add-on shows all the issues listed under a certain domain. We can share it now via screen share and walk through how it works and how it is useful to our project.

Honza: Right, so it counts every issue (closed and opened). That looks very helpful.

Honza: Coming back to the OKR proposal, is there any plan to reflect the number of issues, the signal where most issues were?

Oana: We wanted to have worldwide coverage.

Honza: Seeing the number of issues per domain, should we concentrate our efforts first on the sites with the most reports? Netflix seems to have quite a lot. Do you also look at the popularity of the domain?

Calin: We have taken that into consideration as well; via different sources we picked domains that have a significant number of subscribers.

Honza: How will the testing work?

Calin: We have a checklist.

Oana: Mostly exploratory testing will be done.

Honza: Do we have the checklist somewhere, so I can see it?


Honza: I will check the documents offline as well.

Oana: Sites where we do not have an account will be a problem.

Honza: If you do not have an account, that means that our options will be limited.

Oana: We have a list of Mozilla-paid accounts from SV, so we will be using them.

Raul: Where we do not have an account, we can proceed to the next available domain where testing can be done.

Oana: We can also check the list to where an account is needed or not.

Honza: You mentioned Mozilla accounts. Are those paid by Mozilla?

Oana: Yes.

Honza: What happens if, in the future, you need a paid account for a certain site? What is the procedure there?

Oana: We ask around for an account, and if there isn't one available, we can make a request based on the priority of the domain/issue.

Honza: If there are sites for which we do not have an account, it would be nice to gather a list of sites to which we do not have access/an account.

Oana: Sure, we can gather data for a future document.

Honza: How do you collect data that we can use after these OKRs?

Oana: We add them to the OKR task with all the relevant data.

Raul: We also make Metrics and collect data via documents.

Honza: Any interesting data that we can collect from this, we can add to Trends. Testing these sites may offer data that provides some insights, e.g. comparing the results after testing with the results gathered for that domain in webcompat: how Firefox is doing in that area, and whether it supports these sites properly. Using your checklist, you can score a URL based on the findings.
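The checklist-based scoring could look roughly like this. A minimal sketch: the checklist items and their weights below are hypothetical examples, not the team's actual checklist.

```python
# Hypothetical checklist items with weights; core flows such as login
# and playback count more toward the score than cosmetic checks.
CHECKLIST_WEIGHTS = {
    "login": 3,
    "playback": 3,
    "layout": 1,
    "print_preview": 1,
}

def score_url(results):
    """Score a URL from exploratory-testing results.

    `results` maps a checklist item to True (passed) or False (failed).
    Returns the weighted fraction of passed checks, between 0.0 and 1.0.
    """
    total = sum(CHECKLIST_WEIGHTS[item] for item in results)
    passed = sum(CHECKLIST_WEIGHTS[item] for item, ok in results.items() if ok)
    return passed / total if total else 1.0
```

Such a score would make results comparable across sites, e.g. to rank the top streaming domains by how well Firefox supports them.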

Hawaii AllHands (Honza)

Are there any updates/meeting notes? Any tasks for us?

Honza: Yes. Meeting notes from most of the meetings the team (DevTools + WebCompat) had at AllHands

Honza: Scrolling down in the project, WebCompat tooling for diagnosing was a topic of interest for our project.

  • Pretty Printing was one of the biggest topics.
  • Platform Data Glue shows an insight into how the future issues might look (what’s happening in other browsers and not in our browser)

Honza: If you can go over these notes, and give feedback, that would be very valuable.

Honza: The State of `webcompat` report shows the goal of `webcompat` (how many issues, what is broken)

Honza: We are actively building a report which has a goal to teach others what is important from `webcompat`, so for example, the product team can gather data to improve or fix the browser based on our reports.

Honza: Everyone understands that `webcompat` is important, but not a lot of people know how helpful a `webcompat` report can be.

Honza: In the future, more context is planned to be added to our `webcompat` reports.

Honza: Feedback on this note would help us a lot.

Honza: For example, after our top 10 Streaming OKR, we can gather data that other teams might find useful, or they can see what can be improved. As you are doing the report, see it as teaching people about our product via `webcompat`, based on the irregularities found.

Honza: Patricia asked about a Trend that shows that something bad is happening, e.g. performance issues.

Honza: There is a list of performance bugs, with the label `bad performance` or `performance` label.

Honza: Ksenia uses the unsupported label.

Honza: can we do something similar while triaging?

Oana: we also add the labels `unsupported`, `type-no-css`, or `print`, but for performance issues we are not always sure whether it really is a performance issue or something else, so we only record a performance profile and add it in a comment on the issue

Oana: labels for performance issues are added by our dev team after the issue has been diagnosed by the devs

Honza: Can we think about a new list of labels for Trends? And how we can make our diagnosis process easier?

Oana, Raul: Sure.

September 20 2022

Onboarding Calin (SV)

  • add sv-calin to Slack channel #webcompat-internal
  • add Calin to webcompat internal - for Google Calendar

Honza: I will do it in two weeks after the ALL HANDS and his PTO, so I can properly introduce him to the team as well, and so he can introduce himself as well.

iOS issues approach (SV)

We have received an issue coming from the Firefox iOS repository, but issues that are reproducible in Firefox iOS are moved by us to the iOS repository, not into our `needsdiagnosis` milestone:

Should we continue to move issues reproducible in Firefox iOS to the Firefox iOS repository? If yes, can we also let the Firefox iOS team know about this, so they are aware of our flow, instead of them moving valid issues to our repository?

Oana: iOS was dropped from our team. We just move them to their repository. We have a Firefox iOS channel for webcompat on Slack (#webcompat-firefoxios ), but there is no activity there.

Link to iOS repository:

Honza: Let's discuss this with the entire team at our meeting.

Tom: When the iOS team determines that an issue is with a website, not Firefox/iOS, they will move it back to us. And in those cases it will probably be non-compat, since the issue will also happen in Safari on iOS.

Tom: So we can just leave those open as needscontact, but not specific to firefox (just iOS in general). That way if anyone ever wants to reach out to the site, they can.

Focus and Fenix repositories transition to Bugzilla (SV)

We have seen this on the Focus repository:

Do we have any info regarding a date when this will happen?

How would this affect us and our workflow? Also, will specific bugs continue to be reported in the old/new repositories, or on Bugzilla?

Honza: I don't have much info on that either, but I will ask around. I think all reports related to Fenix and Focus will be reported in Bugzilla.

Oana: Will our repository be moved to Bugzilla as well?

Honza: So far, there are no plans regarding this.

Proposals for Q4 (SV)

We have gathered some proposals for Q4

Honza: Can you please explain more about the intervention OKR?

Oana: Before the release, we check whether the Interventions or UA Overrides are still needed for the issue.

Honza: And what about the OKR regarding the top 10-20 sites?

Oana: This will be based on the reports received and also trends.

Honza: We can also use the Trends OKR to help us with this. Looking at the Trends, we can see what we can focus on. Collecting data from the Trends OKR and checking if the Trends are really Trends or not.

Oana: Based on the current Trends, we could consider top 10-20 Streaming sites.

Honza: What about the Contact ready? What is the outcome of this? Does it work?

Oana: Sometimes we get a response that the issue is not reproducible. Sometimes the website contact is not reachable, and we can close them as incomplete.

Honza: This sounds like we can do this more often. Who is setting the milestone for this?

Oana: The person who is assigned to the issue.

Honza: Did you do this using other labels/milestones?

Oana: We did it for most of the milestones (sitewait/needsdiagnosis/needscontact). We usually do this at each quarter. We iterate around the milestones.

Honza: How many issues are in these milestones?

Oana: Around 63, but for `needscontact` around 500.

Honza: Regarding rechecking the ETP issue, we can ask Tom at the meeting. Will you be checking all the ETP issues on Bugzilla?

Oana: There is a component in anti-tracking on Bugzilla. Some of them might be complicated or complex.

  • Update from TOM on ETP recheck OKR:

Tom: the anti-tracking team considers standard-mode issues important, but I believe strict-mode-only ones are generally assigned a severity quickly and do not require a round of re-checking anytime soon after that. So we can skip non-standard issues if they have a severity of s2 or lower, and I don't think there are many bugs left that are s1 or standard-mode-only. We have triage and diagnosis meetings every week, so I don't think there is much value in going over them again.
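Tom's skip rule can be expressed as a small filter. A minimal sketch, assuming a hypothetical issue shape (`severity`, `standard_mode` fields); real Bugzilla data would need to be mapped into this shape first.

```python
def needs_recheck(issue):
    """Decide whether an ETP issue should be re-checked, per the rule above.

    `issue` is a hypothetical dict, e.g.
    {"severity": "s2", "standard_mode": False}.
    """
    if issue["standard_mode"]:
        # Standard-mode issues are considered important by the
        # anti-tracking team, so always re-check them.
        return True
    # Strict-mode-only issues: skip unless severity is s1
    # (s1 is the highest severity, above s2 and s3).
    return issue["severity"] == "s1"
```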

Honza: Sounds good. Thanks for the list.

Oana: All the proposals depend on an estimate of the workload, so some might be dropped.

Trends (Honza)

 * YouTube videos don’t work ([bug](
 * Entrata platform not supported in Firefox ([gh](
 * Yahoo mail ([gh](
 * Google Calendar ([gh](

Honza: What should the workflow be? Spotting trends or potential risks and checking the Bugzilla bugs already reported; reporting a new one in Bugzilla is also part of the workflow.

Oana: For Firefox being unsupported, we can contact the site owner as well.

Honza: Should we go back and look in the comments if the issues are reproducible?

Oana: Yes, we already re-check the issues that have been signaled using the Trends if they are reproducible or fixed. We update them on the fly, with the corresponding Bugzilla bugs as well. We also follow up with users as well.

Honza: Do reporters show any interest? E.g asking questions, being willing to volunteer?

Oana: Not in our case; very little interest is shown in volunteering to do diagnosis. They are only interested in getting the issue fixed.

Oana: Dennis collaborates with other people (outreach) and part of the work they do is to learn how to diagnose issues.

Honza: I will talk to Dennis about it.

September 06 2022

Cases where the "lock icon" shows a message like "Parts of the page are not secure" (SV)

In the past, we have treated issues where the lock icon in the URL bar stated that parts of the page are not secure or not working as valid webcompat issues. How should we treat similar issues from now on?


honza: treat them as non-compat, but better ask dennis about it (for mixed content issues)

dennis: That happens when a site served via encrypted https:// is loading an asset, like an image, over unencrypted http://. This is a small security risk, which is why browsers warn you. However, this is just a warning; Firefox will still load and display the "insecure" image either way. So from our point of view, nothing is broken in most cases. It only becomes a WebCompat bug if something is actually broken on that site, for example if images are missing.

In theory, Chrome will show the same warning. However, this became a lot more confusing recently, as Chrome is now shipping an automagic upgrade mechanism. If Chrome encounters mixed content, Chrome will try to load the image via https://, and only warn if that fails. So you generally don't see this warning in Chrome anymore.
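Dennis's rule can be illustrated with a small checker. A minimal sketch, assuming a pre-collected list of asset URLs; real mixed-content detection happens per resource load inside the browser, and "passive" content such as images only warns while scripts are blocked outright.

```python
from urllib.parse import urlparse

def find_mixed_content(page_url, asset_urls):
    """Flag assets loaded over plain http:// from an https:// page.

    Returns the list of insecure asset URLs; empty if the page itself
    is not served over https (mixed content only applies then).
    """
    if urlparse(page_url).scheme != "https":
        return []
    return [u for u in asset_urls if urlparse(u).scheme == "http"]
```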

Accounts/access for Calin (SV)

As far as I know, an LDAP account was already requested for Calin. For webcompat, he already has an account, but he needs rights to change the status of issues, add labels, etc.

Oana: is it possible for Calin to get access?

Honza: working on it

Oana: but for Github, he needs to be added to the repository.

Honza: I will do it.




Browser feature issues that are reproducible for one page only, working as expected on other pages(SV)

Are issues like this classed as valid compatibility issues, or valid browser issues? For example, multiple saved logins (3) are shown for, instead of just the login info for Walmart. Facebook, for example, shows just the login info for Facebook, without showing saved logins for other pages.


honza: indeed it might be a browser feature, but we can't fully know for sure, so move them to Needsdiagnosis and they will be checked at the Fast Triage meeting

QA Trends (Honza)

Most encountered issues at triage:

- YouTube videos not playing - reproduced by Calin and Raul

- Twitch videos - videos are not displayed in dark mode - fix is available for Firefox Nightly

- Print preview issues - broken layout

honza: continue adding qa trends

August 23 2022

DevTool - Remote Debugging tablet (SV)

Unable to see Inspect panel on a tablet device (empty screen shown), even if connection to the device was established. (USB debugging is enabled both in Firefox and in the device system)

Sometimes an error is shown

Honza: Let's keep this topic for the next meeting.

Oana: [Update] it works now


  • If this happens again:
   * There is #devtools and #devtools-team slack channel
   * The best person to talk to: @jdescottes

ETP issues and caching (SV)

Some sites are broken due to ETP (usually Strict), but after disabling and enabling it again, the issue no longer reproduces.

E.g. (

Workaround: after clearing the cache, the issue reproduces again with ETP - Strict.

This probably needs some investigation on the ETP side; when changing states, the cache should be cleared.

Should we continue reporting the issues in Bugzilla as we did before and add a note regarding the cache problem?

honza: talk to Tom, add the topic to the Webcompat meeting agenda, and file a Bugzilla bug if there is none (with details and examples)

July 26 2022

QA Documentation added to Mana page (SV)

Honza: Cool, thanks for that.

OKR Triage Trends (SV)

We've added some topics to the OKR

Raul: Observing duplicate issues that were not reproducible at the time, as per:, where we observed a pattern, helped us reproduce an issue that affected numerous users, following the Trend guideline.

What is a Trend? (SV)

We summarized some ideas:

- something that occurs on most used/popular sites
- duplicate issues 
- unsupported features on Firefox 
- unable to sign in with FB/Google/Twitter with ETP - Strict enabled
- embedded media content not displayed with ETP - Strict enabled

Honza: yes, these points summarize the idea of the Trend, and what to look for when observing a Trend. I would also add a page not working in certain parts of the world.

[FYI] Incompat add-on (SV)

This add-on is a companion tool. It helps track compatibility bugs for sites and shows you the number of already reported bugs on that domain.

Honza: Does this add-on show Duplicate issues?

Raul: Yes, it creates a list of all the issues signaled regarding the URL submitted. All the reports are shown via a GitHub search query made by the add-on.

Honza: Does it help you in your daily triage?

Raul: Very much. It helps identify duplicates, or other issues related to the reported one, faster and more easily.
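Based on the description above, the add-on's lookup can be approximated by building a GitHub issue-search query for the reported domain. A sketch under assumptions: the repository name and the title-based query are guesses at how such a search could work, not the add-on's actual implementation.

```python
from urllib.parse import quote, urlparse

def incompat_search_url(page_url, repo="webcompat/web-bugs"):
    """Build a GitHub issue-search URL listing reports for a domain.

    Searches issue titles for the domain of `page_url` in the given
    repository (both assumed here for illustration).
    """
    domain = urlparse(page_url).netloc
    query = f"repo:{repo} is:issue {domain} in:title"
    return "https://github.com/search?q=" + quote(query)
```

Opening the returned URL in a browser would show open and closed reports alike, matching Honza's observation that the add-on counts every issue.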

DevTool - Remote Debugging tablet (SV)

Unable to see Inspect panel on a tablet device (empty screen shown), even if the connection to the device was established. (USB debugging is enabled both in Firefox and in the device system)

Sometimes an error is shown

Honza: Let's keep this topic for the next meeting.

Priority vs severity labels (Honza)

Honza: I've seen that you use labels to set severity and priority. How does that work?

Raul: Once an issue is reproducible, we move it to the Needsdiagnosis milestone, where we set the default label for priority to normal, and the severity label accordingly.

Honza: How do you set the priority level via the label?

Raul: Based on the impact the signaled issue has, we have 3 levels for the severity label: minor, important, and critical.

- minor - cosmetic issues on the page
- important - a non-core broken piece of functionality
- critical - the page or core functionality is unusable, or you would probably open another browser to use it.
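For prioritizing a triage queue, the three severity labels map naturally onto ranks. A minimal sketch using the label names above; the issue dict shape is hypothetical.

```python
# Rank the three severity labels described above, lower rank = more severe,
# so critical issues surface first in a triage queue.
SEVERITY_RANK = {"critical": 0, "important": 1, "minor": 2}

def by_severity(issues):
    """Sort hypothetical issue dicts ({"title": ..., "severity": ...})
    from most to least severe."""
    return sorted(issues, key=lambda i: SEVERITY_RANK[i["severity"]])
```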

Honza: Who sets the labels?

Raul: The priority is set during the diagnosis process, while the severity is set in the triage process.

DevTools for QA guidelines document (SV):

Honza: Thanks for that, that is very helpful.

July 12 2022

Q3 Planning done (SV)

We've created the tasks and added them to the 2022H2 dashboard.

Oana: If you consider adding other tasks, please let us know.

Honza: How does Site Interventions Release work? Is it like testing every Release intervention?

Oana: Yes, that is correct.

Issue reported with features unsupported on Firefox (SV)

We get issues where some features are not supported on Firefox but are supported on other browsers.

There are many cases where we can't create a test account or the STR are not clearly provided, so we are unable to verify them.

Based on the info/description/screenshot provided by the user, should we move them to Needsdiagnosis or maybe directly to Needscontact, so the team would contact the site owner to understand why those features are not supported in Firefox?


Honza: If it's the second case, we can move it to NeedsContact, and if we do not have an account, we should move it to NeedsDiagnosis, because we can ask around if we can get an account to test.

Interventions/UA Overrides QA verification guidelines manual+automation (SV)


Honza: Do we have a list of the already made documents? I know this is not the first time you have made such a document, which is super-helpful.

Oana: We can make one, sure.

Honza: We also have the Mana Page:, can we add it here?

Oana: Sure, we can add it here.

Honza: Can you quickly summarize this document for me?

Oana: Of course. There is a bit of info about Interventions/UA Overrides: how to see them on different devices, how to enable or disable them, and a section about manual and automated testing.

Oana: If one of the webcompat issues is reproducible, the Intervention/UA Override is still needed. If not, we create a Bugzilla task to remove the Intervention/UA Override.

Honza: When you test if an Intervention/UA Override is needed, do you go through a list?

Oana: Yes, Dennis created a dashboard where we have the necessary data.

Honza: I see you build the gecko driver as well. Is that needed?

Oana: Yes, that is needed for the automation set-up in order to run the tests in Firefox.

Honza: Sounds cool, please share this document in the next Webcompat meeting.

Honza: What happens if there are too many failed automation tests?

Oana: We run them one by one.

Oana: For the moment, the automation tests are just for the desktop issues.

Trends/patterns (Honza)

One of our goals is to spot trends in WebCompat landscape.

Honza: We have a list of tasks to be completed. This should help us recommend to the team what the top webcompat issues are, which are important, and which are not.

There are different perspectives regarding this (how many users are using the page, etc).

We have recommendations to the platform team, so that they can focus on them.

The second part is trends, and about spotting them. So, what is a trend?

A trend might be that a page does not work in Firefox, in some parts of the world (unknown issues). Or other browsers are implementing APIs, except for Firefox (doing something without us- future issues).

Oana: The second example of the trend can be observed in Interventions/Overrides

Honza: Exactly. The third case of a trend is known webcompat issues.

So as you are doing triage, you might be able to spot trends. Maybe you can see some patterns. Is that something you can help us with?

Raul: Should we focus on reproducible issues reported by users, or Worksforme issues?

Oana: Usually we move issues that we can not reproduce to Worksforme.

Honza: Regardless of their status, as long as the issues present a pattern/trend, we can write them down.

Is it possible to summarize this kind of issue?

Oana: We can make a document to summarize this.

Honza: Great, let's try this.

Raul: Here we have an issue that might be classed as a trend. Should we mention future issues inside related issues, like we did here?

Honza: Sure.

Oana: Can we make an OKR task out of this, eg. Triage Trends?

Honza: Sure.

Oana: I've created the OKR task and added a few items:

DevTools (Honza)

How much DevTools is useful/needed for triage?

Oana: We use the Inspector to pinpoint the affected area of the code, or we play around with the CSS where possible to see if a fix can be applied, and we also use it for RDM.

Honza: Next time I will put this item on the agenda again, so we can talk more about this and about what we can do to improve DevTools for everybody.

Oana: Also, I've seen something about the Compatibility panel. We hardly use this. Also the screenshot feature.

Honza: We will talk about this as well.

Oana: Reporting issues from the DevTools, how will this work? Will it be implemented?

Honza: This is a suggestion, we have not agreed on yet.

Oana: We also use Remote Debugging for Android. We have some ideas for improvements there, as we sometimes guide users to use it, and we use it ourselves.

Honza: Sure, we can talk about what we can improve here as well.

Oana: Also, we use the performance tab of the DevTools, and the Network tab.

Honza: Cool. Highlight in a document how, what, and why you use DevTools in your triage process.

June 27 2022

Verification of shipped Intervention/UA Override (Honza)

honza: as discussed on Slack with Dennis, I was wondering about the process, from your point of view, around every cycle of shipped Interventions/Overrides

oana: at each cycle, 2 weeks before the release date, we perform a verification, both manual and automated, of the list of Interventions/Overrides.

We check with both the Interventions/Overrides enabled/disabled, and based on the results, we conclude if the Interventions/Overrides are needed or not.

If the issue is no longer reproducible with the Intervention/Override disabled, we submit a Bugzilla report to request its removal from the list.

honza: I have seen that some Bugzilla reports for Interventions/Overrides have a corresponding GitHub issue.

oana: some of them have a corresponding GitHub issue because they were first signaled using Webcompat reporter, and they are added to the "See Also" field in order to give context when investigating.

honza: is there a list regarding the Interventions/Overrides?

oana: We have a list we use, created by Dennis

but also in `about:compat`, we can see the active Interventions/Overrides that are in place and their corresponding Bugzilla task.

oana: at each run, we create our own list (containing both the Bugzilla and the Github issue, add status and comments)

oana: we'll create an Intervention/UA Override guidelines document asap

[FYI] Q3 Planning is in progress (SV)

June 14 2022

Firefox Release vs Firefox Nightly reproducibility (SV)

Currently, issues that are reproducible on Release but not on Nightly are closed as Won't fix, with a message asking the user to test on the next release.

We previously agreed on this approach with Karl. Should we continue doing so?

honza: we can keep this approach

[FYI] Bug reported QA flow (SV)

We've created a chart and some guidelines on the work performed by the SV QA team after a bug is reported on platform.

 Chart with the QA flow:
 Guidelines - flow explained:  

honza: can this be shared with Webcompat team?

oana: yes, I've created a copy for the team using a Mozilla account; everyone should now have access to view and comment

[Honza] WebCompat Repos

What is the relation, and what does the process look like?

honza: we discussed it, all good

Fast Response Triage details (SV)

Are there any updates on this topic?

honza: things are moving, work in progress, I'll keep you updated

paul: this will be helpful since all the team members are working on it

May 31 2022

Welcome Honza

Introduction to webcompat. QA tasks.

Previous Sync meetings:

- Sync meetings with Karl

- Sync meetings with Mike