Compatibility/Meetings/2024-01-09
- Web Compatibility Meeting - 2024-01-09
- Previous minutes: 2023-11-21
Minutes
- Scribe: Dennis
- Chair: Raul
🔥 Google Search on Android is on fire - quick update on the status quo 🔥 (Dennis)
- Incident with Google search returning a blank page for Android users.
- Google are rolling out a fix on their end; this is in progress and seems to fix the issue, but full rollout might take a few more hours.
- Appears to just be broken UA sniffing in the server-side code (see the sketch below).
- Dennis missed breakfast.
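- Note: purely as illustration (not Google's actual code), a minimal Python/Flask sketch of how server-side UA sniffing can serve Firefox on Android a blank page:

```python
# Purely illustrative -- NOT Google's actual code. Shows how server-side
# UA sniffing can hand Firefox on Android a blank page: the check only
# anticipates Chrome-style Android UAs and falls through for Gecko.
from flask import Flask, request

app = Flask(__name__)

def render_mobile_results():
    return "<html>mobile search results</html>"

def render_desktop_results():
    return "<html>desktop search results</html>"

@app.route("/search")
def search():
    ua = request.headers.get("User-Agent", "")
    if "Android" in ua:
        if "Chrome/" in ua:
            return render_mobile_results()
        # Firefox for Android sends e.g.
        # "Mozilla/5.0 (Android 14; Mobile; rv:121.0) Gecko/121.0 Firefox/121.0"
        # which misses the branch above and gets an empty response.
        return ""
    return render_desktop_results()
```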
Proactive & Reactive WebCompat in 2024 (Honza)
- Proactive
* Interop 2024 - reach a good score on the public dashboard
* Provide tooling helping to understand changes in the score
- Reactive
* Ship a new system for collecting, triaging and prioritizing web compat issues
* Form a reasonable metric for what we can influence
* Move the metric in the right direction
- Projects
- Reactive WebCompat System (reporter tool, triaging dashboard, analysis KB & SoWC Report)
- "Firefox not supported" analysis (crawler & classification engine)
- Collect engagement data (time spent on page, site popularity in Firefox, Tombstone telemetry analysis)
- Regular diagnoses (+ feedback for the DevTools team)
- Interop
- WebDX
- PBM testing
- Honza: I wanted to summarize the goals and expectations for this year, focusing on H1. We want to have a list of individual goals. No deadline yet, but last time, it was at the end of February. Let's aim for having a good idea by the end of January, so we have time.
- Honza: The list of proactive expectations is for the entire WebCompat area, not just work for our team - a lot of the work is on other teams' plates (like Platform work). For some things, we're directly involved: providing good tooling to understand the impact of changes, why the score changed, what made the impact, and running some predictions.
- Honza: The reactive part is mostly our area. We're shipping a new system for collecting reports, which is currently going through an experiment. We're also building the new triage dashboard, identifying root causes, recommending work to the platform teams, ... Part of that is coming up with a metric. Lots of teams track metrics: the perf team, for example, tracks Speedometer. Everyone knows this is hard, but we should try coming up with something. After we've shipped the reporter and the tooling, it will become easier to understand the data, and how we could use it to build a metric.
- Honza: When we have a metric, we can check if the metric is moving in the right direction, if our recommendations to other teams make sense, and use it to make predictions and set goals. We'll probably be shipping the new reporting system in February, so Q1 will be a lot about learning and adjusting the tooling. In Q2, we'll be using the system, and we might get a good idea of what a metric could be: just the absolute number of reports? the number of sites claiming to not support Firefox? use additional data (like user engagement data)? Ted will be helping us with collecting the right data, and getting related things done. Hopefully, in H2, we can work on moving that metric in the right direction.
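- Note: to make the metric discussion concrete, a hypothetical sketch combining the candidate signals mentioned above; the field names and weights are invented for illustration, not a decided formula:

```python
# Hypothetical sketch of a compat metric combining the candidate signals
# discussed: raw report counts, "unsupported" sites, and engagement.
# All field names and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class SiteSignal:
    domain: str
    report_count: int         # user-filed breakage reports
    claims_unsupported: bool  # shows a "Firefox not supported" message
    engagement: float         # e.g. relative time-on-site for Firefox users

def compat_score(sites: list[SiteSignal]) -> float:
    """Lower is better: each broken site contributes in proportion to
    how many reports it draws and how much users engage with it."""
    total = 0.0
    for s in sites:
        penalty = s.report_count * s.engagement
        if s.claims_unsupported:
            penalty *= 2.0  # invented weight for explicit blocking
        total += penalty
    return total
```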
- Honza: We already have some ideas on how to improve the reporter and related systems, like also collecting reports from Mobile, or collecting screenshots. We'll need multiple iterations on the QA triage tool to make it work well for them, we'll need to use our data to calculate some ROI, and to build a new State of WebCompat report.
- Honza: Some side-projects will be happening as well: things like crawling the web for "unsupported" messages. We might have to do some things ourselves, as other teams are overloaded.
- Honza: Collecting engagement data could enable us to see if our changes have an actual impact on users. We can use some data points to better identify push factors, but this needs some analysis.
- Honza: There's also just regular issue diagnosis. We need to get back to it, and the hope is that once we have the new system in place, we can be smarter about deciding where to spend our time. Part of that is providing good feedback to the DevTools team to help them make our work easier.
- Honza: There's also Interop/WebDX work. There won't be much coding involved in this, but lots of comms.
- Honza: Eventually, we should go over the issues reported for PBM, and see if they're fixed or not, and provide a report about what's still broken in PBM.
- Honza: We should break those projects down into specific tasks, and assign those tasks to people. This can be the basis for individual goals.
- James: The step between the reporter tool, the triage dashboard, and the diagnosis work is the WebCompat knowledge base, which we have to keep updated. This will also be useful for providing input to the Interop project next year, identifying focus areas, ...
- Honza: Maybe we can have an internal deadline for when we want to have the next State of WebCompat report ready - at the latest, the end of Q2.
- James: +1. At the end of Q1, we'll at least have a list of sites that people file reports on, but we won't yet have a full list of root causes. By the end of Q2, we should have more data on root causes, and this could become the State of WebCompat report. One thing that we agreed upon in Berlin is that we should get back to working on issues, instead of "just" building infrastructure, and we need to do that to have good data.
- Honza: Good data is also needed to decide what the metric should be.
- James: For the "Firefox is Unsupported" project, we should just try to build something. We have ideas for a source of URLs, and we can build local crawlers and run them on someone's machine to see how that looks on a small subset of sites. If we want to do more, we can look into how/where to deploy that somewhere bigger, but at least we'd have an idea of what we want to do.
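- Note: a rough sketch of what such a local crawler could look like (Python + Selenium; the URL list and phrases are placeholders, and simple string matching stands in for the real classification step):

```python
# Rough local-crawler sketch for spotting "unsupported" messages.
# The URL list and phrases are placeholders; string matching stands
# in for the real classification step.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options

UNSUPPORTED_PHRASES = ["not supported", "unsupported browser", "use chrome"]

def check_site(driver, url):
    driver.get(url)
    body_text = driver.find_element(By.TAG_NAME, "body").text.lower()
    return [p for p in UNSUPPORTED_PHRASES if p in body_text]

opts = Options()
opts.add_argument("-headless")
driver = webdriver.Firefox(options=opts)
try:
    for url in ["https://example.com"]:  # small test subset
        hits = check_site(driver, url)
        if hits:
            print(url, "->", hits)
finally:
    driver.quit()
```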
- Honza: I agree. When I was testing WebDriver BiDi, it was fairly easy to get started. If we have a small list of sites, we can just run it. Ksenia made good progress on a classification system, so maybe we can just run it.
- Ksenia: We'd need to test both Desktop and Mobile, which might be more complicated.
- James: If we can get it running on Desktop, that'd be a good start. We can explore running it in the emulator, for example, but that'd be slower and more complicated, so let's maybe start on Desktop and see how it goes. We have experience with automating stuff with the Android Emulator - it's a bit harder, but not impossible; we've done it before.
- Ksenia: So far, I only have a basic Selenium script, and I just overrode the UA string to "test mobile".
- Tom: We'd probably want to tweak that a bit, like emulating touch screens or setting specific screen sizes.
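- Note: a sketch of the tweaks being discussed, assuming Selenium with Firefox: the UA can be overridden via a pref and a phone-sized viewport set; touch emulation has no simple Firefox pref, so it's omitted here. Values are examples, not the ones from Ksenia's script:

```python
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

# Example mobile UA (Firefox for Android style); not necessarily the
# string Ksenia's script uses.
MOBILE_UA = ("Mozilla/5.0 (Android 14; Mobile; rv:121.0) "
             "Gecko/121.0 Firefox/121.0")

opts = Options()
# Override the UA via a Firefox pref, as discussed above.
opts.set_preference("general.useragent.override", MOBILE_UA)
driver = webdriver.Firefox(options=opts)
# Approximate a phone viewport; touch emulation is not covered here.
driver.set_window_size(412, 915)
```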
- Honza: How would we get the list of URLs?
- James and Ksenia: CrUX, the most popular domains in each country.
- James: Brian had a bunch of stuff to do that. Worth noting that there isn't that much overlap between countries, so the list will be large. We could also just test on the Global Top 10k or something. If we just want to get root domains, then we have lots of sources, like Tranco.
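- Note: pulling root domains from a public list like Tranco is only a few lines; the download URL follows Tranco's published top-1M format and should be verified before building on it:

```python
# Sketch: fetch root domains from the Tranco top-1M list.
# URL follows Tranco's published format; verify before relying on it.
import csv
import io
import itertools
import urllib.request
import zipfile

TRANCO_URL = "https://tranco-list.eu/top-1m.csv.zip"

def top_domains(n=10_000):
    with urllib.request.urlopen(TRANCO_URL) as resp:
        archive = zipfile.ZipFile(io.BytesIO(resp.read()))
    with archive.open("top-1m.csv") as f:
        rows = csv.reader(io.TextIOWrapper(f, encoding="utf-8"))
        # Each row is (rank, domain); keep the first n domains.
        return [domain for _rank, domain in itertools.islice(rows, n)]
```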
- Honza: Ksenia, is it possible to run the classification locally with a list like that?
- Ksenia: Yes, I downloaded the model locally and have already integrated it into my Selenium script.
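- Note: a hedged sketch of how a locally downloaded classifier could slot into the crawl loop; the model path, label, and threshold are placeholders, and Ksenia's actual setup may differ:

```python
# Hypothetical wiring of a local text classifier into the crawl loop.
# Model path, label name, and threshold are placeholders.
from transformers import pipeline

classifier = pipeline("text-classification", model="./local-model")

def looks_unsupported(page_text: str) -> bool:
    # Truncate long pages; "unsupported" banners tend to be near the top.
    result = classifier(page_text[:2000])[0]
    return result["label"] == "UNSUPPORTED" and result["score"] > 0.9
```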