Compatibility/Meetings/2022-07-26

Minutes

Scribed by Thomas

Report: Web compat risks (Honza)

  • Summary from James: Risk Factors (based on the State of CSS 2021 survey; risk: Firefox is the only browser not supporting the feature)
      • CSS Container Queries
      • CSS :has()

Notes:

  • Honza: these are the top two issues as found by the survey. Anything we can do better?
  • James: to get this result, I looked at the spreadsheet (categorized by bgrinstead?) of APIs supported by Chrome nightlies but not Firefox nightlies. Risk is higher the closer Chrome is to shipping a feature compared to us. But we should have known this already, since Chrome and Safari were pushing these for Interop 2022 earlier. Regardless, the method I used isn't perfect, as the data is quite coarse (caniuse.com doesn't really break down much beyond "is this feature supported"). Maybe browser-compat-data would give finer-grained data, but then it's hard to tell what's important. (A rough sketch of that kind of comparison follows these notes.)
  • Dennis: I think in the future, we will have a stronger signal from our daily work and find it easier to identify trends without retroactive study.
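
A minimal sketch of the kind of comparison mentioned above, assuming the published @mdn/browser-compat-data (BCD) JSON layout. The URL, the chosen areas, and the heuristics for flags/prefixes are illustrative assumptions, not the actual script James used:

  # Walk the BCD tree and list features Chrome supports but Firefox does not.
  import json
  import urllib.request

  BCD_URL = "https://unpkg.com/@mdn/browser-compat-data/data.json"

  def supported(entry):
      """True if a BCD support entry reports an unflagged, unprefixed release."""
      if isinstance(entry, list):        # multiple ranges: take the first, most current one
          entry = entry[0] if entry else {}
      if not isinstance(entry, dict):
          return False
      if entry.get("flags") or entry.get("prefix"):
          return False
      return bool(entry.get("version_added"))

  def walk(node, path, gaps):
      """Recursively visit features; record those supported in Chrome but not in Firefox."""
      if not isinstance(node, dict):
          return
      compat = node.get("__compat")
      if compat:
          support = compat.get("support", {})
          if supported(support.get("chrome")) and not supported(support.get("firefox")):
              gaps.append(".".join(path))
      for key, child in node.items():
          if key != "__compat":
              walk(child, path + [key], gaps)

  def chrome_only_features():
      with urllib.request.urlopen(BCD_URL) as resp:
          bcd = json.load(resp)
      gaps = []
      for area in ("css", "api"):        # the areas most relevant to the survey results
          walk(bcd.get(area, {}), [area], gaps)
      return gaps

  if __name__ == "__main__":
      for feature in chrome_only_features()[:20]:
          print(feature)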

Report: Calculating issue score (Honza)

Introducing a list of new properties (factors) that we set as part of triage, instead of calculating severity/user_base_impact at triage time [1] (an illustrative example entry is sketched after the list):

  • url (tracking issue)
  • site (URL of broken site or page)
  • platform (list of affected platforms)
  • last_reproduced (most recent date the issue was successfully reproduced)
  • intervention (URL of the intervention that is shipping or has shipped. Link to the code in the GitHub repository, and use canonical URLs to ensure persistence over time)
  • impact (Type of breakage: "site_broken", "feature_broken", "significant_visual", "minor_visual", "unsupported_message")
  • affected_users (What fraction of users are affected. 'all' where any site user is likely to run into the issue, 'some' for issues that are common but many users will not experience, and 'few' where the breakage depends on an unusual configuration or similar.)
  • resolution (If the issue no longer reproduces on this site, the kind of change that happened. 'site_change' if there was a general redesign or the site is no longer online, 'site_fixed' if the specific issue was patched.)
  • notes (Any additional notes about why the other fields for this issue are set to the given values.)
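
A hypothetical example of what one annotated knowledge-base entry might look like with the fields above, written as a Python dict; the concrete values, the placeholder bug id, and the serialization format are assumptions for illustration only:

  example_entry = {
      "url": "https://bugzilla.mozilla.org/show_bug.cgi?id=0000000",  # tracking issue (placeholder id)
      "site": "https://example.com/login",                            # broken page
      "platform": ["windows", "android"],
      "last_reproduced": "2022-07-20",
      "intervention": None,              # no shipped intervention yet
      "impact": "feature_broken",
      "affected_users": "some",          # common, but many users will not hit it
      "resolution": None,                # still reproduces
      "notes": "Login via the third-party provider fails; other flows work.",
  }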

There is also [2] from Kate Hudson about measuring ROI/user impact for WebCompat issues using Privacy Preserving Measurements over DAP.

Notes:

  • Honza: what kind of formula do we want to use, given the inputs above? Do we need more fields, or is this good enough?
  • James: I think we should separate out "affected users", not just base it on "site rank". It would be a fractional multiplier; if every user is affected, not just those relying on the specific broken bits of the site, it would be closer to 1.0 (i.e., if only one type of login is broken, the multiplier would be lower than if the whole site is broken).
  • Dennis: another example is that we get a lot of reports about broken facebook.com but it's really specific games on FB that are broken, not the entirety of FB, so we do need to be finer-grained.
  • James: right, Tranco is eTLD+1, so we would see "all of Google is broken" rather than a lesser-used feature or page affecting far fewer users. We need to be able to "suppress" the importance of an issue based on such factors.
  • Honza: so aside from that and coming up with a formula, we should also consider when we will make sure issues are annotated with these fields... the fast-triage meeting, perhaps?
  • James: one obvious (maybe unfair) approach: if you're the person who added the knowledge base entry, you are responsible for updating it? That would require less synchronous time during a meeting. We could also just try it together during the next meeting to make sure everyone is on the same page about what to do here.
  • James: we can probably backfill to some extent with an automated script that checks the related bz/webcompat.com bugs.
  • Tom: If someone didn't find the time to update their own entries and we update them instead, we can add the fields at that point.
  • Tom: given our deadline for the demo report is this week, let's all just pitch in what time we can. And if we find that things are mis-prioritized, we can keep refining our formula.
  • Dennis: we might also delay the deadline for the prototype report if we find it's not any good.
  • Honza: it doesn't have to be perfect, we can always do several prototypes until we're happy.
  • James: user counts are probably an important factor, and platform too. We can weight fields like platform based on the relative number of Firefox users on that platform.
  • Honza: using the same factors/numbers as the perf team does?
  • James: right, for instance Android might end up with less importance this way, but we can also tweak it based on business goals, not just raw user numbers.
  • James: last_reproduced might not be that important/useful. If we have a good intervention, we might not need to prioritize as much either. Impact is hard to turn into a number/multiplier. Affected users we can just turn into a multiplier; "few" could be 0.1, for instance. Resolution could just be a multiplier of 0.0. Notes: (Tom: sorry, I didn't hear this part.) (A rough sketch of one possible formula follows this list.)
  • Dennis: then it's probably worth just iterating on an initial guess.
  • James: I think maybe I will try to make an initial implementation.
  • Honza: again, the first prototype doesn't have to be perfect, as long as we have enough data to try to find the top 5.
  • Honza: Kate's project could help us know how many users actually do visit known non-interoperable sites using a telemetry mechanism built into the browser, but it will take an unknown amount of time to get this data.
  • Dennis: what population of users will it work on? all users, only if users opt into it, etc?
  • Honza: I don't know yet, I only have the details in the doc linked above.
  • Dennis: ok. We just have to be mindful of bias in the telemetry data.
  • James: there is a link in that doc (implementation plan) which suggests it will be enabled for the broader population, presumably except those who have disabled telemetry.
  • James: my initial reading implies that this is just going to count URLs being visited, not data like console logs. So it's hard to gauge how useful it will actually be.
  • Honza: it tells us how often users load a page (by URL), not individual features.
  • Tom: just knowing the URLs alone could help us validate whether our Tranco numbers are reflective of our userbase's real browsing.
  • James: right, it's just not going to help us really know whether individual webcompat issues are being hit. I think it would be wise to talk with Kate now to confirm what is being measured.
  • Honza: overall, let's just add in all the data we can this week, then we'll all pick what *we* would consider the top 5. This should help us with the risks, but trends are still going to be a question mark. I will ask Softvision to keep an eye out and mention it in their highlights. For instance maybe there is a trend with broken APIs in Private Browsing mode.
  • Raul: we make daily notes, so we'll gather URLs and other notes and summarize them in our Friday highlights.
  • Honza: could you find any trends (1 or 2) that you've found over the past few weeks?
  • Raul: I think so.
  • Honza: ok, so if we could have all of this data, then hopefully next Tuesday we'll have something to look at.
  • Dennis: there is a lot of good data in the QA triage.
  • [TODO] Everyone to edit their own entries and add the new fields.
  • [TODO] James to write an implementation of the calculation after that.
  • [TODO] Everyone to pick their personal top 5 issues by the end of this week.
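
A rough sketch of the kind of scoring formula discussed above, reusing the example entry shape from the previous section. Every weight, mapping, and the site_rank_factor parameter is a placeholder assumption; James's actual implementation may differ:

  # Combine the new triage fields into a single number for ranking issues.
  IMPACT_SCORE = {                       # base severity per type of breakage
      "site_broken": 1.0,
      "feature_broken": 0.7,
      "significant_visual": 0.5,
      "minor_visual": 0.2,
      "unsupported_message": 0.4,
  }

  AFFECTED_USERS_FACTOR = {              # fraction of site users likely to hit the issue
      "all": 1.0,
      "some": 0.5,
      "few": 0.1,
  }

  PLATFORM_WEIGHT = {                    # placeholder share of Firefox users per platform
      "windows": 0.7,
      "mac": 0.1,
      "linux": 0.1,
      "android": 0.1,
  }

  def issue_score(entry, site_rank_factor=1.0):
      if entry.get("resolution"):        # no longer reproduces -> score drops to zero
          return 0.0
      score = IMPACT_SCORE.get(entry.get("impact"), 0.0)
      score *= AFFECTED_USERS_FACTOR.get(entry.get("affected_users"), 0.5)
      score *= sum(PLATFORM_WEIGHT.get(p, 0.0) for p in entry.get("platform", []))
      if entry.get("intervention"):      # a shipped intervention lowers the urgency
          score *= 0.3
      return score * site_rank_factor    # e.g. derived from the site's Tranco rank

  # e.g. issue_score(example_entry, site_rank_factor=0.8) -> 0.7 * 0.5 * 0.8 * 0.8 = 0.224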

Report: Trends

QA weekly triage trends (https://github.com/mozilla/webcompat-team-okrs/issues/262)

Highlights:

  • Embedded YouTube video "Share" button is not functional
  • There are a lot of reports about Business Apple not being supported on Firefox ([3])
  • Photoshop is unsupported in Firefox - work in progress ([4], known issue)
  • There are a lot of duplicates for the m.imgur.com site - in Private Mode the site does not load ([5], cause: storage API not available in private mode)
  • A few duplicates are related to broken Google Meet audio in calls ([6], cause: storage API not available in private mode)
  • There are a lot of duplicates for web.whatsapp.com not loading in Private Mode ([7], cause: storage API not available in private mode; related issue on another platform [8]: WebRTC/Graphics)