Compatibility/Meetings/2022-07-19


Minutes

Scribed by Ksenia

What to do about automated tests for interventions? (Tom)

  • honza: Are they proving useful/valuable? Should we spend more time to get them covering more interventions?
  • tom: we haven't been keeping up with the tests. Are they still as useful? Should we invest more time in them?
  • james: did we get to the point where they were set up to run automatically?
  • tom: Oana was running them, but she is on PTO
  • raul: From my point of view they run smoothly, but Oana has more experience with it
  • tom: if we work on it more, it could be a much better tool (tests on Android, etc.). I wonder if we should spend more time on it, and whether we have time to spend on the tests every release cycle?
  • dennis: I do believe tests are important and we should spend time on them. Right now more tests are failing than passing. Wonder if there is something we should be doing differently to make them more reliable
  • dennis: https://github.com/mozilla/webcompat-team-okrs/issues/250#issuecomment-1152173351
  • tom: I was thinking of getting the tests up to date during 105. Some of the interventions are failing because the site has changed, but that is expected. Also the test suite has some bugs.
  • james: If we think they're useful, then we have to commit to it properly, otherwise we will always end up in this situation. Android testing shouldn't be a problem and I'm willing to help with that.
  • tom: We have to resolve any reliability issues for the test runner. Once that's done, we can run tests automatically.
  • james: yeah, I agree. we can also set it up on a schedule or on every central build
  • tom: we need to collect data on failures as well
  • dennis: I like the idea of interventions being tested automatically and it's a good idea to keep working on them
  • [TODO] Tom to spend some time on the tests over the next few release cycles to make sure the test harness is reliable
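The failure-data point above could be handled with something as simple as an append-only results log. A minimal sketch, assuming a JSON-lines file; the function and field names here are illustrative, not anything the team agreed on:

```python
# Hypothetical sketch: append one JSON line per intervention test result,
# so flaky interventions can be told apart from genuinely broken ones
# over multiple runs. All names are illustrative assumptions.
import json
import time


def record_result(path: str, intervention: str, passed: bool, note: str = "") -> None:
    """Append a single test result to a JSON-lines log file."""
    entry = {"ts": time.time(), "intervention": intervention,
             "passed": passed, "note": note}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


def failure_rate(path: str, intervention: str) -> float:
    """Fraction of logged runs for this intervention that failed."""
    runs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            if entry["intervention"] == intervention:
                runs.append(entry)
    if not runs:
        return 0.0
    return sum(1 for r in runs if not r["passed"]) / len(runs)
```

A log like this would let the runner distinguish "site changed, intervention needs updating" from intermittent harness bugs by looking at failure rates over time.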

State of WebCompat Report - Prototype (Honza)

  • honza: Prototype template based on collected feedback.
  • honza: we need to create a prototype of the report. There are 4 sections: Summary, Top 20 issues, Risks, and Trends. What are your thoughts on the prototype? How feasible is it?
  • dennis: I think that it looks reasonable, but I'm concerned that we don't have time to publish it.
  • honza: we only need to come up with a prototype by the end of the month
  • james: what do we want to have in the prototype by the end of July? Should we pick 5 most important issues?
  • honza: How feasible is it to pick the top 5, and maybe try to fill out the other sections, as an exercise?
  • james: do we want to define an algorithm that determines what's most important, based on some score (user impact, severity, etc.)?
  • dennis: we probably should try
  • honza: what do you think about Risks?
  • james: there is https://docs.google.com/spreadsheets/d/1avkVHk21Vf4TkQLftFEF7K-IsmkS2rHhGMfQ-UQ25vE/edit#gid=904556128
  • honza: any other actionable steps?
  • dennis: we should set up a meeting
  • james: we should do investigation before next week and possibly agree on the top issues, but if not we can set up a meeting
  • [TODO] Everyone to start the discussion next week on how we can determine the top 5 issues
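The scoring idea raised above could look something like a weighted sum over a few factors. A minimal sketch; the factor names, weights, and sample data are illustrative assumptions, not an agreed formula:

```python
# Hypothetical sketch of the "score per issue" idea: rank issues by a
# weighted sum of factors such as user impact and severity. The weights
# and factor names below are placeholders, not a decided metric.
WEIGHTS = {"user_impact": 0.5, "severity": 0.3, "reports": 0.2}


def score(issue: dict) -> float:
    """Weighted score over the factors in WEIGHTS (missing factors count as 0)."""
    return sum(weight * issue.get(factor, 0)
               for factor, weight in WEIGHTS.items())


# Illustrative data only, to show the ranking step.
issues = [
    {"name": "site A breakage", "user_impact": 8, "severity": 9, "reports": 4},
    {"name": "site B layout issue", "user_impact": 5, "severity": 3, "reports": 9},
]
top_issues = sorted(issues, key=score, reverse=True)
```

Picking the top 5 would then just be `top_issues[:5]`; the hard part the team discussed is choosing and calibrating the factors, not the ranking itself.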

Allow reporting web compat issues from DevTools (Honza)

  • Honza: Should we support web compat issue reporting from DevTools? All channels? Nightly/DevEdition only?
  • dennis: the main problem is that it could encourage reporting issues from local dev environments, and we can't do anything about those
  • james: could we check that the url is publicly accessible? or ask for a testcase.
  • dennis: we usually mark all testcases that are not on production as P3
  • james: maybe this reporting should have a different flow
  • dennis: we could try enabling it, it's a good idea
  • honza: it could be done in steps, maybe Nightly/dev edition first
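The "publicly accessible URL" check mentioned above could start with a purely local heuristic before any network request. A minimal sketch, assuming the reporter has the page URL; the function name and host rules are illustrative, not an existing DevTools API:

```python
# Hypothetical sketch: flag report URLs that point at a local dev
# environment (localhost, private/loopback IPs), so the reporting flow
# could warn the user or ask for a public testcase instead.
import ipaddress
from urllib.parse import urlparse


def is_likely_local(url: str) -> bool:
    """Heuristically decide whether a URL's host is non-public."""
    host = urlparse(url).hostname or ""
    if host == "localhost" or host.endswith(".local"):
        return True
    try:
        # Private, loopback, and link-local addresses are not global.
        return not ipaddress.ip_address(host).is_global
    except ValueError:
        # Not an IP literal: a regular domain name; assume public here.
        return False
```

This only catches the obvious cases (it treats any domain name as public), so a real flow would likely combine it with an actual reachability check or the testcase request James suggested.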