User:Ashughes/PI Requests v2

Service Model SLA

Update: SLA has been revised (26-JAN-2018)

Our commitment is that 80% of requests get a response within 48 hours and 100% of requests get a response within one week. That doesn't mean we have to work on every request, but we do at least need to triage, prioritize, and respond to the request within the SLA. If a request is deemed high priority, it will be assigned and followed up on by the assignee. If a request is deemed low priority, a response should be given which explains the reason and when the requestor can expect work to begin.
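
As an illustration only, SLA compliance could be computed from the tracking spreadsheet roughly as follows; the timestamps and layout below are invented stand-ins, not the real tracker's data:

  from datetime import datetime, timedelta

  # Hypothetical (received, first_response) pairs pulled from the
  # tracking spreadsheet; real column names and layout may differ.
  requests = [
      (datetime(2018, 1, 8, 9, 0), datetime(2018, 1, 9, 15, 0)),
      (datetime(2018, 1, 10, 14, 0), datetime(2018, 1, 15, 10, 0)),
      (datetime(2018, 1, 12, 11, 0), datetime(2018, 1, 12, 16, 0)),
  ]

  def fraction_within(limit, rows):
      # Fraction of requests whose first response came within `limit`.
      return sum((resp - recv) <= limit for recv, resp in rows) / len(rows)

  print(f"within 48h: {fraction_within(timedelta(hours=48), requests):.0%} (target 80%)")
  print(f"within 1 week: {fraction_within(timedelta(weeks=1), requests):.0%} (target 100%)")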

How it Works

  1. Email is sent to pi-request at mozilla dot com
  2. Request details are added to the spreadsheet
  3. Someone responds to the request in accordance with SLA
  4. Request is completed in due course, spreadsheet is updated along the way (ideally)
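
A minimal sketch of steps 1-2 as a script, assuming a hypothetical message format and spreadsheet layout (a CSV file stands in for the real spreadsheet):

  import csv
  from datetime import datetime
  from email.message import EmailMessage

  # A hypothetical incoming pi-request mail, built inline for the example.
  msg = EmailMessage()
  msg["From"] = "dev@example.com"
  msg["Subject"] = "[PI Request] Verify feature X before Beta"
  msg.set_content("Feature X needs sign-off testing before the Beta cycle.")

  # Step 2: append the request details to the tracking spreadsheet.
  with open("pi_requests.csv", "a", newline="") as f:
      csv.writer(f).writerow([
          datetime.now().isoformat(timespec="minutes"),  # received
          msg["From"],                                   # requestor
          msg["Subject"],                                # summary
          "untriaged",                                   # status
      ])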

Roadmap

Phase 1: Stakeholder Engagement [DONE]
Phase 1a: Gather Feedback [DONE]
Phase 1b: Interviews [DONE]
Phase 2: Documenting Requirements [ON TRACK] : Due mid-Feb 2018
Phase 3: Implementing a Prototype [ON TRACK] : Due early-Mar 2018
Phase 4: Stakeholder Review : Due 1H'2018
Phase 5: Iteration : Due 2H'2018

Phase I: Stakeholder Engagement

Gathering Feedback

In this first part of Phase 1, I will gather feedback from stakeholders (anyone who has interacted with PIRv1.0) via survey and document any key findings.

Survey Results

Responses
  • 32 / 108 responded (30% return rate)
  • 59% from Firefox/Platform developers
  • 22% from QA
  • 19% from other groups
Key Findings
              Overall  Engineers  QA   Others
Satisfaction  3.3      3.4        3.0  3.2
                        Agree                              Disagree                           Score
                        Overall  Engineers  QA   Others    Overall  Engineers  QA   Others
Better than Bugzilla?   19%      10%        43%  20%       41%      52%        14%  40%       -22%
Better than Github?     19%      16%        14%  40%       34%      37%        43%  20%       -21%
Better than Trello?     31%      19%        29%  40%       31%      26%        57%  20%         0%
Better than Wiki?       31%      26%        29%  60%       13%      21%        0%   0%        +18%
The PI Request system...                          Agree                         Disagree                      Score
                                                  Overall  Engineers  QA  Others    Overall  Engineers  QA  Others
provides clear lines of communication             7        5          1   1         0        0          0   0         +7
makes it easier to plan resources                 6        0          3   3         1        0          1   0         +5
provides a clear status of work                   4        0          2   2         7        4          2   1         -3
provides a clear scope of work                    5        3          2   0         2        0          2   0         +3
provides a clear understanding of the process     2        1          0   1         2        2          0   0          0
reveals artifacts/history of a request            0        0          0   0         4        2          1   1         -4
has data that is current and accurate             0        0          0   0         1        0          1   0         -1
provides a single, consolidated source of truth   2        1          0   1         1        1          0   0         +1
makes it easy to find a specific request          0        0          0   0         2        0          1   1         -2
makes it easy to report a request                 5        3          2   0         0        0          0   0         +5
provides rapid response to testing needs          2        0          0   0         0        0          0   0         +2

(In both tables, Score appears to be the overall agree figure minus the overall disagree figure.)

Interviews

Update: This phase has been completed (26-JAN-2018)

In this second part of Phase 1, I will conduct interviews with survey respondents to clarify and dig deeper into the feedback they submitted and into their requirements. While these interviews are happening, the survey should be re-sent to those who did not respond, including any stakeholders who may have been left out.

To respect the privacy and anonymity of participants, individual responses will not be revealed. Key findings will be documented as part of the next phase.

Phase II: Requirements Documentation

Update: This phase has begun in tandem with Phase III (26-JAN-2018)

After consultation with peers, ServiceNow has emerged as the most promising candidate on the surface. The next step is to develop a workflow design document together with the ServiceNow team. This document will be reviewed and the workflows implemented in the current ServiceNow instance for experimentation. The design document should be completed by mid-February, with the ServiceNow setup implemented by early-March.

This phase will be completed in two stages. First, I will summarize the feedback I received through interviews into criteria that can be used to evaluate other tools and into pain points that can be addressed in the current tool in the short term. Second, I will document the most common workflows so that they can be implemented in the initial version of the new tool. Anything that does not fit into either of these stages will be documented for implementation in future iterations.

Anticipated Areas of Concern:

  • Metrics
  • Alerting
  • Linking to other tools (Bugzilla, Github, Trello, etc.)
  • Easy to make a request
  • Easy to discover request status
  • Clear task assignment
  • Clear task scope
  • Clear indication/definition of "done"

Documenting Key Criteria

Requested Features

  • Be able to trigger sign-off emails in a predefined format either by clicking a button or automatically when certain criteria are met.
  • Templatize the requestor interface to enforce high-quality standards for incoming requests and to eliminate the guesswork of what information is needed.
  • Send periodic status updates to the requestor, particularly if the task is large in scope.
  • Be able to set feedback intervals (realtime, weekly, on completion, etc)
  • Be able to see overall progress at a team level
  • Work should be tagged by target milestone
  • Be able to search for a specific request or a bucket of requests more easily
  • Be able to see which features come in / ship late
  • Be able to hide some requests by default particularly in case when security is a concern
  • If we use an existing tool, make sure the branding clearly differentiates PI so that requests don't inadvertently go into the wrong pipeline (e.g. we don't want SD requests in PI, nor PI requests in SD)
  • Be able to archive old requests so they don't add noise to existing/incoming work but remain searchable for historical/metrics purposes
  • Ensure we maintain the improved communication between QA and Dev, reduce the need for QA to hunt down work they should be doing
  • Be able to trigger one-to-one follow-up conversations either automatically or manually once a task has been assigned (often these conversations inform testing needs and eliminate guesswork)
  • Be able to request features and do sentiment analysis of the tool
  • Be able to estimate scope accurately
  • Be able to request specific skills to streamline assignment of qualified testers
  • Facilitate a kickoff meeting and have video/transcript as an artifact of the request
  • Have a list of commonly requested follow up info (build, steps, screenshots, etc)
  • Make sure everything is clearly documented and that there are clear examples of a "good" request
  • Be able to have a clear indication of turnaround time on when a request will be acknowledged, assigned, and completed
  • Important to have continuity of tester, have them participate in meetings, etc. The system should track a tester's previous work so requests can be assigned according to not just skills but developed experience/relationships.
  • Testers need to be able to ask questions and make suggestions of things the requestor may not anticipate
  • Need to provide a value proposition over simply need-info flagging a tester on a bug; the key is to have someone assigned quickly
  • Be able to submit a request by sending an email based on a predefined template
  • Requestor should be able to update the task if scope/timelines change
  • Be able to track various stages of sign-off (e.g. Trello supports a checklist: Mid-Nightly, Pre-Beta, Pre-Release)
  • Request should show its own history including comments
  • Users should be able to subscribe to a request or categories of requests
  • Users should be able to get notifications of changes
  • Integrate into existing workflows (EPMs use Trello, Developers use Bugzilla/Github)
  • Requests should hold the requestor's hand in terms of testing needs, platforms, devices, necessary reference documents, skill needs, timelines, etc.
  • Enable UX to work with the tester to develop a clear sense of testing needs and complexity
  • If a feature is dependent on a PI request and that feature moves milestones in Trello the PI Request should be updated and stakeholders notified
  • Develop a workflow for requests that come in really early, on schedule, late, and ad hoc
  • Request status should include external dependencies (e.g. under review, under UX, ready for QA, QA complete, ready for release); see the status sketch after this list
  • Periodically survey users of the system to gauge satisfaction
  • Be able to report on various metrics including ability to scope accurately, ability to complete work, ability to resource according to incoming demands, ability to meet SLA, plan for future, etc
  • Alerting to mitigate requests slipping through the cracks or slipping deadlines
  • Autocompletion of similar requests to prevent submitting duplicates but also to assist with cloning a similar task
  • Be able to set up and schedule recurring requests
  • Be able to identify if a request is necessary or not (sometimes people file requests just to be safe) and have a workflow to deal with unnecessary requests or prevent unnecessary requests from being filed altogether
  • Have clear definitions of jargon, terminology, status, etc
  • Not everything belongs in PI; we need a way to weed these out before submission and suggest more appropriate tools (e.g. Bugzilla)
  • When conversations happen on Slack/IRC or elsewhere have a way to include that as an artifact of the request
  • Automatic translation would be useful for ESL users
  • Think about community participation in PI as a usecase
  • Be able to resurrect cold requests
  • Be able to track PI resource allocation and availability
  • Not everything has feature release criteria -- feature release criteria should be baked into the system, but the system should be smarter about when/why users are prompted for them
  • Suggest types of testing and needs based on similar requests
  • Mapping to other tracking docs (e.g. feature release tracking GDocs, Trello, Bugzilla, etc.) would be useful, and perhaps those docs could be made redundant if the new system provides these features
  • Be able to prioritize work at large in the same place we track individual request status
  • Be able to provide more transparency around prioritization decisions
  • Be able to deprioritize tasks
  • Be able to query for specific tasks or terms
  • For services it would be useful to call out client vs server-side testing
  • Be able to delegate sub-requests and roll out the workload for a project
  • Be able to request multiple people to work on a task or different aspects of a task
  • Provide regular feedback on the progress of a request
  • Provide regular feedback on interacting with PI to identify opportunities for tools / process improvements
  • Need to be able to have a relationship between Trello, Bugzilla, and PI for security reviews (all features should be triaged for Security Assurance)
  • Support tracking work by due date
  • Dashboard of work in progress, work in queue, work completed and metrics to support resourcing decisions
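
Several items above (external-dependency statuses, staged sign-offs) imply an underlying status model. A sketch of one, where the state names come from the list above but the transitions between them are assumed:

  # Hypothetical status model; the states come from the requested
  # features above, the allowed transitions are guesses.
  TRANSITIONS = {
      "under review": {"under UX", "ready for QA"},
      "under UX": {"ready for QA"},
      "ready for QA": {"QA complete"},
      "QA complete": {"ready for release"},
      "ready for release": set(),
  }

  # Staged sign-offs, as in the Trello checklist example above.
  SIGN_OFF_STAGES = ["Mid-Nightly", "Pre-Beta", "Pre-Release"]

  def advance(current, target):
      # Validate a status change before recording it on the request.
      if target not in TRANSITIONS[current]:
          raise ValueError(f"cannot move from {current!r} to {target!r}")
      return target

  status = "under review"
  status = advance(status, "ready for QA")
  status = advance(status, "QA complete")
  print(status)  # QA complete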

The PI Request system should...

Pain Points

  • Submitting a PI Request seems redundant if PI resources will not be assigned; particularly true in cases where a team has dedicated QA or when the work involved is strictly landing code via Bugzilla (e.g. updates to Talos, infrastructure configuration changes, etc.).
    • Perhaps for teams with a dedicated QA what makes the most sense is A) no requests come through PI unless additional help is required (i.e. not enough cycles) or B) the QA person is technically part of PI although permanently assigned to their team, and *all* their requests come through PI but are automatically assigned to them without prioritization -- in this case, handing work off for prioritization by SV is at their discretion. (Should run this past teams that have this arrangement: Mobile and Services.)
  • It's very difficult to know the current status of a request, both for requests in the backlog and for active requests. Some people send an email to the person assigned to the work; others try to navigate the spreadsheet.
  • PMs use Trello or Github cards to track ongoing and upcoming feature work. These systems need to work together and integrate more easily. There needs to be a single source of truth for the entire life-cycle of a feature, one that easily visualizes the entire scope of completed/active/incoming work.
  • Automated testing is done, for the most part, in our CI systems. There needs to be a way to integrate these results.
  • It's difficult to know when a PI Request is needed (both the timing and which projects need one). Not all work needs a PI Request.
  • It's difficult to know if a Trello card has a matching PI Request and vice versa due to a discrepancy in naming convention and a lack of automatic or forced-manual linkage.
  • The request process we go through every release (i.e. "get your requests in by X") requires a lot of prodding of developers. If I know something is going to ship in a later milestone, it would be nice to be able to get that request in early and tag it so I don't have to worry about it.
  • Not all requests coming in to PI need testing, but tracking them would still be useful from a backlog-management perspective
  • It's difficult to see a high-level overview of the status of work (e.g. with Trello it's easy to see where the overall work is at -- which cards are in each bucket)
  • While past requests can be filtered out, doing so adds to the overhead of the spreadsheet
  • Cannot view testcases for other, similar requests
  • Cannot view other requests to use as an example
  • Disparate systems for tracking (BZ, PI) and comms (ML, BZ, Email, Vidyo)
  • Some testers need mentorship when it comes to a request of a highly technical nature, eg. writing scripts to test/debug an issue.
  • There is a lot of back and forth before a request is ready to be worked on
  • Not all requests are features; some are requests for automation, regression testing, A/B testing, etc; asking for information not relevant to the request wastes time
  • "Timeline"/"Duration" is really confusing and poorly defined
  • Testplans are not immediately discoverable unless requested
  • Need to improve clarity and transparency around where my request is in the queue, its priority and the prioritization process, and its current status
  • It's unclear who is consuming the information added/revised in the spreadsheet, comments in particular
  • API testing is usually covered well by automation and we just need help edge-case hunting; this use case isn't covered well by the current system.
  • Scope needs to be locked in to prevent creeping demands on resources
  • It is unclear sometimes if/when/who should respond to a request
  • People are not proactive enough about filing security review requests
  • There is not a good way to track when something is due apart from Firefox milestones or ambiguous priority numbers

Roles

  • Customer: a person submitting a request
  • Manager: a person managing the incoming requests
  • Handler: a person working on a request
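
A minimal sketch of how these roles might attach to a request record; all field names are invented for illustration:

  from dataclasses import dataclass, field
  from typing import Optional

  @dataclass
  class PIRequest:
      summary: str
      customer: str                  # person submitting the request
      manager: Optional[str] = None  # person managing incoming requests
      handler: Optional[str] = None  # person working on the request
      status: str = "untriaged"
      comments: list = field(default_factory=list)

  req = PIRequest("Verify feature X before Beta", customer="dev@example.com")
  req.manager = "pi-lead@example.com"  # triaged within the SLA window
  req.handler = "tester@example.com"   # assigned once deemed high priority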

Workflows

QA PMs:

  • Check Trello 2x/week
  • Add any new features to a tracking spreadsheet
  • Cross reference feature tracking spreadsheet to PI spreadsheet
  • Other team leads will triage Bugzilla for fix verifications
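
The cross-referencing step is manual today. A rough sketch of automating it by title matching, which is deliberately naive -- the naming-convention mismatch called out under Pain Points is exactly what breaks this approach:

  # Hypothetical exports: card titles from the Trello feature board and
  # request titles from the PI spreadsheet.
  trello_cards = {"Feature X", "Feature Y", "Feature Z"}
  pi_requests = {"Feature X", "Feature Z", "Clock skew telemetry"}

  # Features tracked in Trello with no matching PI request, and vice versa.
  for title in sorted(trello_cards - pi_requests):
      print(f"no PI request for Trello card: {title}")
  for title in sorted(pi_requests - trello_cards):
      print(f"no Trello card for PI request: {title}")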

Developers:

EPMs:

Release Managers:

QA Engineers:

Automation Engineers:

Security Engineers:

Suggested V1 Tweaks

  • Track milestones
  • Track scope estimations and actuals
  • Separate spreadsheets per milestone with a single repository (could be a wiki page)
  • Enforce manual linkage between Trello and PI Requests via template
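
One way to enforce that linkage is a template for the existing mail-based intake; a sketch with purely illustrative field names and values:

  from string import Template

  # Hypothetical request template; required fields force the Trello
  # link, milestone, and scope estimate called for above.
  PI_REQUEST_TEMPLATE = Template(
      "Summary: $summary\n"
      "Trello card: $trello_url\n"
      "Target milestone: $milestone\n"
      "Scope estimate (person-days): $scope\n"
      "Testing needed: $testing\n"
  )

  print(PI_REQUEST_TEMPLATE.substitute(
      summary="Verify feature X before Beta",
      trello_url="https://trello.com/c/abc123",  # illustrative card URL
      milestone="Firefox 60",
      scope="5",
      testing="exploratory + sign-off",
  ))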

Workflow Designs

Phase III: Implementing a Prototype

Update: This phase has begun in tandem with Phase II (26-JAN-2018)

After consultation with peers, ServiceNow has emerged as the most promising candidate on the surface. The next step is to develop a workflow design document together with the ServiceNow team. This document will be reviewed and the workflows implemented in the current ServiceNow instance for experimentation. The design document should be completed by mid-February, with the ServiceNow setup implemented by early-March.

In this phase, I will take the criteria identified in Phase II and use that to evaluate the best candidate tool to use as a foundation for v2.

Possible Tools

  • Trello
  • Bugzilla
  • ServiceNow (aka HUB)
  • Google Doc
  • Wiki
  • ...
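
A sketch of how the Phase II criteria could drive this evaluation via a simple weighted score. The criteria echo the Anticipated Areas of Concern above, but every weight and score here is a placeholder, not real data:

  # Placeholder weights for criteria drawn from the areas of concern.
  criteria_weights = {
      "easy to make a request": 3,
      "clear request status": 3,
      "linking to other tools": 2,
      "metrics": 2,
      "alerting": 1,
  }

  # Placeholder 0-5 scores per tool; to be filled in from Phase II findings.
  scores = {
      "Trello": {"easy to make a request": 4, "clear request status": 4,
                 "linking to other tools": 2, "metrics": 1, "alerting": 2},
      "Bugzilla": {"easy to make a request": 2, "clear request status": 3,
                   "linking to other tools": 4, "metrics": 3, "alerting": 3},
      "ServiceNow": {"easy to make a request": 3, "clear request status": 4,
                     "linking to other tools": 3, "metrics": 4, "alerting": 4},
  }

  def total(tool):
      # Weighted sum of a tool's scores across all criteria.
      return sum(criteria_weights[c] * s for c, s in scores[tool].items())

  for tool in sorted(scores, key=total, reverse=True):
      print(f"{tool}: {total(tool)}")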