MozillaWiki user contributions for Davehunt (Atom feed, retrieved 2024-03-29T00:06:17Z; MediaWiki 1.27.4). Revision of [https://wiki.mozilla.org/index.php?title=Performance/Triage&diff=1249149 Performance/Triage], 2023-12-04T14:37:41Z, by Davehunt: /* Performance triage (pending-needinfo) */ update secondary triage query to only include pending-needinfo bugs that have open needinfos
<hr />
<div>{{DISPLAYTITLE:Performance Triage}}<br />
<br />
{{message/box|If you have any feedback/suggestions/questions regarding the performance triage process, you can share them in {{matrix|perf-triage}}, or reach out to {{people|davehunt|Dave Hunt}} or {{people|frankd|Frank Doty}}.}}<br />
<br />
= Nomination =<br />
== Bugzilla ==<br />
To (re)nominate a bug for triage, set the [[../Bugzilla#Project Flag|Performance Impact flag]] in Bugzilla to <code>?</code>.<br />
<br />
This can be found by clicking '''Show Advanced Fields''' followed by '''Set bug flags''' when entering a new bug:<br />
<br />
[[File:Bugzilla performance nomination on new bug form.png|none]]<br />
<br />
Or by expanding the '''Tracking''' section when editing an existing bug:<br />
<br />
[[File:Screenshot 2022-02-24 at 19.53.54.png|none]]<br />
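The flag can also be set programmatically. The sketch below builds (but does not send) a request for the Bugzilla REST API's update-bug endpoint; the bug number and API key are placeholders, and the <code>cf_performance_impact</code> field name is taken from the triage queries later on this page.<br />

```python
# Sketch: nominating a bug for performance triage via the Bugzilla REST API.
# The endpoint shape follows the public Bugzilla REST documentation; the bug
# number and API key below are placeholders, not real values.
import json

def build_nomination_request(bug_id: int, api_key: str) -> tuple[str, dict]:
    """Return the URL and JSON body for a PUT that sets the flag to '?'."""
    url = f"https://bugzilla.mozilla.org/rest/bug/{bug_id}"
    body = {
        "cf_performance_impact": "?",  # the Performance Impact project flag
        "api_key": api_key,
    }
    return url, body

url, body = build_nomination_request(1234567, "PLACEHOLDER_API_KEY")
print(url)
print(json.dumps(body))
```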
<br />
== GitHub ==<br />
To nominate a bug for triage, add the '''Performance''' label to an issue. This can be done by filing a new issue with the "Performance issue" template:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.22.58.png|none|Screenshot of filing a "Performance issue" template on GitHub]]<br />
<br />
Or by opening an existing issue on GitHub and selecting the label from the right-hand bar:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.32.09.png|Screenshot of adding a performance label on GitHub]]<br />
<br />
Currently, only the following GitHub repositories are supported:<br />
* [https://github.com/mozilla-mobile/fenix/ fenix]<br />
* [https://github.com/mozilla-mobile/android-components/ android-components]<br />
* [https://github.com/mozilla-mobile/focus-android/ focus-android]<br />
<br />
= Queries =<br />
== Performance triage ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "?",<br />
"f3": "CP",<br />
"f4": "OP",<br />
"f5": "product",<br />
"o5": "equals",<br />
"v5": "Core",<br />
"f6": "component",<br />
"o6": "equals",<br />
"v6": "Performance",<br />
"f7": "keywords",<br />
"o7": "notsubstring",<br />
"v7": "meta",<br />
"f8": "cf_performance_impact",<br />
"o8": "isempty",<br />
"f9": "CP",<br />
"f10": "OP",<br />
"f11": "cf_performance_impact",<br />
"o11": "equals",<br />
"v11": "pending-needinfo",<br />
"f12": "flagtypes.name",<br />
"o12": "notsubstring",<br />
"v12": "needinfo",<br />
"f13": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Performance triage (pending-needinfo) ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage (pending-needinfo)",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "pending-needinfo",<br />
"f3": "flagtypes.name",<br />
"o3": "allwordssubstr",<br />
"v3": "needinfo",<br />
"f4": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Recently opened bugs with performance keywords in the summary ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Recently opened bugs with performance keywords in the summary",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"chfield": "[Bug creation]",<br />
"chfieldfrom": "-2w",<br />
"keywords": "crash, intermittent-failure, meta",<br />
"keywords_type": "nowords",<br />
"short_desc": "perf \"load time\" responsiveness jank fast slow memory battery heat GPU CPU SLA",<br />
"short_desc_type": "anywordssubstr",<br />
"f1": "OP",<br />
"f2": "product",<br />
"o2": "equals",<br />
"v2": "Core",<br />
"f3": "product",<br />
"o3": "equals",<br />
"v3": "Fenix",<br />
"f4": "product",<br />
"o4": "equals",<br />
"v4": "Firefox",<br />
"f5": "product",<br />
"o5": "equals",<br />
"v5": "Firefox for iOS",<br />
"f6": "product",<br />
"o6": "equals",<br />
"v6": "Focus",<br />
"f7": "product",<br />
"o7": "equals",<br />
"v7": "Focus-iOS",<br />
"f8": "product",<br />
"o8": "equals",<br />
"v8": "GeckoView",<br />
"f9": "CP",<br />
"f10": "component",<br />
"o10": "notequals",<br />
"v10": "Performance",<br />
"f11": "cf_performance_impact",<br />
"o11": "isempty",<br />
"j1": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
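These advanced-query JSON blocks map directly onto <code>buglist.cgi</code> URL parameters. As an illustration (an assumption about how you might open one outside of the wiki macro), the sketch below URL-encodes the second query:<br />

```python
# Sketch: turning one of the advanced-query JSON blocks above into a
# buglist.cgi URL you can open in a browser. Field names are taken verbatim
# from the "Performance triage (pending-needinfo)" query.
from urllib.parse import urlencode

def query_to_url(query: dict) -> str:
    base = "https://bugzilla.mozilla.org/buglist.cgi"
    return base + "?" + urlencode(query)

pending = {
    "query_format": "advanced",
    "resolution": "---",
    "f1": "OP",
    "f2": "cf_performance_impact",
    "o2": "equals",
    "v2": "pending-needinfo",
    "f3": "flagtypes.name",
    "o3": "allwordssubstr",
    "v3": "needinfo",
    "f4": "CP",
}
print(query_to_url(pending))
```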
<br />
= Triage process =<br />
== Introduction ==<br />
The goal of performance triage is to identify the extent to which bugs impact the performance of our products, and to move these bugs towards an actionable state. The goal is not to diagnose or fix bugs during triage. We triage bugs that have been nominated for triage and bugs in the Core::Performance component that do not have the performance impact project flag set.<br />
<br />
During triage we may do any/all of the following:<br />
* Request further information from the reporter (such as a profile)<br />
* Set the performance impact project flag<br />
* Add performance keywords<br />
* Move the bug to a more appropriate component<br />
<br />
== Who is responsible for triage? ==<br />
Everyone is welcome to take part in triage. By default, everyone on the performance team is enrolled in [[#Triage rotation|triage rotation]], but we also have participants from outside the team.<br />
<br />
== How do I schedule a triage meeting? ==<br />
If you are on triage duty, you will receive an invitation as a reminder to schedule the triage meeting on the [https://calendar.google.com/calendar/u/0?cid=bW96aWxsYS5jb21fOWJrNWYycnFkZXVpcDM4amJlbGQ4NGtwcWNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ shared performance calendar], with the nominated sheriffs invited at a time that works for them. The responsibility for scheduling the meeting falls to the lead sheriff. Once the triage meeting has been scheduled, remove the reminder event from the calendar to avoid confusion. Using the shared calendar increases the visibility of performance triage and allows other members of the team to contribute to or observe the process.<br />
<br />
== What if a sheriff is unavailable? ==<br />
The rotation script is not perfect, and doesn’t know when people are on PTO or otherwise unavailable. If the lead sheriff is available, it is their responsibility to either schedule the triage with the remaining available sheriff or to identify a suitable substitute for the unavailable sheriff(s). If the lead sheriff is unavailable, this responsibility passes to the remaining available sheriffs.<br />
<br />
== How do I run a triage meeting? ==<br />
The following describes the triage process to follow during the meeting:<br />
<br />
# Ask if others would prefer you to share your screen. This can be especially helpful for those new to triage.<br />
# Open the [[#Performance triage|first triage query]] to show bugs nominated for triage or in the Core::Performance component without the performance impact project flag set. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* Bugs that look like tasks that were filed by members of the Performance team will generally need to be moved to the Core::Performance Engineering component.<br />
#* For defects: Determine if the bug is reproducible and actionable. If not, add a needinfo for the reporter asking for more information, set the performance impact project flag to pending-needinfo, and then move on to the next bug. We have a [[#New bug|template]] that you can modify as needed.<br />
#* For all bugs (including enhancements):<br />
#** Set the [[#How do I determine the performance impact project flag?|performance impact project flag]].<br />
#** Add the appropriate [[#How do I determine the performance keywords?|performance keywords]].<br />
#** Move the bug to the correct [[#How do I determine the correct Bugzilla component?|Bugzilla component]].<br />
# Open the [[#Performance triage (pending-needinfo)|second triage query]] to show bugs that are awaiting further information to determine the performance impact. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* If the performance impact project flag was set to pending-needinfo less than 2 weeks ago, move on to the next bug.<br />
#* If the performance impact project flag was set to pending-needinfo more than 2 weeks ago but less than 2 months ago, consider adding a needinfo for either: another reporter of the issue, someone with access to the appropriate platform(s) to attempt to reproduce the issue, or a relevant subject matter expert.<br />
#* If the performance impact project flag was set to pending-needinfo more than 2 months ago, close the bug as inactive. You can modify the [[#No response from reporter|inactive bug template]] as needed.<br />
# If time permits, open the [[#Recently opened bugs with performance keywords in the summary|third triage query]] to show recently opened bugs with performance related keywords in the summary. If any of these look like performance bugs, they can either be triaged the same way as bugs in the initial query or they can be [[#Bugzilla|nominated for triage]] in a subsequent meeting.<br />
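The aging rules in step 3 above can be sketched as a small helper (illustrative only; "2 months" is approximated here as 60 days):<br />

```python
# Sketch of the pending-needinfo aging rules as a helper: given when the flag
# was set, return the suggested action. The 2-week and 2-month thresholds come
# from the guidelines above; the exact day count for "2 months" is an
# approximation chosen for this example.
from datetime import date, timedelta

def pending_needinfo_action(flag_set_on: date, today: date) -> str:
    age = today - flag_set_on
    if age < timedelta(weeks=2):
        return "skip"            # too recent: move on to the next bug
    if age < timedelta(days=60):
        return "extra-needinfo"  # ask another reporter, platform owner, or SME
    return "close-inactive"      # use the inactive bug template

print(pending_needinfo_action(date(2023, 1, 1), date(2023, 1, 10)))  # skip
```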
<br />
== What if things don't go as expected? ==<br />
Don't panic! The triage process is not expected to be perfect, and can improve with your feedback. Maybe the result of the triage calculator doesn't feel right, or you find a scenario that's not covered in these guidelines. In this case we recommend that you bring it up in {{matrix|perf-triage}}, or consider scheduling a short meeting with some triage leads (you can see some recent leads in the [[#Triage rotation|triage rotation]]). If in doubt, leave a comment on the bug with your thoughts and move on. There's a chance someone will respond, but if not, the next performance triage sheriffs may have some other ideas.<br />
<br />
== How do I determine the performance impact project flag? ==<br />
The [[../Bugzilla#Project Flag|performance impact project flag]] is used to indicate a bug’s relationship to the performance of our products. It can be applied to all bugs, and not only defects. The [[#Triage calculator|triage calculator]] should be used to help determine the most appropriate value for this flag. In addition to setting the performance impact project flag, make sure to use the “Copy Bugzilla Comment” button and paste this as a comment on the bug.<br />
<br />
If you do not have enough information to set the performance impact project flag, open a needinfo request against an appropriate individual (such as a reporter), and set the performance impact project flag to pending-needinfo.<br />
<br />
For more information about what this flag and its settings mean, see this [https://blog.mozilla.org/performance/2022/11/07/understanding-performance-impact/ blog post].<br />
<br />
== How do I determine the performance keywords? ==<br />
There are several [[../Bugzilla#Keywords|performance related keywords]], which can be helpful to understand how our performance issues are distributed, or whenever there’s a concerted effort to improve a particular aspect of our products. The [[#Triage calculator|triage calculator]] may recommend keywords to set, and by typing “perf:” in the keywords field in Bugzilla, you will see the available options. Select all that apply to the bug.<br />
<br />
== How do I determine the correct Bugzilla component? ==<br />
Ideally we would only have bugs in the Core::Performance component that are the responsibility of the engineers in the performance team. For performance bugs to have the best chance of being fixed, it's important to assign them to the correct component. In some cases the correct component will be obvious from the bug summary, description, or steps to reproduce. In other cases, you may need to do a bit more work to identify the component. For example, if there's a profile associated with the bug, you could see where the majority of time is being spent using the category annotations.<br />
<br />
== How do I read a performance profile? ==<br />
It's useful to be able to understand a profile generated by the [https://profiler.firefox.com/ Firefox Profiler], and hopefully someone in the triage meeting will be able to help. If you find an interesting profile, or just want to understand how to use them to analyse a performance problem, we encourage you to post a link to the profile (or bug) in [https://chat.mozilla.org/#/room/#joy-of-profiling:mozilla.org #joy-of-profiling] where someone will be happy to help. The profile may even be analysed during one of the regular "Joy of Profiling" open sessions that can be found on the [https://calendar.google.com/calendar/embed?src=c_cbjhkf8gu6anajlklhuo04hpko%40group.calendar.google.com&ctz=Europe%2FLondon Performance Office Hours calendar].<br />
<br />
= Triage calculator =<br />
The [https://mozilla.github.io/perf-triage/calculator.html Performance Impact Calculator] was developed to assist in identifying and applying the [[../Bugzilla#Project Flag|performance impact project flag]] and [[../Bugzilla#Keywords|performance keywords]] consistently. If you have feedback or would like to suggest changes to this tool, please share these in the [https://chat.mozilla.org/#/room/#perf-triage:mozilla.org #perf-triage Matrix channel].<br />
<br />
= Triage rotation =<br />
Sheriffs are allocated on a weekly basis; the schedule is published [https://mozilla.github.io/perf-triage/ here]. The rotation is generated by [https://github.com/mozilla/perf-triage/blob/main/rotation.py this script].<br />
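For illustration only (this is not the logic of rotation.py), a deterministic weekly selection might look like the sketch below; the team names are placeholders:<br />

```python
# Illustrative sketch of a weekly sheriff rotation: pick a lead and a
# secondary deterministically per ISO week, so every run for the same week
# agrees. This is an assumption about one possible approach, not the actual
# rotation.py implementation.
import hashlib
from datetime import date

TEAM = ["alice", "bob", "carol", "dan"]  # placeholder names

def sheriffs_for_week(d: date) -> tuple[str, str]:
    year, week, _ = d.isocalendar()
    # Hash the ISO year-week so the choice is stable for the whole week.
    seed = int(hashlib.sha256(f"{year}-{week}".encode()).hexdigest(), 16)
    lead = TEAM[seed % len(TEAM)]
    secondary = TEAM[(seed + 1) % len(TEAM)]
    return lead, secondary

print(sheriffs_for_week(date(2024, 3, 29)))
```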
<br />
= Templates =<br />
== New bug ==<br />
This template is included in the description for new bugs opened in the Core::Performance component. If a bug is opened in another component and then moved to Core::Performance, this template can be used as needed to request additional information from the reporter.<br />
<br />
<pre><br />
### Basic information<br />
<br />
Steps to Reproduce:<br />
<br />
<br />
Expected Results:<br />
<br />
<br />
Actual Results:<br />
<br />
<br />
---<br />
<br />
### Performance recording (profile)<br />
<br />
Profile URL:<br />
(If this report is about slow performance or high CPU usage, please capture a performance profile by following the instructions at https://profiler.firefox.com/. Then upload the profile and insert the link here.)<br />
<br />
#### System configuration:<br />
<br />
OS version:<br />
GPU model:<br />
Number of cores: <br />
Amount of memory (RAM): <br />
<br />
### More information<br />
<br />
Please consider attaching the following information after filing this bug, if relevant:<br />
<br />
- Screenshot / screen recording<br />
- Anonymized about:memory dump, for issues with memory usage<br />
- Troubleshooting information: Go to about:support, click "Copy text to clipboard", paste it to a file, save it, and attach the file here.<br />
<br />
---<br />
<br />
Thanks so much for your help.<br />
</pre><br />
<br />
== Moved to Core::Performance ==<br />
<pre><br />
This bug was moved into the Performance component. Reporter, could you make sure the following information is on this bug?<br />
<br />
- For slowness or high CPU usage, capture a profile with http://profiler.firefox.com/ , upload it and share the link here.<br />
- For memory usage issues, capture a memory dump from about:memory and attach it to this bug.<br />
- Troubleshooting information: Go to about:support, click "Copy raw data to clipboard", paste it into a file, save it, and attach the file here.<br />
<br />
Thank you.<br />
</pre><br />
<br />
== No longer able to reproduce ==<br />
<pre>This bug doesn’t seem to happen anymore in current versions of Firefox. Please reopen or file a new bug if you see it again.</pre><br />
<br />
== No response from reporter ==<br />
<pre>With no answer from the reporter, we don’t have enough data to reproduce and/or fix this issue. Please reopen or file a new bug with more information if you see it again.</pre><br />
<br />
== Expected behaviour ==<br />
<pre>This is expected behavior. Please reopen or file a new bug if you think otherwise.</pre><br />
<br />
== Website issue ==<br />
<pre>According to the investigation, this is a website issue. Please reopen or file a new bug if you think otherwise.</pre></div>
<hr />
<div>{{DISPLAYTITLE:Performance Triage}}<br />
<br />
{{message/box|If you have any feedback/suggestions/questions regarding the performance triage process, you can share them in {{matrix|perf-triage}}, or reach out to {{people|davehunt|Dave Hunt}} or {{people|frankd|Frank Doty}}.}}<br />
<br />
= Nomination =<br />
== Bugzilla ==<br />
To (re)nominate a bug for triage, set the [[../Bugzilla#Project Flag|Performance Impact flag]] in Bugzilla to <code>?</code><br />
<br />
This can be found by clicking '''Show Advanced Fields''' followed by '''Set bug flags''' when entering a new bug:<br />
<br />
[[File:Bugzilla performance nomination on new bug form.png|none]]<br />
<br />
Or by expanding the '''Tracking''' section when editing an existing bug:<br />
<br />
[[File:Screenshot 2022-02-24 at 19.53.54.png|none]]<br />
<br />
== GitHub ==<br />
To nominate a bug for triage, add the '''Performance''' label to an issue. This can be done by filing an new issue with the "Performance issue" template:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.22.58.png|none|Screenshot of file a "Performance issue" template on GitHub]]<br />
<br />
Or by opening an existing issue on GitHub and selecting the label from the right-hand bar:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.32.09.png|Screenshot of adding a performance label on GitHub]]<br />
<br />
Currently, only the following GitHub repositories are supported:<br />
* [https://github.com/mozilla-mobile/fenix/ fenix]<br />
* [https://github.com/mozilla-mobile/android-components/ android-components]<br />
* [https://github.com/mozilla-mobile/focus-android/ focus-android]<br />
<br />
= Queries =<br />
== Performance triage ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "?",<br />
"f3": "CP",<br />
"f4": "OP",<br />
"f5": "product",<br />
"o5": "equals",<br />
"v5": "Core",<br />
"f6": "component",<br />
"o6": "equals",<br />
"v6": "Performance",<br />
"f7": "keywords",<br />
"o7": "notsubstring",<br />
"v7": "meta",<br />
"f8": "cf_performance_impact",<br />
"o8": "isempty",<br />
"f9": "CP",<br />
"f10": "OP",<br />
"f11": "cf_performance_impact",<br />
"o11": "equals",<br />
"v11": "pending-needinfo",<br />
"f12": "flagtypes.name",<br />
"o12": "notsubstring",<br />
"v12": "needinfo",<br />
"f13": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Performance triage (pending-needinfo) ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage (pending-needinfo)",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "pending-needinfo",<br />
"f3": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Recently opened bugs with performance keywords in the summary ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Recently opened bugs with performance keywords in the summary",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"chfield": "[Bug creation]",<br />
"chfieldfrom": "-2w",<br />
"keywords": "crash, intermittent-failure, meta",<br />
"keywords_type": "nowords",<br />
"short_desc": "perf \"load time\" responsiveness jank fast slow memory battery heat GPU CPU SLA",<br />
"short_desc_type": "anywordssubstr",<br />
"f1": "OP",<br />
"f2": "product",<br />
"o2": "equals",<br />
"v2": "Core",<br />
"f3": "product",<br />
"o3": "equals",<br />
"v3": "Fenix",<br />
"f4": "product",<br />
"o4": "equals",<br />
"v4": "Firefox",<br />
"f5": "product",<br />
"o5": "equals",<br />
"v5": "Firefox for iOS",<br />
"f6": "product",<br />
"o6": "equals",<br />
"v6": "Focus",<br />
"f7": "product",<br />
"o7": "equals",<br />
"v7": "Focus-iOS",<br />
"f8": "product",<br />
"o8": "equals",<br />
"v8": "GeckoView",<br />
"f9": "CP",<br />
"f10": "component",<br />
"o10": "notequals",<br />
"v10": "Performance",<br />
"f11": "cf_performance_impact",<br />
"o11": "isempty",<br />
"j1": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
= Triage process =<br />
== Introduction ==<br />
The goal of performance triage is to identify the extent to which bugs impact the performance of our products, and to move these bugs towards an actionable state. The goal is not to diagnose or fix bugs during triage. We triage bugs that have been nominated for triage and bugs in the Core::Performance component that do not have the performance impact project flag set.<br />
<br />
During triage we may do any/all of the following:<br />
* Request further information from the reporter (such as a profile)<br />
* Set the performance impact project flag<br />
* Add performance keywords<br />
* Move the bug to a more appropriate component<br />
<br />
== Who is responsible for triage? ==<br />
Everyone is welcome to take part in triage. By default, everyone on the performance team is enrolled in [[#Triage rotation|triage rotation]], but we also have participants from outside the team.<br />
<br />
== How do I schedule a triage meeting? ==<br />
If you are on triage duty, you will receive an invitation as a reminder to schedule the triage meeting on the [https://calendar.google.com/calendar/u/0?cid=bW96aWxsYS5jb21fOWJrNWYycnFkZXVpcDM4amJlbGQ4NGtwcWNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ shared performance calendar] with the nominated sheriffs invited at a time that works for them. The responsibility of scheduling the meeting falls to the lead sheriff. Once a triage meeting has been scheduled, it’s a good idea to remove the reminder event from the calendar to avoid confusion. It’s a good idea to use the shared calendar, as this increases the visibility of the performance triage and allows other members of the team to contribute or observe the process.<br />
<br />
== What if a sheriff is unavailable? ==<br />
The rotation script is not perfect, and doesn’t know when people are on PTO or otherwise unavailable. If the lead sheriff is available, it is their responsibility to either schedule the triage with the remaining available sheriff or to identify a suitable substitute for the unavailable sheriff(s). If the lead sheriff is unavailable, this responsibility passes onto the remaining available sheriffs.<br />
<br />
== How do I run a triage meeting? ==<br />
The following describes the triage process to follow during the meeting:<br />
<br />
# Ask if others would prefer you to share your screen. This can be especially helpful for those new to triage.<br />
# Open the [[#Performance triage|first triage query]] to show bugs nominated for triage or in the Core::Performance component without the performance impact project flag set. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* Bugs that look like tasks that were filed by members of the Performance team will generally need to be moved to the Core::Performance Engineering component.<br />
#* For defects: Determine if the bug is reproducible and actionable. If not, add a needinfo for the reporter asking for more information, set the performance impact project flag to pending-needinfo, and then move onto the next bug. We have a [[#New bug|template]] that you can modify as needed.<br />
#* For all bugs (including enhancements):<br />
#** Set the [[#How do I determine the performance impact project flag?|performance impact project flag]].<br />
#** Add the appropriate [[#How do I determine the performance keywords?|performance keywords]].<br />
#** Move the bug to the correct [[#How do I determine the correct Bugzilla component?|Bugzilla component]].<br />
# Open the [[#Performance triage (pending-needinfo)|second triage query]] to show bugs that are waiting further information to determine the performance impact. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* If the performance impact project flag was set to pending-needinfo less than 2 weeks ago, move onto the next bug.<br />
#* If the performance impact project flag was set to pending-needinfo more than 2 weeks ago but less than 2 months ago, consider adding a needinfo for either: another reporter of the issue, someone with access to the appropriate platform(s) to attempt to reproduce the issue, or a relevant subject matter expert.<br />
#* If the performance impact project flag was set to pending-needinfo more than 2 months ago, close the bug as inactive. You can modify the [[#No response from reporter|inactive bug template]] as needed.<br />
# If time permits, open the [[#Recently opened bugs with performance keywords in the summary|third triage query]] to show recently opened bugs with performance related keywords in the summary. If any of these look like performance bugs, they can either be triaged the same way as bugs in the initial query or they can be [[#Bugzilla|nominated for triage]] in a subsequent meeting.<br />
<br />
== What if things don't go as expected? ==<br />
Don't panic! The triage process is not expected to be perfect, and can improve with your feedback. Maybe the result of the triage calculator doesn't feel right, or you find a scenario that's not covered in these guidelines. In this case we recommend that you bring it up in {{matrix|perf-triage}}, or consider scheduling a short meeting with some triage leads (you can see some recent leads in the [[#Triage rotation|triage rotation]]). If in doubt, leave a comment on the bug with your thoughts and move on. There's a chance someone will respond, but if not the next performance triage sheriffs may have some other ideas.<br />
<br />
== How do I determine the performance impact project flag? ==<br />
The [[../Bugzilla#Project Flag|performance impact project flag]] is used to indicate a bug’s relationship to the performance of our products. It can be applied to all bugs, and not only defects. The [[#Triage calculator|triage calculator]] should be used to help determine the most appropriate value for this flag. In addition to setting the performance impact project flag, make sure to use the “Copy Bugzilla Comment” button and paste this as a comment on the bug.<br />
<br />
If you do not have enough information to set the performance impact project flag, open a needinfo request against an appropriate individual (such as a reporter), and set the performance impact project flag to pending-needinfo.<br />
<br />
For more information about what this flag, and it's settings mean see this [https://blog.mozilla.org/performance/2022/11/07/understanding-performance-impact/ blog post].<br />
<br />
== How do I determine the performance keywords? ==<br />
There are several [[../Bugzilla#Keywords|performance related keywords]], which can be helpful to understand how our performance issues are distributed, or whenever there’s a concerted effort to improve a particular aspect of our products. The [[#Triage calculator|triage calculator]] may recommend keywords to set, and by typing “perf:” in the keywords field in Bugzilla, you will see the available options. Select all that apply to the bug.<br />
<br />
== How do I determine the correct Bugzilla component? ==<br />
Ideally we would only have bugs in the Core::Performance component that are the responsibility of the engineers in the performance team. For performance bugs to have the best chance of being fixed, it's important to assign them to the correct component. In some cases the correct component will be obvious from the bug summary, description, or steps to reproduce. In other cases, you may need to do a bit more work to identify the component. For example, if there's a profile associated with the bug, you could see where the majority of time is being spent using the category annotations.<br />
<br />
== How do I read a performance profile? ==<br />
It's useful to be able to understand a profile generated by the [https://profiler.firefox.com/ Firefox Profiler], and hopefully someone in the triage meeting will be able to help. If you find an interesting profile, or just want to understand how to use them to analyse a performance problem, we encourage you to post a link to the profile (or bug) in [https://chat.mozilla.org/#/room/#joy-of-profiling:mozilla.org #joy-of-profiling] where someone will be happy to help. The profile may even be analysed during one of the regular "Joy of Profiling" open sessions that can be found on the [https://calendar.google.com/calendar/embed?src=c_cbjhkf8gu6anajlklhuo04hpko%40group.calendar.google.com&ctz=Europe%2FLondon Performance Office Hours calendar].<br />
<br />
= Triage calculator =<br />
The [https://mozilla.github.io/perf-triage/calculator.html Performance Impact Calculator] was developed to assist in identifying and applying the [[../Bugzilla#Project Flag|performance impact project flag]] and [[../Bugzilla#Keywords|performance keywords]] consistently. If you have feedback or would like to suggest changes to this tool, please share these in the [https://chat.mozilla.org/#/room/#perf-triage:mozilla.org #perf-triage Matrix channel].<br />
<br />
= Triage rotation =<br />
The sheriffs are allocated on a weekly basis, which is published [https://mozilla.github.io/perf-triage/ here]. The rotation is generated by [https://github.com/mozilla/perf-triage/blob/main/rotation.py this script].<br />
<br />
= Templates =<br />
== New bug ==<br />
This template is included in the description for new bugs opened in the Core::Performance component. If a bug is opened in another component and then moved to Core::Performance, this template can be used as needed to request additional information from the reporter.<br />
<br />
<pre><br />
### Basic information<br />
<br />
Steps to Reproduce:<br />
<br />
<br />
Expected Results:<br />
<br />
<br />
Actual Results:<br />
<br />
<br />
---<br />
<br />
### Performance recording (profile)<br />
<br />
Profile URL:<br />
(If this report is about slow performance or high CPU usage, please capture a performance profile by following the instructions at https://profiler.firefox.com/. Then upload the profile and insert the link here.)<br />
<br />
#### System configuration:<br />
<br />
OS version:<br />
GPU model:<br />
Number of cores: <br />
Amount of memory (RAM): <br />
<br />
### More information<br />
<br />
Please consider attaching the following information after filing this bug, if relevant:<br />
<br />
- Screenshot / screen recording<br />
- Anonymized about:memory dump, for issues with memory usage<br />
- Troubleshooting information: Go to about:support, click "Copy text to clipboard", paste it to a file, save it, and attach the file here.<br />
<br />
---<br />
<br />
Thanks so much for your help.<br />
</pre><br />
<br />
== Moved to Core::Performance ==<br />
<pre><br />
This bug was moved into the Performance component. Reporter, could you make sure the following information is on this bug?<br />
<br />
- For slowness or high CPU usage, capture a profile with https://profiler.firefox.com/, upload it, and share the link here.<br />
- For memory usage issues, capture a memory dump from about:memory and attach it to this bug.<br />
- Troubleshooting information: Go to about:support, click "Copy raw data to clipboard", paste it into a file, save it, and attach the file here.<br />
<br />
Thank you.<br />
</pre><br />
<br />
== No longer able to reproduce ==<br />
<pre>This bug doesn’t seem to happen anymore in current versions of Firefox. Please reopen or file a new bug if you see it again.</pre><br />
<br />
== No response from reporter ==<br />
<pre>With no answer from the reporter, we don’t have enough data to reproduce and/or fix this issue. Please reopen or file a new bug with more information if you see it again.</pre><br />
<br />
== Expected behaviour ==<br />
<pre>This is expected behavior. Please reopen or file a new bug if you think otherwise.</pre><br />
<br />
== Website issue ==<br />
<pre>According to the investigation, this is a website issue. Please reopen or file a new bug if you think otherwise.</pre></div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Triage&diff=1249146Performance/Triage2023-12-04T13:26:16Z<p>Davehunt: Updated guidelines for new pending-needinfo value of performance impact project flag.</p>
<hr />
<div>{{DISPLAYTITLE:Performance Triage}}<br />
<br />
{{message/box|If you have any feedback/suggestions/questions regarding the performance triage process, you can share them in {{matrix|perf-triage}}, or reach out to {{people|davehunt|Dave Hunt}} or {{people|frankd|Frank Doty}}.}}<br />
<br />
= Nomination =<br />
== Bugzilla ==<br />
To (re)nominate a bug for triage, set the [[../Bugzilla#Project Flag|Performance Impact flag]] in Bugzilla to <code>?</code><br />
<br />
This can be found by clicking '''Show Advanced Fields''' followed by '''Set bug flags''' when entering a new bug:<br />
<br />
[[File:Bugzilla performance nomination on new bug form.png|none]]<br />
<br />
Or by expanding the '''Tracking''' section when editing an existing bug:<br />
<br />
[[File:Screenshot 2022-02-24 at 19.53.54.png|none]]<br />
<br />
== GitHub ==<br />
To nominate a bug for triage, add the '''Performance''' label to an issue. This can be done by filing a new issue with the "Performance issue" template:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.22.58.png|none|Screenshot of filing a "Performance issue" template on GitHub]]<br />
<br />
Or by opening an existing issue on GitHub and selecting the label from the right-hand bar:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.32.09.png|Screenshot of adding a performance label on GitHub]]<br />
<br />
Currently, only the following GitHub repositories are supported:<br />
* [https://github.com/mozilla-mobile/fenix/ fenix]<br />
* [https://github.com/mozilla-mobile/android-components/ android-components]<br />
* [https://github.com/mozilla-mobile/focus-android/ focus-android]<br />
<br />
= Queries =<br />
== Performance triage ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "?",<br />
"f3": "CP",<br />
"f4": "OP",<br />
"f5": "product",<br />
"o5": "equals",<br />
"v5": "Core",<br />
"f6": "component",<br />
"o6": "equals",<br />
"v6": "Performance",<br />
"f7": "keywords",<br />
"o7": "notsubstring",<br />
"v7": "meta",<br />
"f8": "cf_performance_impact",<br />
"o8": "isempty",<br />
"f9": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Performance triage (pending-needinfo) ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage (pending-needinfo)",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "pending-needinfo",<br />
"f3": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Recently opened bugs with performance keywords in the summary ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Recently opened bugs with performance keywords in the summary",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"chfield": "[Bug creation]",<br />
"chfieldfrom": "-2w",<br />
"keywords": "crash, intermittent-failure, meta",<br />
"keywords_type": "nowords",<br />
"short_desc": "perf \"load time\" responsiveness jank fast slow memory battery heat GPU CPU SLA",<br />
"short_desc_type": "anywordssubstr",<br />
"f1": "OP",<br />
"f2": "product",<br />
"o2": "equals",<br />
"v2": "Core",<br />
"f3": "product",<br />
"o3": "equals",<br />
"v3": "Fenix",<br />
"f4": "product",<br />
"o4": "equals",<br />
"v4": "Firefox",<br />
"f5": "product",<br />
"o5": "equals",<br />
"v5": "Firefox for iOS",<br />
"f6": "product",<br />
"o6": "equals",<br />
"v6": "Focus",<br />
"f7": "product",<br />
"o7": "equals",<br />
"v7": "Focus-iOS",<br />
"f8": "product",<br />
"o8": "equals",<br />
"v8": "GeckoView",<br />
"f9": "CP",<br />
"f10": "component",<br />
"o10": "notequals",<br />
"v10": "Performance",<br />
"f11": "cf_performance_impact",<br />
"o11": "isempty",<br />
"j1": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
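These saved-search parameters map onto the Bugzilla REST API, which accepts the same field/operator/value (<code>f1</code>/<code>o1</code>/<code>v1</code>) triples as the advanced search form. The sketch below builds a request URL for the nomination query using only the standard library; the exact endpoint behaviour is assumed from the public Bugzilla REST documentation, and no request is actually sent:<br />
<br />
```python
from urllib.parse import urlencode

# Public BMO REST endpoint (assumed from the Bugzilla REST documentation).
BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"

def triage_query_url(params):
    """Turn a saved-search parameter dict into a REST request URL."""
    return BUGZILLA + "?" + urlencode(params)

# Simplified version of the first triage query: bugs nominated with "?".
nominated = {
    "resolution": "---",
    "f1": "cf_performance_impact",
    "o1": "equals",
    "v1": "?",
    "include_fields": "id,summary,status",
}
url = triage_query_url(nominated)
```
Fetching this URL (for example with <code>urllib.request</code>) would return the matching bugs as JSON, which can be handy for scripting ad-hoc triage reports.<br />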
<br />
= Triage process =<br />
== Introduction ==<br />
The goal of performance triage is to identify the extent to which bugs impact the performance of our products, and to move these bugs towards an actionable state. The goal is not to diagnose or fix bugs during triage. We triage bugs that have been nominated for triage and bugs in the Core::Performance component that do not have the performance impact project flag set.<br />
<br />
During triage we may do any/all of the following:<br />
* Request further information from the reporter (such as a profile)<br />
* Set the performance impact project flag<br />
* Add performance keywords<br />
* Move the bug to a more appropriate component<br />
<br />
== Who is responsible for triage? ==<br />
Everyone is welcome to take part in triage. By default, everyone on the performance team is enrolled in [[#Triage rotation|triage rotation]], but we also have participants from outside the team.<br />
<br />
== How do I schedule a triage meeting? ==<br />
If you are on triage duty, you will receive a reminder invitation to schedule the triage meeting on the [https://calendar.google.com/calendar/u/0?cid=bW96aWxsYS5jb21fOWJrNWYycnFkZXVpcDM4amJlbGQ4NGtwcWNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ shared performance calendar], with the nominated sheriffs invited at a time that works for them. The responsibility for scheduling the meeting falls to the lead sheriff. Once the triage meeting has been scheduled, remove the reminder event from the calendar to avoid confusion. Using the shared calendar increases the visibility of performance triage and allows other members of the team to contribute or observe the process.<br />
<br />
== What if a sheriff is unavailable? ==<br />
The rotation script is not perfect, and doesn’t know when people are on PTO or otherwise unavailable. If the lead sheriff is available, it is their responsibility to either schedule the triage with the remaining available sheriff or to identify a suitable substitute for the unavailable sheriff(s). If the lead sheriff is unavailable, this responsibility passes to the remaining available sheriffs.<br />
<br />
== How do I run a triage meeting? ==<br />
The following describes the triage process to follow during the meeting:<br />
<br />
# Ask if others would prefer you to share your screen. This can be especially helpful for those new to triage.<br />
# Open the [[#Performance triage|first triage query]] to show bugs nominated for triage or in the Core::Performance component without the performance impact project flag set. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* Bugs that look like tasks that were filed by members of the Performance team will generally need to be moved to the Core::Performance Engineering component.<br />
#* For defects: Determine if the bug is reproducible and actionable. If not, add a needinfo for the reporter asking for more information, set the performance impact project flag to pending-needinfo, and then move on to the next bug. We have a [[#New bug|template]] that you can modify as needed.<br />
#* For all bugs (including enhancements):<br />
#** Set the [[#How do I determine the performance impact project flag?|performance impact project flag]].<br />
#** Add the appropriate [[#How do I determine the performance keywords?|performance keywords]].<br />
#** Move the bug to the correct [[#How do I determine the correct Bugzilla component?|Bugzilla component]].<br />
# Open the [[#Performance triage (pending-needinfo)|second triage query]] to show bugs that are awaiting further information to determine the performance impact. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* If the performance impact project flag was set to pending-needinfo less than 2 weeks ago, move on to the next bug.<br />
#* If the performance impact project flag was set to pending-needinfo more than 2 weeks ago but less than 2 months ago, consider adding a needinfo for either: another reporter of the issue, someone with access to the appropriate platform(s) to attempt to reproduce the issue, or a relevant subject matter expert.<br />
#* If the performance impact project flag was set to pending-needinfo more than 2 months ago, close the bug as inactive. You can modify the [[#No response from reporter|inactive bug template]] as needed.<br />
# If time permits, open the [[#Recently opened bugs with performance keywords in the summary|third triage query]] to show recently opened bugs with performance related keywords in the summary. If any of these look like performance bugs, they can either be triaged the same way as bugs in the initial query or they can be [[#Bugzilla|nominated for triage]] in a subsequent meeting.<br />
<br />
== What if things don't go as expected? ==<br />
Don't panic! The triage process is not expected to be perfect, and can improve with your feedback. Maybe the result of the triage calculator doesn't feel right, or you find a scenario that's not covered in these guidelines. In these cases, we recommend that you bring it up in {{matrix|perf-triage}}, or consider scheduling a short meeting with some triage leads (you can see some recent leads in the [[#Triage rotation|triage rotation]]). If in doubt, leave a comment on the bug with your thoughts and move on. There's a chance someone will respond, but if not, the next performance triage sheriffs may have other ideas.<br />
<br />
== How do I determine the performance impact project flag? ==<br />
The [[../Bugzilla#Project Flag|performance impact project flag]] is used to indicate a bug’s relationship to the performance of our products. It can be applied to all bugs, and not only defects. The [[#Triage calculator|triage calculator]] should be used to help determine the most appropriate value for this flag. In addition to setting the performance impact project flag, make sure to use the “Copy Bugzilla Comment” button and paste this as a comment on the bug.<br />
<br />
If you do not have enough information to set the performance impact project flag, open a needinfo request against an appropriate individual (such as a reporter), and set the performance impact project flag to pending-needinfo.<br />
<br />
For more information about what this flag and its settings mean, see this [https://blog.mozilla.org/performance/2022/11/07/understanding-performance-impact/ blog post].<br />
<br />
== How do I determine the performance keywords? ==<br />
There are several [[../Bugzilla#Keywords|performance related keywords]], which can be helpful for understanding how our performance issues are distributed, or when there’s a concerted effort to improve a particular aspect of our products. The [[#Triage calculator|triage calculator]] may recommend keywords to set, and by typing “perf:” in the keywords field in Bugzilla, you will see the available options. Select all that apply to the bug.<br />
<br />
== How do I determine the correct Bugzilla component? ==<br />
Ideally we would only have bugs in the Core::Performance component that are the responsibility of the engineers in the performance team. For performance bugs to have the best chance of being fixed, it's important to assign them to the correct component. In some cases the correct component will be obvious from the bug summary, description, or steps to reproduce. In other cases, you may need to do a bit more work to identify the component. For example, if there's a profile associated with the bug, you could see where the majority of time is being spent using the category annotations.<br />
<br />
== How do I read a performance profile? ==<br />
It's useful to be able to understand a profile generated by the [https://profiler.firefox.com/ Firefox Profiler], and hopefully someone in the triage meeting will be able to help. If you find an interesting profile, or just want to understand how to use them to analyse a performance problem, we encourage you to post a link to the profile (or bug) in [https://chat.mozilla.org/#/room/#joy-of-profiling:mozilla.org #joy-of-profiling] where someone will be happy to help. The profile may even be analysed during one of the regular "Joy of Profiling" open sessions that can be found on the [https://calendar.google.com/calendar/embed?src=c_cbjhkf8gu6anajlklhuo04hpko%40group.calendar.google.com&ctz=Europe%2FLondon Performance Office Hours calendar].<br />
<br />
= Triage calculator =<br />
The [https://mozilla.github.io/perf-triage/calculator.html Performance Impact Calculator] was developed to assist in identifying and applying the [[../Bugzilla#Project Flag|performance impact project flag]] and [[../Bugzilla#Keywords|performance keywords]] consistently. If you have feedback or would like to suggest changes to this tool, please share these in the [https://chat.mozilla.org/#/room/#perf-triage:mozilla.org #perf-triage Matrix channel].<br />
<br />
= Triage rotation =<br />
Sheriffs are allocated on a weekly basis; the schedule is published [https://mozilla.github.io/perf-triage/ here], and the rotation is generated by [https://github.com/mozilla/perf-triage/blob/main/rotation.py this script].<br />
<br />
= Templates =<br />
== New bug ==<br />
This template is included in the description for new bugs opened in the Core::Performance component. If a bug is opened in another component and then moved to Core::Performance, this template can be used as needed to request additional information from the reporter.<br />
<br />
<pre><br />
### Basic information<br />
<br />
Steps to Reproduce:<br />
<br />
<br />
Expected Results:<br />
<br />
<br />
Actual Results:<br />
<br />
<br />
---<br />
<br />
### Performance recording (profile)<br />
<br />
Profile URL:<br />
(If this report is about slow performance or high CPU usage, please capture a performance profile by following the instructions at https://profiler.firefox.com/. Then upload the profile and insert the link here.)<br />
<br />
#### System configuration:<br />
<br />
OS version:<br />
GPU model:<br />
Number of cores: <br />
Amount of memory (RAM): <br />
<br />
### More information<br />
<br />
Please consider attaching the following information after filing this bug, if relevant:<br />
<br />
- Screenshot / screen recording<br />
- Anonymized about:memory dump, for issues with memory usage<br />
- Troubleshooting information: Go to about:support, click "Copy text to clipboard", paste it into a file, save it, and attach the file here.<br />
<br />
---<br />
<br />
Thanks so much for your help.<br />
</pre><br />
<br />
== Moved to Core::Performance ==<br />
<pre><br />
This bug was moved into the Performance component. Reporter, could you make sure the following information is on this bug?<br />
<br />
- For slowness or high CPU usage, capture a profile with https://profiler.firefox.com/, upload it, and share the link here.<br />
- For memory usage issues, capture a memory dump from about:memory and attach it to this bug.<br />
- Troubleshooting information: Go to about:support, click "Copy raw data to clipboard", paste it into a file, save it, and attach the file here.<br />
<br />
Thank you.<br />
</pre><br />
<br />
== No longer able to reproduce ==<br />
<pre>This bug doesn’t seem to happen anymore in current versions of Firefox. Please reopen or file a new bug if you see it again.</pre><br />
<br />
== No response from reporter ==<br />
<pre>With no answer from the reporter, we don’t have enough data to reproduce and/or fix this issue. Please reopen or file a new bug with more information if you see it again.</pre><br />
<br />
== Expected behaviour ==<br />
<pre>This is expected behavior. Please reopen or file a new bug if you think otherwise.</pre><br />
<br />
== Website issue ==<br />
<pre>According to the investigation, this is a website issue. Please reopen or file a new bug if you think otherwise.</pre></div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Triage&diff=1249145Performance/Triage2023-12-04T13:14:39Z<p>Davehunt: /* Performance triage */ update query to not exclude needinfos</p>
<hr />
<div>{{DISPLAYTITLE:Performance Triage}}<br />
<br />
{{message/box|If you have any feedback/suggestions/questions regarding the performance triage process, you can share them in {{matrix|perf-triage}}, or reach out to {{people|davehunt|Dave Hunt}} or {{people|frankd|Frank Doty}}.}}<br />
<br />
= Nomination =<br />
== Bugzilla ==<br />
To (re)nominate a bug for triage, set the [[../Bugzilla#Project Flag|Performance Impact flag]] in Bugzilla to <code>?</code><br />
<br />
This can be found by clicking '''Show Advanced Fields''' followed by '''Set bug flags''' when entering a new bug:<br />
<br />
[[File:Bugzilla performance nomination on new bug form.png|none]]<br />
<br />
Or by expanding the '''Tracking''' section when editing an existing bug:<br />
<br />
[[File:Screenshot 2022-02-24 at 19.53.54.png|none]]<br />
<br />
== GitHub ==<br />
To nominate a bug for triage, add the '''Performance''' label to an issue. This can be done by filing a new issue with the "Performance issue" template:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.22.58.png|none|Screenshot of filing a "Performance issue" template on GitHub]]<br />
<br />
Or by opening an existing issue on GitHub and selecting the label from the right-hand bar:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.32.09.png|Screenshot of adding a performance label on GitHub]]<br />
<br />
Currently, only the following GitHub repositories are supported:<br />
* [https://github.com/mozilla-mobile/fenix/ fenix]<br />
* [https://github.com/mozilla-mobile/android-components/ android-components]<br />
* [https://github.com/mozilla-mobile/focus-android/ focus-android]<br />
<br />
= Queries =<br />
== Performance triage ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "?",<br />
"f3": "CP",<br />
"f4": "OP",<br />
"f5": "product",<br />
"o5": "equals",<br />
"v5": "Core",<br />
"f6": "component",<br />
"o6": "equals",<br />
"v6": "Performance",<br />
"f7": "keywords",<br />
"o7": "notsubstring",<br />
"v7": "meta",<br />
"f8": "cf_performance_impact",<br />
"o8": "isempty",<br />
"f9": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Performance triage (pending-needinfo) ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage (pending-needinfo)",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "pending-needinfo",<br />
"f3": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Recently opened bugs with performance keywords in the summary ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Recently opened bugs with performance keywords in the summary",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"chfield": "[Bug creation]",<br />
"chfieldfrom": "-2w",<br />
"keywords": "crash, intermittent-failure, meta",<br />
"keywords_type": "nowords",<br />
"short_desc": "perf \"load time\" responsiveness jank fast slow memory battery heat GPU CPU SLA",<br />
"short_desc_type": "anywordssubstr",<br />
"f1": "OP",<br />
"f2": "product",<br />
"o2": "equals",<br />
"v2": "Core",<br />
"f3": "product",<br />
"o3": "equals",<br />
"v3": "Fenix",<br />
"f4": "product",<br />
"o4": "equals",<br />
"v4": "Firefox",<br />
"f5": "product",<br />
"o5": "equals",<br />
"v5": "Firefox for iOS",<br />
"f6": "product",<br />
"o6": "equals",<br />
"v6": "Focus",<br />
"f7": "product",<br />
"o7": "equals",<br />
"v7": "Focus-iOS",<br />
"f8": "product",<br />
"o8": "equals",<br />
"v8": "GeckoView",<br />
"f9": "CP",<br />
"f10": "component",<br />
"o10": "notequals",<br />
"v10": "Performance",<br />
"f11": "cf_performance_impact",<br />
"o11": "isempty",<br />
"j1": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
= Triage process =<br />
== Introduction ==<br />
The goal of performance triage is to identify the extent to which bugs impact the performance of our products, and to move these bugs towards an actionable state. The goal is not to diagnose or fix bugs during triage. We triage bugs that have been nominated for triage and bugs in the Core::Performance component that do not have the performance impact project flag set.<br />
<br />
During triage we may do any/all of the following:<br />
* Request further information from the reporter (such as a profile)<br />
* Set the performance impact project flag<br />
* Add performance keywords<br />
* Move the bug to a more appropriate component<br />
<br />
== Who is responsible for triage? ==<br />
Everyone is welcome to take part in triage. By default, everyone on the performance team is enrolled in [[#Triage rotation|triage rotation]], but we also have participants from outside the team.<br />
<br />
== How do I schedule a triage meeting? ==<br />
If you are on triage duty, you will receive a reminder invitation to schedule the triage meeting on the [https://calendar.google.com/calendar/u/0?cid=bW96aWxsYS5jb21fOWJrNWYycnFkZXVpcDM4amJlbGQ4NGtwcWNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ shared performance calendar], with the nominated sheriffs invited at a time that works for them. The responsibility for scheduling the meeting falls to the lead sheriff. Once the triage meeting has been scheduled, remove the reminder event from the calendar to avoid confusion. Using the shared calendar increases the visibility of performance triage and allows other members of the team to contribute or observe the process.<br />
<br />
== What if a sheriff is unavailable? ==<br />
The rotation script is not perfect, and doesn’t know when people are on PTO or otherwise unavailable. If the lead sheriff is available, it is their responsibility to either schedule the triage with the remaining available sheriff or to identify a suitable substitute for the unavailable sheriff(s). If the lead sheriff is unavailable, this responsibility passes to the remaining available sheriffs.<br />
<br />
== How do I run a triage meeting? ==<br />
The following describes the triage process to follow during the meeting:<br />
<br />
# Ask if others would prefer you to share your screen. This can be especially helpful for those new to triage.<br />
# Open the [[#Performance triage|first triage query]] to show bugs nominated for triage or in the Core::Performance component without the performance impact project flag set. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* Bugs that look like tasks that were filed by members of the Performance team will generally need to be moved to the Core::Performance Engineering component.<br />
#* For defects: Determine if the bug is reproducible and actionable. If not, add a needinfo for the reporter asking for more information and move on to the next bug. We have a [[#New bug|template]] that you can modify as needed.<br />
#* For all bugs (including enhancements):<br />
#** Set the [[#How do I determine the performance impact project flag?|performance impact project flag]].<br />
#** Add the appropriate [[#How do I determine the performance keywords?|performance keywords]].<br />
#** Move the bug to the correct [[#How do I determine the correct Bugzilla component?|Bugzilla component]].<br />
# Open the [[#Performance triage (pending-needinfo)|second triage query]] to show bugs that have open needinfo requests. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* If the needinfo was set less than 2 weeks ago, move on to the next bug.<br />
#* If the needinfo was set more than 2 weeks ago but less than 2 months ago, consider adding a needinfo for either: another reporter of the issue, someone with access to the appropriate platform(s) to attempt to reproduce the issue, or a relevant subject matter expert.<br />
#* If the open needinfo was set more than 2 months ago, close the bug as inactive. You can modify the [[#No response from reporter|inactive bug template]] as needed.<br />
# If time permits, open the [[#Recently opened bugs with performance keywords in the summary|third triage query]] to show recently opened bugs with performance related keywords in the summary. If any of these look like performance bugs, they can either be triaged the same way as bugs in the initial query or they can be [[#Bugzilla|nominated for triage]] in a subsequent meeting.<br />
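The aging thresholds in step 3 above can be summarised as a simple decision function. This sketch is purely illustrative; the function and action names are made up for this example:<br />
<br />
```python
from datetime import date, timedelta

def needinfo_action(flag_set_on, today):
    """Map the age of an open needinfo to the triage action from step 3."""
    age = today - flag_set_on
    if age < timedelta(weeks=2):
        return "wait"      # too recent: move on to the next bug
    if age < timedelta(days=60):  # roughly two months
        return "redirect"  # needinfo another reporter or a subject matter expert
    return "close"         # inactive: close using the inactive bug template
```
For example, a needinfo set a week ago is left alone, one set a month ago is redirected to another contact, and one set three months ago is closed as inactive.<br />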
<br />
== What if things don't go as expected? ==<br />
Don't panic! The triage process is not expected to be perfect, and can improve with your feedback. Maybe the result of the triage calculator doesn't feel right, or you find a scenario that's not covered in these guidelines. In these cases, we recommend that you bring it up in {{matrix|perf-triage}}, or consider scheduling a short meeting with some triage leads (you can see some recent leads in the [[#Triage rotation|triage rotation]]). If in doubt, leave a comment on the bug with your thoughts and move on. There's a chance someone will respond, but if not, the next performance triage sheriffs may have other ideas.<br />
<br />
== How do I determine the performance impact project flag? ==<br />
The [[../Bugzilla#Project Flag|performance impact project flag]] is used to indicate a bug’s relationship to the performance of our products. It can be applied to all bugs, and not only defects. The [[#Triage calculator|triage calculator]] should be used to help determine the most appropriate value for this flag. In addition to setting the performance impact project flag, make sure to use the “Copy Bugzilla Comment” button and paste this as a comment on the bug.<br />
<br />
For more information about what this flag and its settings mean, see this [https://blog.mozilla.org/performance/2022/11/07/understanding-performance-impact/ blog post].<br />
<br />
== How do I determine the performance keywords? ==<br />
There are several [[../Bugzilla#Keywords|performance related keywords]], which can be helpful for understanding how our performance issues are distributed, or when there’s a concerted effort to improve a particular aspect of our products. The [[#Triage calculator|triage calculator]] may recommend keywords to set, and by typing “perf:” in the keywords field in Bugzilla, you will see the available options. Select all that apply to the bug.<br />
<br />
== How do I determine the correct Bugzilla component? ==<br />
Ideally we would only have bugs in the Core::Performance component that are the responsibility of the engineers in the performance team. For performance bugs to have the best chance of being fixed, it's important to assign them to the correct component. In some cases the correct component will be obvious from the bug summary, description, or steps to reproduce. In other cases, you may need to do a bit more work to identify the component. For example, if there's a profile associated with the bug, you could see where the majority of time is being spent using the category annotations.<br />
<br />
== How do I read a performance profile? ==<br />
It's useful to be able to understand a profile generated by the [https://profiler.firefox.com/ Firefox Profiler], and hopefully someone in the triage meeting will be able to help. If you find an interesting profile, or just want to understand how to use them to analyse a performance problem, we encourage you to post a link to the profile (or bug) in [https://chat.mozilla.org/#/room/#joy-of-profiling:mozilla.org #joy-of-profiling] where someone will be happy to help. The profile may even be analysed during one of the regular "Joy of Profiling" open sessions that can be found on the [https://calendar.google.com/calendar/embed?src=c_cbjhkf8gu6anajlklhuo04hpko%40group.calendar.google.com&ctz=Europe%2FLondon Performance Office Hours calendar].<br />
<br />
= Triage calculator =<br />
The [https://mozilla.github.io/perf-triage/calculator.html Performance Impact Calculator] was developed to assist in identifying and applying the [[../Bugzilla#Project Flag|performance impact project flag]] and [[../Bugzilla#Keywords|performance keywords]] consistently. If you have feedback or would like to suggest changes to this tool, please share these in the [https://chat.mozilla.org/#/room/#perf-triage:mozilla.org #perf-triage Matrix channel].<br />
<br />
= Triage rotation =<br />
Sheriffs are allocated on a weekly basis; the schedule is published [https://mozilla.github.io/perf-triage/ here], and the rotation is generated by [https://github.com/mozilla/perf-triage/blob/main/rotation.py this script].<br />
<br />
= Templates =<br />
== New bug ==<br />
This template is included in the description for new bugs opened in the Core::Performance component. If a bug is opened in another component and then moved to Core::Performance, this template can be used as needed to request additional information from the reporter.<br />
<br />
<pre><br />
### Basic information<br />
<br />
Steps to Reproduce:<br />
<br />
<br />
Expected Results:<br />
<br />
<br />
Actual Results:<br />
<br />
<br />
---<br />
<br />
### Performance recording (profile)<br />
<br />
Profile URL:<br />
(If this report is about slow performance or high CPU usage, please capture a performance profile by following the instructions at https://profiler.firefox.com/. Then upload the profile and insert the link here.)<br />
<br />
#### System configuration:<br />
<br />
OS version:<br />
GPU model:<br />
Number of cores: <br />
Amount of memory (RAM): <br />
<br />
### More information<br />
<br />
Please consider attaching the following information after filing this bug, if relevant:<br />
<br />
- Screenshot / screen recording<br />
- Anonymized about:memory dump, for issues with memory usage<br />
- Troubleshooting information: Go to about:support, click "Copy text to clipboard", paste it into a file, save it, and attach the file here.<br />
<br />
---<br />
<br />
Thanks so much for your help.<br />
</pre><br />
<br />
== Moved to Core::Performance ==<br />
<pre><br />
This bug was moved into the Performance component. Reporter, could you make sure the following information is on this bug?<br />
<br />
- For slowness or high CPU usage, capture a profile with https://profiler.firefox.com/, upload it, and share the link here.<br />
- For memory usage issues, capture a memory dump from about:memory and attach it to this bug.<br />
- Troubleshooting information: Go to about:support, click "Copy raw data to clipboard", paste it into a file, save it, and attach the file here.<br />
<br />
Thank you.<br />
</pre><br />
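When applying one of these templates to many bugs, it can be posted as a comment through the Bugzilla REST API (POST /rest/bug/{id}/comment). A minimal sketch, assuming a Bugzilla API key; the bug ID and key below are placeholders, and you should check the endpoint against your instance's REST documentation before relying on it:<br />

```python
# Build a REST request that posts a canned triage template as a bug comment.
import json
import urllib.request

TEMPLATE = (
    "This bug was moved into the Performance component. Reporter, could you "
    "make sure the following information is on this bug?"
)

def build_comment_request(bug_id: int, text: str, api_key: str,
                          base="https://bugzilla.mozilla.org"):
    # The REST API accepts a JSON body with a "comment" field.
    payload = json.dumps({"comment": text}).encode()
    return urllib.request.Request(
        f"{base}/rest/bug/{bug_id}/comment",
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Keep API keys out of source control; read from the environment.
            "X-BUGZILLA-API-KEY": api_key,
        },
        method="POST",
    )

# Hypothetical bug ID and key, for illustration only.
req = build_comment_request(1234567, TEMPLATE, "dummy-key")
# urllib.request.urlopen(req) would submit the comment.
```

Building the request separately from sending it keeps the payload easy to inspect (or unit test) before anything touches the live instance.<br />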
<br />
== No longer able to reproduce ==<br />
<pre>This bug doesn’t seem to happen anymore in current versions of Firefox. Please reopen or file a new bug if you see it again.</pre><br />
<br />
== No response from reporter ==<br />
<pre>With no answer from the reporter, we don’t have enough data to reproduce and/or fix this issue. Please reopen or file a new bug with more information if you see it again.</pre><br />
<br />
== Expected behaviour ==<br />
<pre>This is expected behavior. Please reopen or file a new bug if you think otherwise.</pre><br />
<br />
== Website issue ==<br />
<pre>According to the investigation, this is a website issue. Please reopen or file a new bug if you think otherwise.</pre></div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Bugzilla&diff=1249142Performance/Bugzilla2023-12-04T11:42:35Z<p>Davehunt: /* Project Flag */</p>
<hr />
<div>= Workflows =<br />
* [[../Triage|Triage]]<br />
<br />
= Queries =<br />
* [[Performance/Triage#Queries|Triage]]<br />
<br />
= Reports =<br />
* [https://bugzilla.mozilla.org/report.cgi?x_axis_field=cf_performance_impact&y_axis_field=component&query_format=report-table&resolution=---&j_top=AND&f1=cf_performance_impact&o1=isnotempty&v1=&f2=cf_performance_impact&o2=nowordssubstr&v2=-%2C%3F&format=table&action=wrap Performance Impact × Component]<br />
* [https://bugzilla.mozilla.org/report.cgi?x_axis_field=cf_performance_impact&y_axis_field=priority&query_format=report-table&resolution=---&j_top=AND&f1=cf_performance_impact&o1=isnotempty&v1=&f2=cf_performance_impact&o2=nowordssubstr&v2=-%2C%3F&format=table&action=wrap Performance Impact × Priority]<br />
* [https://bugzilla.mozilla.org/report.cgi?x_axis_field=cf_performance_impact&y_axis_field=bug_severity&query_format=report-table&resolution=---&j_top=AND&f1=cf_performance_impact&o1=isnotempty&v1=&f2=cf_performance_impact&o2=nowordssubstr&v2=-%2C%3F&format=table&action=wrap Performance Impact × Severity]<br />
* [https://mozilla.pettay.fi/performance.html Performance × Component, including deltas over 30 days]<br />
<br />
= Project Flag =<br />
We have a '''Performance Impact''' project flag used for triage nomination and prioritisation. Anyone can (re)nominate bugs for triage, but only members of the [[#Groups|perf-triage-team group]] can prioritise or mark bugs as unrelated to performance.<br />
<br />
{| class="wikitable"<br />
|-<br />
! Value !! Description<br />
|-<br />
| [https://bugzilla.mozilla.org/buglist.cgi?f1=cf_performance_impact&o1=equals&v1=%3F&resolution=---&query_format=advanced ?] || Add bug to triage queue.<br />
|-<br />
| [https://bugzilla.mozilla.org/buglist.cgi?f1=cf_performance_impact&o1=equals&v1=none&resolution=---&query_format=advanced none] || Bug has no impact on product performance.<br />
|-<br />
| [https://bugzilla.mozilla.org/buglist.cgi?f1=cf_performance_impact&o1=equals&v1=high&resolution=---&query_format=advanced high] || Bug has a high impact on the product quality and experience. These bugs should ideally be resolved within 3-6 months.<br />
|-<br />
| [https://bugzilla.mozilla.org/buglist.cgi?f1=cf_performance_impact&o1=equals&v1=medium&resolution=---&query_format=advanced medium] || Bug has a noticeable impact for a number of users. These bugs should ideally be resolved within 6-12 months.<br />
|-<br />
| [https://bugzilla.mozilla.org/buglist.cgi?f1=cf_performance_impact&o1=equals&v1=low&resolution=---&query_format=advanced low] || Bug has a noticeable performance impact, but affects a small enough group of users, or has a low enough impact, that it should be resolved in accordance with other priorities.<br />
|-<br />
| [https://bugzilla.mozilla.org/buglist.cgi?f1=cf_performance_impact&o1=equals&v1=pending-needinfo&resolution=---&query_format=advanced pending-needinfo] || Bug is waiting on further information before impact can be determined.<br />
|}<br />
<br />
= Keywords =<br />
* '''{{BugzillaKeyword|perf}}''' A bug that affects speed or responsiveness. (For memory use issues, use "memory-footprint" or "memory-leak" instead.)<br />
* '''{{BugzillaKeyword|perf-alert}}''' Associated with a performance alert.<br />
* '''{{BugzillaKeyword|perf:responsiveness}}''' The issue affects the promptness of the browser’s response to user input.<br />
* '''{{BugzillaKeyword|perf:resource-use}}''' The issue affects resource use excessively: cpu, gpu, ram, disk access, power, etc.<br />
* '''{{BugzillaKeyword|perf:pageload}}''' The issue affects the initial loading of websites.<br />
* '''{{BugzillaKeyword|perf:frontend}}''' The issue affects the browser front-end (i.e. the Firefox UI).<br />
* '''{{BugzillaKeyword|perf:animation}}''' The issue affects the smoothness of animations.<br />
* '''{{BugzillaKeyword|perf:startup}}''' The issue affects application startup.<br />
<br />
= Groups =<br />
* [https://bugzilla.mozilla.org/page.cgi?id=group_members.html&group=perf-triage-team perf-triage-team]</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/FOSDEM_2024_Call_for_Participation&diff=1248908Performance/FOSDEM 2024 Call for Participation2023-11-10T13:38:11Z<p>Davehunt: Created page with "{{DISPLAYTITLE:FOSDEM 2024 - Call for Participation}} We are excited to announce that the '''Web Performance''' devroom will be returning for '''FOSDEM 2024'''! This time the..."</p>
<hr />
<div>{{DISPLAYTITLE:FOSDEM 2024 - Call for Participation}}<br />
We are excited to announce that the '''Web Performance''' devroom will be returning for '''FOSDEM 2024'''! This time the room is a joint effort of Mozilla and Wikimedia Foundation.<br />
<br />
FOSDEM is the largest Free and Open Source Software event in Europe. It offers free software communities a place to meet, share ideas and collaborate. FOSDEM 2024 will take place on February 3-4th 2024 in Brussels, Belgium. You can learn more at https://fosdem.org/2024/.<br />
<br />
== Web Performance devroom ==<br />
The web performance devroom will focus on ongoing open standards development and FLOSS projects that support performance of the world wide web. The web performance field is driven by open standards (browser APIs, internet protocols), and relies on many FLOSS tools to monitor, analyze and improve performance.<br />
<br />
Most conferences about this topic tend to focus on best practices and standards once they’ve been finalized, while the ongoing creation of standards and FLOSS tooling development is rarely talked about. The objective of this devroom is to bring focus to the future of web performance through open standards, and open source developer tools.<br />
<br />
Here are some topic suggestions:<br />
* FLOSS tools to monitor, measure, analyze, optimize backend or frontend web performance<br />
* Open standards (eg. HTTP/3, upcoming JS/HTML/CSS standards)<br />
* Academic research on web performance<br />
* Share your web performance story: your challenges, your solutions, and what you learned<br />
* Ethical web performance: Address the balance between performance optimization and user privacy, especially in the context of increasing concerns about data collection practices by large corporations.<br />
<br />
== Submission process ==<br />
Please submit your talk proposal by '''December 1st'''. We will let you know whether your proposal has been accepted by '''December 15th'''.<br />
<br />
All talks will be in person, and can be 15 to 45 minutes in duration (including Q&A).<br />
<br />
You can submit your talk proposal at https://fosdem.org/submit, making sure to select '''Web Performance''' as the track.</div>
<hr />
<div>#REDIRECT [[Performance/Platforms]]</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Platforms&diff=1248857Performance/Platforms2023-11-06T20:43:29Z<p>Davehunt: Created page with "This page details the hardware profiles of all machines used for running performance tests in automation. All performance tests run on physical hardware. Every attempt to run..."</p>
<hr />
<div>This page details the hardware profiles of all machines used for running performance tests in automation. All performance tests run on physical hardware: every attempt to run these tests in a virtualised environment has resulted in the hypervisor getting in the way and adding too much noise to the results, no matter how much we tried to tweak it.<br />
<br />
= Support =<br />
All hardware listed below is maintained by Mozilla's operations team.<br />
<br />
= Desktop =<br />
<br />
== [https://mana.mozilla.org/wiki/display/ROPS/HPE+Moonshot HPE Moonshot] ==<br />
The HPE Moonshot System supports up to 45 servers in a single chassis. Each server resides on a cartridge.<br />
* '''Platforms''': linux64, windows10-64.<br />
* '''Location''': MDC1 (Sacramento, CA)<br />
<br />
HPE Moonshot 1500 System (45 cartridges per 4.3U)<br />
1500W Hot Plug redundant (1+1) Power Supply<br />
45 m710x ProLiant cartridges<br />
1 [https://ark.intel.com/content/www/us/en/ark/products/93741/intel-xeon-processor-e3-1585l-v5-8m-cache-3-00-ghz.html Intel E3-1585L v5] 3.0GHz CPU (4 cores)<br />
8GB DDR4 2400MHz RAM<br />
1 256GB PCIe M.2 2280 SSD<br />
1 64GB SATA M.2 2242 SSD<br />
1 Intel Iris Pro Graphics P580<br />
<br />
== [https://mana.mozilla.org/wiki/display/ROPS/Apple+Mac+Mini+R8 Apple Mac Mini R8] ==<br />
* '''Platforms''': macosx<br />
* '''Location''': MDC1 (Sacramento, CA)<br />
* '''Note''': All Mac Minis have EDID devices attached that set the resolution<br />
<br />
Model Name: Mac Mini<br />
Model Identifier: [https://everymac.com/ultimate-mac-lookup/?search_keywords=Macmini8,1 Macmini8,1]<br />
Processor Name: 6-Core Intel Core i7 (i7-8700B)<br />
Processor Speed: 3.2 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 6<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 12 MB<br />
Memory: 16 GB<br />
Disk: SSD 251 GB (251,000,193,024 bytes)<br />
<br />
== MacBook Pro ==<br />
{{warning|These will be removed via {{bug|1828660}}}}<br />
* '''Platforms''': macosx1014-64-power<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 2 laptops currently for power testing<br />
<br />
Model Name: MacBook Pro laptop<br />
Model Identifier: MacBookPro15,1<br />
Processor Name: Intel Core i7 (I7-9750H)<br />
Processor Speed: 2.6 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 6<br />
GPU:<br />
Intel UHD Graphics 630<br />
Radeon Pro 555X<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 12 MB<br />
Memory: 16 GB<br />
Disk: SSD 251 GB (251,000,193,024 bytes)<br />
<br />
== Windows 2017 Reference ==<br />
* '''Platforms''': windows10-64-ref-hw-2017<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 2 laptops<br />
<br />
Model Name: Acer Aspire 15<br />
Model Identifier: E5-575-33BM<br />
Processor Name: Intel Core i3-7100U<br />
Processor Speed: 2.4 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 2<br />
GPU: Intel HD Graphics 620<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 3 MB<br />
Memory: 4GB DDR4<br />
Disk: 1TB SATA Hard Drive (5400RPM)<br />
Resolution: 1920 x 1080<br />
<br />
== Windows 2018 Reference ==<br />
* '''Platforms''': TBA<br />
* '''Location''': TBA<br />
* '''Note''': no devices available in CI<br />
<br />
Model Name: Dell Inspiron 15 3000<br />
Model Identifier: inspiron15<br />
Processor Name: Intel Celeron N3060<br />
Processor Speed: 1.6 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 2<br />
GPU: Intel HD 400<br />
L2 Cache: 2MB<br />
Memory: 4GB DDR4<br />
Disk: 500GB <br />
Resolution: 1920 x 1080<br />
<br />
== Windows ARM64 ==<br />
* '''Platforms''': win64-aarch64<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 2 laptops<br />
<br />
Model Name: Lenovo Yoga C630<br />
Model Identifier: C630<br />
Processor Name: Qualcomm Snapdragon 850<br />
Processor Speed: 2.96 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 8<br />
GPU: Qualcomm Adreno 630<br />
Memory: 8GB<br />
Disk: 128GB SSD<br />
Resolution: 1920 x 1080<br />
<br />
= Mobile =<br />
== Samsung A51 ==<br />
* '''Platforms''': android-hw-a51<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 27 devices total (17 @4GB RAM, 4 @6GB RAM, 8 @8GB RAM) (27 for perf)<br />
<br />
Model Name: Samsung A51<br />
Model Identifier: SM_A515F<br />
Processor Name: Exynos 9611<br />
Processor Speed: 4x2.3 GHz Cortex-A73 & 4x1.7 GHz Cortex-A53<br />
Number of Processors: 2<br />
Total Number of Cores: 8 (4 each)<br />
GPU: Mali-G72 MP3<br />
Memory: 4 | 6 | 8 GB (see note above)<br />
Disk: 64 | 128 | 256 GB</div>Davehunthttps://wiki.mozilla.org/index.php?title=Template:Matrix&diff=1248796Template:Matrix2023-10-31T11:41:49Z<p>Davehunt: </p>
<hr />
<div>[https://chat.mozilla.org/#/room/#{{{1}}}:mozilla.org #{{{1}}}]{{#if:{{{2|}}}| - {{{2}}}}}</div>Davehunthttps://wiki.mozilla.org/index.php?title=Template:People&diff=1248795Template:People2023-10-31T11:40:57Z<p>Davehunt: </p>
<hr />
<div>{{#if:{{{2|}}}|[https://people.mozilla.org/p/{{{1}}} {{{2}}}]|[https://people.mozilla.org/p/{{{1}}} {{{1}}}]}}</div>Davehunthttps://wiki.mozilla.org/index.php?title=Template:People&diff=1248794Template:People2023-10-31T11:39:06Z<p>Davehunt: </p>
<hr />
<div>{{#if:{{{name|}}}|<br />
[https://people.mozilla.org/p/{{{username}}} {{{name}}}]|<br />
[https://people.mozilla.org/p/{{{username}}} {{{username}}}]<br />
}}</div>Davehunthttps://wiki.mozilla.org/index.php?title=Template:People&diff=1248793Template:People2023-10-31T11:38:44Z<p>Davehunt: </p>
<hr />
<div>{{#if:{{{name|}}}|<br />
[https://people.mozilla.org/p/{{{username}}}|{{{name}}}]|<br />
[https://people.mozilla.org/p/{{{username}}}|{{{username}}}]<br />
}}</div>Davehunthttps://wiki.mozilla.org/index.php?title=Template:People&diff=1248792Template:People2023-10-31T11:33:05Z<p>Davehunt: </p>
<hr />
<div>[https://people.mozilla.org/p/{{{username}}} {{#ifeq:{{{name|name}}}|{{{username}}}}}]</div>Davehunthttps://wiki.mozilla.org/index.php?title=Template:People&diff=1248791Template:People2023-10-31T11:32:27Z<p>Davehunt: </p>
<hr />
<div>[https://people.mozilla.org/p/{{{username}}} {{#ifeq:{{{name|}}}|{{{username}}}}}]</div>Davehunthttps://wiki.mozilla.org/index.php?title=Template:People&diff=1248790Template:People2023-10-31T11:28:50Z<p>Davehunt: </p>
<hr />
<div>[https://people.mozilla.org/p/{{{username}}} {{{name|{{{username}}}}}}]</div>Davehunthttps://wiki.mozilla.org/index.php?title=Template:People&diff=1248789Template:People2023-10-31T11:28:36Z<p>Davehunt: </p>
<hr />
<div>[https://people.mozilla.org/p/{{{username}}} {{{name|{{username}}}}}]</div>Davehunthttps://wiki.mozilla.org/index.php?title=Template:People&diff=1248788Template:People2023-10-31T11:28:10Z<p>Davehunt: Created page with "[https://people.mozilla.org/p/{{{username}}} {{{name|username}}}]"</p>
<hr />
<div>[https://people.mozilla.org/p/{{{username}}} {{{name|username}}}]</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Triage&diff=1248787Performance/Triage2023-10-31T11:16:42Z<p>Davehunt: </p>
<hr />
<div>{{DISPLAYTITLE:Performance Triage}}<br />
<br />
{{message/box|If you have any feedback/suggestions/questions regarding the performance triage process, you can share them in {{matrix|perf-triage}}, or reach out to {{people|davehunt|Dave Hunt}} or {{people|frankd|Frank Doty}}.}}<br />
<br />
= Nomination =<br />
== Bugzilla ==<br />
To (re)nominate a bug for triage, set the [[../Bugzilla#Project Flag|Performance Impact flag]] in Bugzilla to <code>?</code><br />
<br />
This can be found by clicking '''Show Advanced Fields''' followed by '''Set bug flags''' when entering a new bug:<br />
<br />
[[File:Bugzilla performance nomination on new bug form.png|none]]<br />
<br />
Or by expanding the '''Tracking''' section when editing an existing bug:<br />
<br />
[[File:Screenshot 2022-02-24 at 19.53.54.png|none]]<br />
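The flag can also be set programmatically. The sketch below builds (but does not send) a Bugzilla REST API request that sets the <code>cf_performance_impact</code> custom field behind the Performance Impact flag to <code>?</code>. The bug ID and API key are placeholders, and the exact update semantics should be checked against the Bugzilla REST documentation before relying on this:<br />

```python
import json
import urllib.request

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"

def build_nomination_request(bug_id: int, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a PUT that sets the Performance Impact flag to "?".

    `cf_performance_impact` is the custom field behind the flag; the API
    key is a placeholder you would generate in Bugzilla preferences.
    """
    payload = json.dumps({"cf_performance_impact": "?"}).encode()
    return urllib.request.Request(
        f"{BUGZILLA}/{bug_id}",
        data=payload,
        method="PUT",
        headers={"Content-Type": "application/json", "X-BUGZILLA-API-KEY": api_key},
    )

req = build_nomination_request(1234567, "YOUR-API-KEY")
print(req.get_method(), req.full_url)
```

Sending the request (for example with <code>urllib.request.urlopen(req)</code>) would perform the actual nomination; the sketch stops short of that so it can be run safely.<br />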
<br />
== GitHub ==<br />
To nominate a bug for triage, add the '''Performance''' label to an issue. This can be done by filing a new issue with the "Performance issue" template:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.22.58.png|none|Screenshot of filing a "Performance issue" template on GitHub]]<br />
<br />
Or by opening an existing issue on GitHub and selecting the label from the right-hand bar:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.32.09.png|Screenshot of adding a performance label on GitHub]]<br />
<br />
Currently, only the following GitHub repositories are supported:<br />
* [https://github.com/mozilla-mobile/fenix/ fenix]<br />
* [https://github.com/mozilla-mobile/android-components/ android-components]<br />
* [https://github.com/mozilla-mobile/focus-android/ focus-android]<br />
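Labels can also be applied programmatically. The sketch below builds (but does not send) the GitHub REST API request that adds the '''Performance''' label to an issue; the repository, issue number, and token are placeholders:<br />

```python
import json
import urllib.request

def build_label_request(repo: str, issue: int, token: str) -> urllib.request.Request:
    """Build (but do not send) the POST that adds the Performance label.

    Uses GitHub's "add labels to an issue" endpoint; `token` is a
    placeholder personal access token with access to the repository.
    """
    url = f"https://api.github.com/repos/{repo}/issues/{issue}/labels"
    payload = json.dumps({"labels": ["Performance"]}).encode()
    return urllib.request.Request(
        url,
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

req = build_label_request("mozilla-mobile/fenix", 1, "YOUR-TOKEN")
print(req.full_url)
```

With the GitHub CLI the same effect is roughly <code>gh issue edit &lt;number&gt; --repo mozilla-mobile/fenix --add-label Performance</code>.<br />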
<br />
= Queries =<br />
== Performance triage ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "?",<br />
"f3": "flagtypes.name",<br />
"o3": "notsubstring",<br />
"v3": "needinfo",<br />
"f4": "CP",<br />
"f5": "OP",<br />
"f6": "product",<br />
"o6": "equals",<br />
"v6": "Core",<br />
"f7": "component",<br />
"o7": "equals",<br />
"v7": "Performance",<br />
"f8": "keywords",<br />
"o8": "notsubstring",<br />
"v8": "meta",<br />
"f9": "cf_performance_impact",<br />
"o9": "isempty",<br />
"f10": "flagtypes.name",<br />
"o10": "notsubstring",<br />
"v10": "needinfo",<br />
"f11": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Performance triage (pending needinfo) ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage (pending needinfo)",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "?",<br />
"f3": "flagtypes.name",<br />
"o3": "substring",<br />
"v3": "needinfo",<br />
"f4": "CP",<br />
"f5": "OP",<br />
"f6": "product",<br />
"o6": "equals",<br />
"v6": "Core",<br />
"f7": "component",<br />
"o7": "equals",<br />
"v7": "Performance",<br />
"f8": "keywords",<br />
"o8": "notsubstring",<br />
"v8": "meta",<br />
"f9": "cf_performance_impact",<br />
"o9": "isempty",<br />
"f10": "flagtypes.name",<br />
"o10": "substring",<br />
"v10": "needinfo",<br />
"f11": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Recently opened bugs with performance keywords in the summary ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Recently opened bugs with performance keywords in the summary",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"chfield": "[Bug creation]",<br />
"chfieldfrom": "-2w",<br />
"keywords": "crash, intermittent-failure, meta",<br />
"keywords_type": "nowords",<br />
"short_desc": "perf \"load time\" responsiveness jank fast slow memory battery heat GPU CPU SLA",<br />
"short_desc_type": "anywordssubstr",<br />
"f1": "OP",<br />
"f2": "product",<br />
"o2": "equals",<br />
"v2": "Core",<br />
"f3": "product",<br />
"o3": "equals",<br />
"v3": "Fenix",<br />
"f4": "product",<br />
"o4": "equals",<br />
"v4": "Firefox",<br />
"f5": "product",<br />
"o5": "equals",<br />
"v5": "Firefox for iOS",<br />
"f6": "product",<br />
"o6": "equals",<br />
"v6": "Focus",<br />
"f7": "product",<br />
"o7": "equals",<br />
"v7": "Focus-iOS",<br />
"f8": "product",<br />
"o8": "equals",<br />
"v8": "GeckoView",<br />
"f9": "CP",<br />
"f10": "component",<br />
"o10": "notequals",<br />
"v10": "Performance",<br />
"f11": "cf_performance_impact",<br />
"o11": "isempty",<br />
"j1": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
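The saved searches above are expressed as field/operator/value triples (<code>f1</code>/<code>o1</code>/<code>v1</code>). The same triples can be passed to Bugzilla's REST <code>/rest/bug</code> endpoint to fetch results outside the wiki. The sketch below composes the URL for a simplified version of the first triage query; it drops the OP/CP grouping, so it is an approximation rather than a faithful translation, and the pass-through behaviour should be verified against the Bugzilla REST documentation:<br />

```python
from urllib.parse import urlencode

# Field/operator/value triples adapted from the "Performance triage"
# query above (grouping omitted for simplicity).
PARAMS = {
    "query_format": "advanced",
    "resolution": "---",
    "f1": "cf_performance_impact", "o1": "equals", "v1": "?",
    "f2": "flagtypes.name", "o2": "notsubstring", "v2": "needinfo",
    "include_fields": "id,summary,status",
}

def triage_query_url(base="https://bugzilla.mozilla.org/rest/bug"):
    """Compose a REST URL for a simplified version of the triage query."""
    return f"{base}?{urlencode(PARAMS)}"

print(triage_query_url())
```

Fetching that URL returns JSON, which is convenient for scripting dashboards or reminders around the triage queue.<br />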
<br />
= Triage process =<br />
== Introduction ==<br />
The goal of performance triage is to identify the extent to which bugs impact the performance of our products, and to move these bugs towards an actionable state. The goal is not to diagnose or fix bugs during triage. We triage bugs that have been nominated for triage and bugs in the Core::Performance component that do not have the performance impact project flag set.<br />
<br />
During triage we may do any/all of the following:<br />
* Request further information from the reporter (such as a profile)<br />
* Set the performance impact project flag<br />
* Add performance keywords<br />
* Move the bug to a more appropriate component<br />
<br />
== Who is responsible for triage? ==<br />
Everyone is welcome to take part in triage. By default, everyone on the performance team is enrolled in [[#Triage rotation|triage rotation]], but we also have participants from outside the team.<br />
<br />
== How do I schedule a triage meeting? ==<br />
If you are on triage duty, you will receive an invitation as a reminder to schedule the triage meeting on the [https://calendar.google.com/calendar/u/0?cid=bW96aWxsYS5jb21fOWJrNWYycnFkZXVpcDM4amJlbGQ4NGtwcWNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ shared performance calendar], with the nominated sheriffs invited at a time that works for them. The responsibility of scheduling the meeting falls to the lead sheriff. Once a triage meeting has been scheduled, remove the reminder event from the calendar to avoid confusion. Using the shared calendar is encouraged, as it increases the visibility of performance triage and allows other members of the team to contribute to or observe the process.<br />
<br />
== What if a sheriff is unavailable? ==<br />
The rotation script is not perfect, and doesn’t know when people are on PTO or otherwise unavailable. If the lead sheriff is available, it is their responsibility to either schedule the triage with the remaining available sheriff or to identify a suitable substitute for the unavailable sheriff(s). If the lead sheriff is unavailable, this responsibility passes to the remaining available sheriffs.<br />
<br />
== How do I run a triage meeting? ==<br />
The following describes the triage process to follow during the meeting:<br />
<br />
# Ask if others would prefer you to share your screen. This can be especially helpful for those new to triage.<br />
# Open the [[#Performance triage|first triage query]] to show bugs nominated for triage or in the Core::Performance component without the performance impact project flag set. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* Bugs that look like tasks filed by members of the Performance team will generally need to be moved to the Core::Performance Engineering component.<br />
#* For defects: Determine if the bug is reproducible and actionable. If not, add a needinfo for the reporter asking for more information and move onto the next bug. We have a [[#New bug|template]] that you can modify as needed.<br />
#* For all bugs (including enhancements):<br />
#** Set the [[#How do I determine the performance impact project flag?|performance impact project flag]].<br />
#** Add the appropriate [[#How do I determine the performance keywords?|performance keywords]].<br />
#** Move the bug to the correct [[#How do I determine the correct Bugzilla component?|Bugzilla component]].<br />
# Open the [[#Performance triage (pending needinfo)|second triage query]] to show bugs that have open needinfo requests. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* If the needinfo was set less than 2 weeks ago, move onto the next bug.<br />
#* If the needinfo was set more than 2 weeks ago but less than 2 months ago, consider adding a needinfo for either: another reporter of the issue, someone with access to the appropriate platform(s) to attempt to reproduce the issue, or a relevant subject matter expert.<br />
#* If the open needinfo was set more than 2 months ago, close the bug as inactive. You can modify the [[#No response from reporter|inactive bug template]] as needed.<br />
# If time permits, open the [[#Recently opened bugs with performance keywords in the summary|third triage query]] to show recently opened bugs with performance related keywords in the summary. If any of these look like performance bugs, they can either be triaged the same way as bugs in the initial query or they can be [[#Bugzilla|nominated for triage]] in a subsequent meeting.<br />
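The age thresholds in step 3 can be summed up in a small helper. This is only a sketch: the function name is invented, and the 60-day stand-in for "2 months" is an approximation chosen for the example:<br />

```python
from datetime import date, timedelta

def needinfo_action(set_on: date, today: date) -> str:
    """Map the age of an open needinfo to the triage action in step 3.

    Thresholds follow the process above: under 2 weeks, wait; under
    roughly 2 months (60 days here), redirect the needinfo to another
    reporter or a subject matter expert; otherwise close as inactive.
    """
    age = today - set_on
    if age < timedelta(weeks=2):
        return "wait"
    if age < timedelta(days=60):
        return "redirect needinfo"
    return "close as inactive"

print(needinfo_action(date(2023, 10, 1), date(2023, 12, 4)))
```

A helper like this could drive an automated report that groups the pending-needinfo queue by the action each bug is due for.<br />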
<br />
== What if things don't go as expected? ==<br />
Don't panic! The triage process is not expected to be perfect, and can improve with your feedback. Maybe the result of the triage calculator doesn't feel right, or you find a scenario that's not covered in these guidelines. In this case we recommend that you bring it up in {{matrix|perf-triage}}, or consider scheduling a short meeting with some triage leads (you can see some recent leads in the [[#Triage rotation|triage rotation]]). If in doubt, leave a comment on the bug with your thoughts and move on. There's a chance someone will respond, but if not the next performance triage sheriffs may have some other ideas.<br />
<br />
== How do I determine the performance impact project flag? ==<br />
The [[../Bugzilla#Project Flag|performance impact project flag]] is used to indicate a bug’s relationship to the performance of our products. It can be applied to all bugs, and not only defects. The [[#Triage calculator|triage calculator]] should be used to help determine the most appropriate value for this flag. In addition to setting the performance impact project flag, make sure to use the “Copy Bugzilla Comment” button and paste this as a comment on the bug.<br />
<br />
For more information about this flag and what its settings mean, see this [https://blog.mozilla.org/performance/2022/11/07/understanding-performance-impact/ blog post].<br />
<br />
== How do I determine the performance keywords? ==<br />
There are several [[../Bugzilla#Keywords|performance related keywords]], which can be helpful to understand how our performance issues are distributed, or whenever there’s a concerted effort to improve a particular aspect of our products. The [[#Triage calculator|triage calculator]] may recommend keywords to set, and by typing “perf:” in the keywords field in Bugzilla, you will see the available options. Select all that apply to the bug.<br />
<br />
== How do I determine the correct Bugzilla component? ==<br />
Ideally we would only have bugs in the Core::Performance component that are the responsibility of the engineers in the performance team. For performance bugs to have the best chance of being fixed, it's important to assign them to the correct component. In some cases the correct component will be obvious from the bug summary, description, or steps to reproduce. In other cases, you may need to do a bit more work to identify the component. For example, if there's a profile associated with the bug, you could see where the majority of time is being spent using the category annotations.<br />
<br />
== How do I read a performance profile? ==<br />
It's useful to be able to understand a profile generated by the [https://profiler.firefox.com/ Firefox Profiler], and hopefully someone in the triage meeting will be able to help. If you find an interesting profile, or just want to understand how to use them to analyse a performance problem, we encourage you to post a link to the profile (or bug) in [https://chat.mozilla.org/#/room/#joy-of-profiling:mozilla.org #joy-of-profiling] where someone will be happy to help. The profile may even be analysed during one of the regular "Joy of Profiling" open sessions that can be found on the [https://calendar.google.com/calendar/embed?src=c_cbjhkf8gu6anajlklhuo04hpko%40group.calendar.google.com&ctz=Europe%2FLondon Performance Office Hours calendar].<br />
<br />
= Triage calculator =<br />
The [https://mozilla.github.io/perf-triage/calculator.html Performance Impact Calculator] was developed to assist in identifying and applying the [[../Bugzilla#Project Flag|performance impact project flag]] and [[../Bugzilla#Keywords|performance keywords]] consistently. If you have feedback or would like to suggest changes to this tool, please share these in the [https://chat.mozilla.org/#/room/#perf-triage:mozilla.org #perf-triage Matrix channel].<br />
<br />
= Triage rotation =<br />
The sheriffs are allocated on a weekly basis, which is published [https://mozilla.github.io/perf-triage/ here]. The rotation is generated by [https://github.com/mozilla/perf-triage/blob/main/rotation.py this script].<br />
<br />
= Templates =<br />
== New bug ==<br />
This template is included in the description for new bugs opened in the Core::Performance component. If a bug is opened in another component and then moved to Core::Performance, this template can be used as needed to request additional information from the reporter.<br />
<br />
<pre><br />
### Basic information<br />
<br />
Steps to Reproduce:<br />
<br />
<br />
Expected Results:<br />
<br />
<br />
Actual Results:<br />
<br />
<br />
---<br />
<br />
### Performance recording (profile)<br />
<br />
Profile URL:<br />
(If this report is about slow performance or high CPU usage, please capture a performance profile by following the instructions at https://profiler.firefox.com/. Then upload the profile and insert the link here.)<br />
<br />
#### System configuration:<br />
<br />
OS version:<br />
GPU model:<br />
Number of cores: <br />
Amount of memory (RAM): <br />
<br />
### More information<br />
<br />
Please consider attaching the following information after filing this bug, if relevant:<br />
<br />
- Screenshot / screen recording<br />
- Anonymized about:memory dump, for issues with memory usage<br />
- Troubleshooting information: Go to about:support, click "Copy text to clipboard", paste it to a file, save it, and attach the file here.<br />
<br />
---<br />
<br />
Thanks so much for your help.<br />
</pre><br />
<br />
== Moved to Core::Performance ==<br />
<pre><br />
This bug was moved into the Performance component. Reporter, could you make sure the following information is on this bug?<br />
<br />
- For slowness or high CPU usage, capture a profile with http://profiler.firefox.com/ , upload it and share the link here.<br />
- For memory usage issues, capture a memory dump from about:memory and attach it to this bug.<br />
- Troubleshooting information: Go to about:support, click "Copy raw data to clipboard", paste it into a file, save it, and attach the file here.<br />
<br />
Thank you.<br />
</pre><br />
<br />
== No longer able to reproduce ==<br />
<pre>This bug doesn’t seem to happen anymore in current versions of Firefox. Please reopen or file a new bug if you see it again.</pre><br />
<br />
== No response from reporter ==<br />
<pre>With no answer from the reporter, we don’t have enough data to reproduce and/or fix this issue. Please reopen or file a new bug with more information if you see it again.</pre><br />
<br />
== Expected behaviour ==<br />
<pre>This is expected behavior. Please reopen or file a new bug if you think otherwise.</pre><br />
<br />
== Website issue ==<br />
<pre>According to the investigation, this is a website issue. Please reopen or file a new bug if you think otherwise.</pre></div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Triage&diff=1248786Performance/Triage2023-10-31T11:16:23Z<p>Davehunt: </p>
<hr />
<div>{{DISPLAYTITLE:Performance Triage}}<br />
<br />
{{message/box|If you have any feedback/suggestions/questions regarding the performance triage process, you can share them in {{matrix|perf-triage}}, or reach out to {{people|davehunt|Dave Hunt}} or {{people|frankd|Frank Doty}}.}}<br />
<br />
= Nomination =<br />
== Bugzilla ==<br />
To (re)nominate a bug for triage, set the [[../Bugzilla#Project Flag|Performance Impact flag]] in Bugzilla to <code>?</code><br />
<br />
This can be found by clicking '''Show Advanced Fields''' followed by '''Set bug flags''' when entering a new bug:<br />
<br />
[[File:Bugzilla performance nomination on new bug form.png|none]]<br />
<br />
Or by expanding the '''Tracking''' section when editing an existing bug:<br />
<br />
[[File:Screenshot 2022-02-24 at 19.53.54.png|none]]<br />
<br />
== GitHub ==<br />
To nominate a bug for triage, add the '''Performance''' label to an issue. This can be done by filing a new issue with the "Performance issue" template:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.22.58.png|none|Screenshot of filing a "Performance issue" template on GitHub]]<br />
<br />
Or by opening an existing issue on GitHub and selecting the label from the right-hand bar:<br />
<br />
[[File:Screen Shot 2022-05-24 at 11.32.09.png|Screenshot of adding a performance label on GitHub]]<br />
<br />
Currently, only the following GitHub repositories are supported:<br />
* [https://github.com/mozilla-mobile/fenix/ fenix]<br />
* [https://github.com/mozilla-mobile/android-components/ android-components]<br />
* [https://github.com/mozilla-mobile/focus-android/ focus-android]<br />
<br />
= Queries =<br />
== Performance triage ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "?",<br />
"f3": "flagtypes.name",<br />
"o3": "notsubstring",<br />
"v3": "needinfo",<br />
"f4": "CP",<br />
"f5": "OP",<br />
"f6": "product",<br />
"o6": "equals",<br />
"v6": "Core",<br />
"f7": "component",<br />
"o7": "equals",<br />
"v7": "Performance",<br />
"f8": "keywords",<br />
"o8": "notsubstring",<br />
"v8": "meta",<br />
"f9": "cf_performance_impact",<br />
"o9": "isempty",<br />
"f10": "flagtypes.name",<br />
"o10": "notsubstring",<br />
"v10": "needinfo",<br />
"f11": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Performance triage (pending needinfo) ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Performance Triage (pending needinfo)",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"f1": "OP",<br />
"f2": "cf_performance_impact",<br />
"o2": "equals",<br />
"v2": "?",<br />
"f3": "flagtypes.name",<br />
"o3": "substring",<br />
"v3": "needinfo",<br />
"f4": "CP",<br />
"f5": "OP",<br />
"f6": "product",<br />
"o6": "equals",<br />
"v6": "Core",<br />
"f7": "component",<br />
"o7": "equals",<br />
"v7": "Performance",<br />
"f8": "keywords",<br />
"o8": "notsubstring",<br />
"v8": "meta",<br />
"f9": "cf_performance_impact",<br />
"o9": "isempty",<br />
"f10": "flagtypes.name",<br />
"o10": "substring",<br />
"v10": "needinfo",<br />
"f11": "CP",<br />
"j_top": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
<br />
== Recently opened bugs with performance keywords in the summary ==<br />
<bugzilla><br />
{<br />
"query_based_on": "Recently opened bugs with performance keywords in the summary",<br />
"query_format": "advanced",<br />
"resolution": "---",<br />
"chfield": "[Bug creation]",<br />
"chfieldfrom": "-2w",<br />
"keywords": "crash, intermittent-failure, meta",<br />
"keywords_type": "nowords",<br />
"short_desc": "perf \"load time\" responsiveness jank fast slow memory battery heat GPU CPU SLA",<br />
"short_desc_type": "anywordssubstr",<br />
"f1": "OP",<br />
"f2": "product",<br />
"o2": "equals",<br />
"v2": "Core",<br />
"f3": "product",<br />
"o3": "equals",<br />
"v3": "Fenix",<br />
"f4": "product",<br />
"o4": "equals",<br />
"v4": "Firefox",<br />
"f5": "product",<br />
"o5": "equals",<br />
"v5": "Firefox for iOS",<br />
"f6": "product",<br />
"o6": "equals",<br />
"v6": "Focus",<br />
"f7": "product",<br />
"o7": "equals",<br />
"v7": "Focus-iOS",<br />
"f8": "product",<br />
"o8": "equals",<br />
"v8": "GeckoView",<br />
"f9": "CP",<br />
"f10": "component",<br />
"o10": "notequals",<br />
"v10": "Performance",<br />
"f11": "cf_performance_impact",<br />
"o11": "isempty",<br />
"j1": "OR",<br />
"order": "Bug Number",<br />
"include_fields": "id, summary, status"<br />
}<br />
</bugzilla><br />
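These saved searches can also be run outside the Bugzilla UI, since the REST API's <tt>/rest/bug</tt> endpoint accepts search parameters as a query string. As a rough sketch (assuming Python, and a simplified parameter subset; the field names are taken from the saved searches above and may need adjusting for the full advanced-search operators):<br />
<br />
```python
# Sketch: build a Bugzilla REST API search URL from query parameters.
# Simplified subset of the triage query above; the field names are
# assumptions based on the saved search and may need adjusting.
from urllib.parse import urlencode

BMO_REST = "https://bugzilla.mozilla.org/rest/bug"

def query_url(params):
    """Return a REST search URL for the given parameters."""
    return BMO_REST + "?" + urlencode(params)

url = query_url({
    "resolution": "---",                    # open bugs only
    "cf_performance_impact": "?",           # nominated for triage
    "include_fields": "id,summary,status",  # keep the response small
})
print(url)
```
<br />
Fetching that URL returns a JSON document with a <tt>bugs</tt> array containing the requested fields.<br />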
<br />
= Triage process =<br />
== Introduction ==<br />
The goal of performance triage is to identify the extent to which bugs impact the performance of our products, and to move these bugs towards an actionable state. The goal is not to diagnose or fix bugs during triage. We triage bugs that have been nominated for triage and bugs in the Core::Performance component that do not have the performance impact project flag set.<br />
<br />
During triage we may do any/all of the following:<br />
* Request further information from the reporter (such as a profile)<br />
* Set the performance impact project flag<br />
* Add performance keywords<br />
* Move the bug to a more appropriate component<br />
<br />
== Who is responsible for triage? ==<br />
Everyone is welcome to take part in triage. By default, everyone on the performance team is enrolled in [[#Triage rotation|triage rotation]], but we also have participants from outside the team.<br />
<br />
== How do I schedule a triage meeting? ==<br />
If you are on triage duty, you will receive an invitation as a reminder to schedule the triage meeting on the [https://calendar.google.com/calendar/u/0?cid=bW96aWxsYS5jb21fOWJrNWYycnFkZXVpcDM4amJlbGQ4NGtwcWNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ shared performance calendar], with the nominated sheriffs invited at a time that works for them. The responsibility for scheduling the meeting falls to the lead sheriff. Once the meeting has been scheduled, remove the reminder event from the calendar to avoid confusion. Using the shared calendar is recommended, as it increases the visibility of the performance triage and allows other members of the team to contribute or observe the process.<br />
<br />
== What if a sheriff is unavailable? ==<br />
The rotation script is not perfect, and doesn’t know when people are on PTO or otherwise unavailable. If the lead sheriff is available, it is their responsibility to either schedule the triage with the remaining available sheriff or to identify a suitable substitute for the unavailable sheriff(s). If the lead sheriff is unavailable, this responsibility passes on to the remaining available sheriffs.<br />
<br />
== How do I run a triage meeting? ==<br />
The following describes the triage process to follow during the meeting:<br />
<br />
# Ask if others would prefer you to share your screen. This can be especially helpful for those new to triage.<br />
# Open the [[#Performance triage|first triage query]] to show bugs nominated for triage or in the Core::Performance component without the performance impact project flag set. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* Bugs that look like tasks that were filed by members of the Performance team will generally need to be moved to the Core::Performance Engineering component.<br />
#* For defects: Determine if the bug is reproducible and actionable. If not, add a needinfo for the reporter asking for more information and move on to the next bug. We have a [[#New bug|template]] that you can modify as needed.<br />
#* For all bugs (including enhancements):<br />
#** Set the [[#How do I determine the performance impact project flag?|performance impact project flag]].<br />
#** Add the appropriate [[#How do I determine the performance keywords?|performance keywords]].<br />
#** Move the bug to the correct [[#How do I determine the correct Bugzilla component?|Bugzilla component]].<br />
# Open the [[#Performance triage (pending needinfo)|second triage query]] to show bugs that have open needinfo requests. The bugs are sorted from oldest to newest. For each bug in the list, follow these steps:<br />
#* If the needinfo was set less than 2 weeks ago, move on to the next bug.<br />
#* If the needinfo was set more than 2 weeks ago but less than 2 months ago, consider adding a needinfo for either: another reporter of the issue, someone with access to the appropriate platform(s) to attempt to reproduce the issue, or a relevant subject matter expert.<br />
#* If the open needinfo was set more than 2 months ago, close the bug as inactive. You can modify the [[#No response from reporter|inactive bug template]] as needed.<br />
# If time permits, open the [[#Recently opened bugs with performance keywords in the summary|third triage query]] to show recently opened bugs with performance related keywords in the summary. If any of these look like performance bugs, they can either be triaged the same way as bugs in the initial query or they can be [[#Bugzilla|nominated for triage]] in a subsequent meeting.<br />
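The needinfo aging rules in step 3 above can be read as a small decision procedure. As an illustration only (a hypothetical Python sketch, not part of any triage tooling; the two-month threshold is approximated as 60 days):<br />
<br />
```python
from datetime import date, timedelta

def needinfo_action(set_on, today):
    """Suggest a triage action for a bug with an open needinfo request.

    Hypothetical helper encoding the aging rules above; "two months"
    is approximated here as 60 days.
    """
    age = today - set_on
    if age < timedelta(weeks=2):
        return "wait"            # set less than 2 weeks ago: move on
    if age < timedelta(days=60):
        return "redirect"        # needinfo another reporter or an expert
    return "close-inactive"      # over 2 months old: use the inactive template

print(needinfo_action(date(2023, 1, 1), date(2023, 1, 10)))
```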
<br />
== What if things don't go as expected? ==<br />
Don't panic! The triage process is not expected to be perfect, and can improve with your feedback. Maybe the result of the triage calculator doesn't feel right, or you find a scenario that's not covered in these guidelines. In this case we recommend that you bring it up in {{matrix|perf-triage}}, or consider scheduling a short meeting with some triage leads (you can see some recent leads in the [[#Triage rotation|triage rotation]]). If in doubt, leave a comment on the bug with your thoughts and move on. There's a chance someone will respond, but if not, the next performance triage sheriffs may have some other ideas.<br />
<br />
== How do I determine the performance impact project flag? ==<br />
The [[../Bugzilla#Project Flag|performance impact project flag]] is used to indicate a bug’s relationship to the performance of our products. It can be applied to all bugs, and not only defects. The [[#Triage calculator|triage calculator]] should be used to help determine the most appropriate value for this flag. In addition to setting the performance impact project flag, make sure to use the “Copy Bugzilla Comment” button and paste this as a comment on the bug.<br />
<br />
For more information about what this flag and its settings mean, see this [https://blog.mozilla.org/performance/2022/11/07/understanding-performance-impact/ blog post].<br />
<br />
== How do I determine the performance keywords? ==<br />
There are several [[../Bugzilla#Keywords|performance related keywords]], which can be helpful to understand how our performance issues are distributed, or whenever there’s a concerted effort to improve a particular aspect of our products. The [[#Triage calculator|triage calculator]] may recommend keywords to set, and by typing “perf:” in the keywords field in Bugzilla, you will see the available options. Select all that apply to the bug.<br />
<br />
== How do I determine the correct Bugzilla component? ==<br />
Ideally we would only have bugs in the Core::Performance component that are the responsibility of the engineers in the performance team. For performance bugs to have the best chance of being fixed, it's important to assign them to the correct component. In some cases the correct component will be obvious from the bug summary, description, or steps to reproduce. In other cases, you may need to do a bit more work to identify the component. For example, if there's a profile associated with the bug, you could see where the majority of time is being spent using the category annotations.<br />
<br />
== How do I read a performance profile? ==<br />
It's useful to be able to understand a profile generated by the [https://profiler.firefox.com/ Firefox Profiler], and hopefully someone in the triage meeting will be able to help. If you find an interesting profile, or just want to understand how to use them to analyse a performance problem, we encourage you to post a link to the profile (or bug) in [https://chat.mozilla.org/#/room/#joy-of-profiling:mozilla.org #joy-of-profiling] where someone will be happy to help. The profile may even be analysed during one of the regular "Joy of Profiling" open sessions that can be found on the [https://calendar.google.com/calendar/embed?src=c_cbjhkf8gu6anajlklhuo04hpko%40group.calendar.google.com&ctz=Europe%2FLondon Performance Office Hours calendar].<br />
<br />
= Triage calculator =<br />
The [https://mozilla.github.io/perf-triage/calculator.html Performance Impact Calculator] was developed to assist in identifying and applying the [[../Bugzilla#Project Flag|performance impact project flag]] and [[../Bugzilla#Keywords|performance keywords]] consistently. If you have feedback or would like to suggest changes to this tool, please share these in the [https://chat.mozilla.org/#/room/#perf-triage:mozilla.org #perf-triage Matrix channel].<br />
<br />
= Triage rotation =<br />
The sheriffs are allocated on a weekly basis, which is published [https://mozilla.github.io/perf-triage/ here]. The rotation is generated by [https://github.com/mozilla/perf-triage/blob/main/rotation.py this script].<br />
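For illustration only (a hypothetical Python sketch, not the actual logic in rotation.py), a rotation that pairs a lead sheriff with a secondary sheriff each week might look like:<br />
<br />
```python
import itertools

def weekly_rotation(members, weeks):
    """Pair a lead sheriff with a secondary sheriff for each week.

    Hypothetical sketch; the real rotation.py uses its own selection logic.
    """
    cycle = itertools.cycle(members)
    return [(next(cycle), next(cycle)) for _ in range(weeks)]

print(weekly_rotation(["alice", "bob", "carol"], 3))
```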
<br />
= Templates =<br />
== New bug ==<br />
This template is included in the description for new bugs opened in the Core::Performance component. If a bug is opened in another component and then moved to Core::Performance, this template can be used as needed to request additional information from the reporter.<br />
<br />
<pre><br />
### Basic information<br />
<br />
Steps to Reproduce:<br />
<br />
<br />
Expected Results:<br />
<br />
<br />
Actual Results:<br />
<br />
<br />
---<br />
<br />
### Performance recording (profile)<br />
<br />
Profile URL:<br />
(If this report is about slow performance or high CPU usage, please capture a performance profile by following the instructions at https://profiler.firefox.com/. Then upload the profile and insert the link here.)<br />
<br />
#### System configuration:<br />
<br />
OS version:<br />
GPU model:<br />
Number of cores: <br />
Amount of memory (RAM): <br />
<br />
### More information<br />
<br />
Please consider attaching the following information after filing this bug, if relevant:<br />
<br />
- Screenshot / screen recording<br />
- Anonymized about:memory dump, for issues with memory usage<br />
- Troubleshooting information: Go to about:support, click "Copy text to clipboard", paste it into a file, save it, and attach the file here.<br />
<br />
---<br />
<br />
Thanks so much for your help.<br />
</pre><br />
<br />
== Moved to Core::Performance ==<br />
<pre><br />
This bug was moved into the Performance component. Reporter, could you make sure the following information is on this bug?<br />
<br />
- For slowness or high CPU usage, capture a profile with http://profiler.firefox.com/ , upload it and share the link here.<br />
- For memory usage issues, capture a memory dump from about:memory and attach it to this bug.<br />
- Troubleshooting information: Go to about:support, click "Copy raw data to clipboard", paste it into a file, save it, and attach the file here.<br />
<br />
Thank you.<br />
</pre><br />
<br />
== No longer able to reproduce ==<br />
<pre>This bug doesn’t seem to happen anymore in current versions of Firefox. Please reopen or file a new bug if you see it again.</pre><br />
<br />
== No response from reporter ==<br />
<pre>With no answer from the reporter, we don’t have enough data to reproduce and/or fix this issue. Please reopen or file a new bug with more information if you see it again.</pre><br />
<br />
== Expected behaviour ==<br />
<pre>This is expected behavior. Please reopen or file a new bug if you think otherwise.</pre><br />
<br />
== Website issue ==<br />
<pre>According to the investigation, this is a website issue. Please reopen or file a new bug if you think otherwise.</pre></div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools/Onboarding&diff=1247193Performance/Tools/Onboarding2023-07-24T18:34:33Z<p>Davehunt: </p>
<hr />
<div>{{DISPLAYTITLE:Onboarding with Performance Tools}}<br />
<br />
This page is aimed at people who are new to Mozilla and want to contribute to projects related to Performance Tools. If you run into issues or have doubts, check out the [[#Resources]] section below and don’t hesitate to ask questions. The goal of these steps is to make sure you have the basics of your development environment working. Once you do, we can get you started with working on an actual bug, yay!<br />
<br />
= Getting started =<br />
# Set up a [[#Bugzilla]] account.<br />
# Set up a [[#Phonebook]] account.<br />
# For direct communication with us it will be beneficial to set up [[#Matrix]], join our public channels, and introduce yourself.<br />
<br />
= Getting the code =<br />
<br />
== Performance testing ==<br />
The first thing to do is to get your build environment set up. Follow the Getting Started instructions [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions#Getting_started here]. You can find more instructions [https://firefox-source-docs.mozilla.org/contributing/how_to_contribute_firefox.html here] as well. We suggest using an artifact build when you’re asked, since it speeds everything up a lot. After you have the build ready and have run <tt>./mach run</tt> successfully, you should be good to go. If you hit any issues getting set up, we can help you in {{matrix|perftest}}.<br />
<br />
So you’ve run Firefox locally - now what? It’s time to test it!<br />
<br />
We do performance testing, so you will be most interested in the [https://firefox-source-docs.mozilla.org/testing/perfdocs/ performance testing documentation]. A simple test to start with is the following, which runs a Google search page load test with Raptor:<br />
./mach raptor --test google-search<br />
<br />
The code for these tools resides in [https://dxr.mozilla.org/mozilla-central/source/testing/raptor this folder]. Browsertime code can also be found [https://dxr.mozilla.org/mozilla-central/source/tools/browsertime here] after it’s installed.<br />
<br />
== PerfDocs ==<br />
Even though our primary focus is on performance testing harnesses, like Raptor, we have many other projects to work on too! If you come across bugs that talk about <tt>PerfDocs</tt>, then read on for more information on how to hack on it - it's a bit simpler.<br />
<br />
This project is for building up documentation about all of our tests dynamically, and you can find it in the [https://firefox-source-docs.mozilla.org/testing/perfdocs/index.html Firefox Source Docs]. It has two stages: a verification stage and a generation stage. The verification stage ensures that all tests are documented and that all documented tests exist. The generation stage, as you may have guessed, generates the documentation! The first stage can be run with:<br />
./mach lint -l perfdocs<br />
This should pass because we can't land patches unless perfdocs is passing. The generation stage is run by calling:<br />
./mach lint -l perfdocs --fix<br />
If no errors are found during the verification (which is always run before generation), then the documentation is produced. The actual document linked above in the source tree docs is produced in continuous integration (you can also build it locally with <tt>./mach doc</tt> if you're interested).<br />
<br />
The whole system is relatively simple, and you can find the code for it in [https://searchfox.org/mozilla-central/source/tools/lint/perfdocs this folder].<br />
<br />
= Work on bugs and get code review =<br />
Once you are familiar with the code of the test harnesses, and the tests you might want to start with your first contribution. You can follow [https://firefox-source-docs.mozilla.org/contributing/how_to_contribute_firefox.html#to-write-a-patch these instructions] on how to submit a patch. You can find review instructions that are specific to the test projects at [[../Testing/Reviews]].<br />
<br />
How you test a patch will change depending on what's being modified. Generally, you will be running Raptor (with commands similar to those listed above) or its unit tests to test your changes, but you can ask us in {{matrix|perftest}} if you're not sure what you should run or if you need help getting a test command working. For the patch reviewer, you can use <tt>#perftest</tt> and someone from the team will review it (or you can put whoever helped you with the patch).<br />
<br />
You can find “good-first-bugs” by looking in codetribute in the [https://codetribute.mozilla.org/projects/automation Test Automation] section, projects from our team include [https://codetribute.mozilla.org/projects/automation?project%3DTalos Talos], [https://codetribute.mozilla.org/projects/automation?project%3DRaptor Raptor], and [https://codetribute.mozilla.org/projects/automation?project%3DPerformance Performance]. Many team members also work on [https://codetribute.mozilla.org/projects/reporting Dashboards and Reporting] so that would be another good place to look. If you’re not sure what you want to hack on, ask us in {{matrix|perftest}} - we’d be happy to help find you something. :)<br />
<br />
= Meetings =<br />
{{:TestEngineering/Performance/Meetings}}<br />
<br />
= Phonebook =<br />
The [https://people.mozilla.org/ people directory] is a secure place to quickly find your team members and easily discover new ones.<br />
<br />
Please ensure your profile has:<br />
* photo (of you!)<br />
* username (this is included in your profile URL)<br />
* [[#GitHub]] identity<br />
* [[#Bugzilla]] identity<br />
* [[#Matrix]] nick<br />
* [[#Slack]] nick<br />
<br />
= Calendar =<br />
There's a Performance [https://calendar.google.com/calendar/embed?src=mozilla.com_9bk5f2rqdeuip38jbeld84kpqc%40group.calendar.google.com shared calendar] ([https://calendar.google.com/calendar/ical/mozilla.com_9bk5f2rqdeuip38jbeld84kpqc%40group.calendar.google.com/public/basic.ics iCal]), which is primarily used for PTO. Add this calendar to your google calendar by taking the iCal link and using it in the "Add Calendar -> From URL" section.<br />
<br />
= PTO =<br />
Add any PTO to the shared calendar (see above) and [https://docs.google.com/document/d/1kHHimZH65Rg_Nzx_JfyPqeXL6ONTse0nG2t9Nkv4eOY/edit# team meeting notes] so the team are aware. During PTO please also update your name in Bugzilla's [https://bugzilla.mozilla.org/userprefs.cgi?tab=account user preferences] to indicate that you are away, and when you will return.<br />
<br />
= Communication =<br />
<br />
== Groups ==<br />
Feel free to sign up to the following groups, and post to them when you have something to share or questions to ask.<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perftest perftest] is for team communications and setting up test accounts<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/performance performance] is for general discussion and announcements<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perfteam perfteam] is for the broader performance team<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perf-sheriffs perf-sheriffs] is for discussions related to performance sheriffing<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perftest-alerts perftest-alerts] is for alerts related to performance tests<br />
<br />
== [[Matrix]] ==<br />
Join our public channels listed below and introduce yourself. See [[../#Who_we_are]] for who you can ping for help or to chat. We’re nice, I promise, but we might not answer right away due to different time zones, time off, etc. So please be patient. When you want to ask a question on Matrix, just go ahead and ask it even if no one appears to be around/responding. Provide lots of detail so that we have a better chance of helping you. If you don’t get an answer right away, check again in a few hours – someone may have answered you in the meantime. If you’re having trouble reaching us over Matrix, you are welcome to send an email to us instead. It’s a good idea to include your Matrix nick in your email message.<br />
<br />
* {{matrix|perf|General performance chat}}<br />
* {{matrix|perftest|Public channel for testing related projects}}<br />
* {{matrix|profiler|Public channel for the Firefox Profiler project}}<br />
* {{matrix|perfteam|Public performance team channel}}<br />
* {{matrix|perfsheriffs|Performance sheriffs channel}}<br />
<br />
== Slack ==<br />
Here are some useful Slack channels to start with:<br />
* #announcements - Global communication and announcements<br />
* #moco - Used for questions during internal meetings<br />
* #newsroom - Firefox and relevant tech news<br />
* #servicedesk - Internal IT support for employees<br />
* #peopleteam - People support<br />
<br />
= Credentials =<br />
There's a shared [https://1password.com/ 1Password] vault for credentials that you may need to access. Please submit a request for 1Password from [https://mozilla-hub.atlassian.net/servicedesk/customer/portal/6 ServiceDesk]. Once you have an account and the software set up (available on iOS, Android, Windows, macOS) you can be added to the team vault.<br />
<br />
= Hardware =<br />
List any hardware devices that you have assigned to you in [https://docs.google.com/document/d/1T7O7uIM05xG1k5E79GQt4ac2gl6CRpE9P1BqrU-3RS8/edit# this document]. This can be valuable if we need to identify somebody on the team that has a specific device or platform for running tests, reproducing issues, etc. You may need additional hardware such as mobile devices, laptops, etc. You can request this equipment from [https://mozilla.service-now.com/sp The Hub].<br />
<br />
= Bugzilla =<br />
You will need to create an account in Mozilla's instance of Bugzilla. See [[BMO/UserGuide]] for how to get started. It's helpful to include your Matrix/Slack handle in your name field prefixed by <code>:</code> so you can quickly be identified by other users of Bugzilla. Other details that can be helpful are your preferred pronouns, current timezone, and if you're currently on PTO. For example: <br />
<br />
Dave Hunt [:davehunt] [he/him] ⌚BST (away until 1st January 2021)<br />
<br />
== Products/Components ==<br />
The relevant components for the team are:<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Core&component=Gecko%20Profiler#Gecko%20Profiler Core::Gecko Profiler]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Firefox%20Profiler Firefox Profiler]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=AWSY#AWSY Testing::AWSY]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=Condprofile#Condprofile Testing::Condprofile]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=mozperftest#mozperftest Testing::mozperftest]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=Performance#Performance Testing::Performance]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=Raptor#Raptor Testing::Raptor]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=Talos#Talos Testing::Talos]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Tree%20Management&component=Perfherder#Perfherder Tree Management::Perfherder]<br />
<br />
== Whiteboard Entries ==<br />
The following whiteboard entries are used by the team:<br />
* [https://bugzilla.mozilla.org/buglist.cgi?status_whiteboard=%5Bperftest%3Atriage%5D&resolution=---&status_whiteboard_type=allwordssubstr <nowiki>[perftest:triage]</nowiki>] - discussion required in [[TestEngineering/Performance/Triage_Process|triage]].<br />
<br />
== Keywords ==<br />
See [[Performance/Bugzilla#Keywords]]<br />
<br />
== Resources ==<br />
* [https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines Bug writing guidelines]<br />
* [https://bugzilla.mozilla.org/page.cgi?id=etiquette.html Bugzilla etiquette]<br />
<br />
= GitHub =<br />
If you don't already have one, you will need to [https://docs.github.com/en/get-started/onboarding/getting-started-with-your-github-account create a GitHub account] and enable [https://help.github.com/articles/about-two-factor-authentication/ two-factor authentication].<br />
<br />
We have a [https://github.com/orgs/mozilla/teams/perftest GitHub team] for simplifying access to repositories. All team members that belong to the [https://github.com/orgs/mozilla Mozilla organisation] should be added to this team as members. Team maintainers can add new members following the process [[GitHub#Team_Maintainers_.26_Project_Leads|documented here]]. Other contributors will need to be manually granted access to individual repositories as needed.<br />
<br />
= Shared folder =<br />
We have a [https://drive.google.com/drive/u/0/folders/1EyeiuJYmivvY83BCicqeR4-59HRTU-ll shared folder] in Google Drive.<br />
<br />
= Sheriffing =<br />
Performance sheriffs will need to complete the following:<br />
<br />
* Request an LDAP account<br />
* Request commit access:<br />
** Level 1: {{bug|1398609}}<br />
** Level 3: {{bug|1509284}} <br />
* Request access to Treeherder sheriff group: {{bug|1506882}}<br />
* Training (ranked)<br />
* Join the [https://groups.google.com/a/mozilla.com/forum/#!forum/perf-sheriffs perf-sheriffs] Google Group<br />
<br />
= Review policy =<br />
When you push a commit up for review, you should use the following syntax to request review from the [https://phabricator.services.mozilla.com/tag/perftest-reviewers/ perftest review group]: <br />
<br />
r=#perftest<br />
<br />
For most patches, a single r+ from one reviewer is required before the patch can be sent off for integration. More reviewers can pitch in on the same review, and in this case Lando will automatically rewrite the commit message to show who was involved in signing off the patch, for example:<br />
<br />
Bug 1546611 - Fix "None" checks when validating test manifests; r=perftest,dhunt<br />
<br />
[https://secure.phabricator.com/book/phabflavor/article/writing_reviewable_code/ See the section "Write Sensible Commit Messages" here for how to form good commit titles and summaries].<br />
<br />
When you occasionally have to single out individuals for specific topic expertise, add an exclamation mark after the nickname:<br />
<br />
r=#perftest,dhunt!<br />
<br />
This will add the patch to the shared review queue, but also block the review from landing pending Dave's approval. Requested changes by other reviewers will also block the review.<br />
<br />
Note that a [https://phabricator.services.mozilla.com/H216 Herald rule] exists that will set this group as a blocking reviewer for certain paths in the tree. This was configured via {{bug|1618249}}.<br />
<br />
== Module ownership policy ==<br />
If a patch touches code in a module owned by someone outside of the team, you must follow the [https://www.mozilla.org/en-US/about/governance/policies/module-ownership/ module ownership policy] and request review from the module owner or a peer listed in [[Modules]].<br />
<br />
= Duties =<br />
Everyone on the team will be expected to carry out the following duties to ensure effective collaboration both within the team, and with other teams.<br />
<br />
== Code reviews ==<br />
Visit [https://phabricator.services.mozilla.com/differential/ active revisions] in Phabricator every day to:<br />
* Review any '''Must Review''' and '''Ready to Review''' patches. Pay particular attention to any patches that list you as the sole reviewer. Consider adding the <code>#perftest</code> review group for a wider audience of reviewers. All review requests should receive a response (not necessarily a complete review) within 1 working day.<br />
* Follow up on any '''Waiting on Review''' patches by prompting appropriate team members for a review.<br />
* Review any '''Waiting on Authors''' patches and prompt the authors if there is an action they need to take.<br />
<br />
== Bugzilla requests ==<br />
Visit the [https://bugzilla.mozilla.org/request.cgi request queue] in Bugzilla every day and filter by your account to see all requests that have been submitted for you. Requests for P1/P2 bugs should receive a response within 1 working day. All other requests should receive a response within 5 working days.<br />
<br />
= Resources =<br />
These resources might not be directly related to performance tools or the code we work on, but they may still contain useful information.<br />
# Search Mozilla’s code repositories with [https://searchfox.org/mozilla-central/source/ searchfox].<br />
# [https://mozilla-version-control-tools.readthedocs.org/en/latest/hgmozilla/index.html Mercurial for Mozillians].<br />
# Textbook about general open source practices: [https://quaid.fedorapeople.org/TOS/Practical_Open_Source_Software_Exploration/html/index.html Practical Open Source Software Exploration]<br />
# If you’d rather use git instead of hg, see [https://github.com/glandium/git-cinnabar/wiki/Mozilla:-A-git-workflow-for-Gecko-development git workflow for Gecko development] and/or [https://sny.no/2016/03/geckogit this blog post by :ato].<br />
# A [[Firefox/Dev_Cheatsheet|mercurial firefox cheat-sheet]] with helpful commands that are used often.</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Onboarding&diff=1247192TestEngineering/Performance/Onboarding2023-07-24T17:58:46Z<p>Davehunt: Redirected page to Performance/Tools/Onboarding</p>
<hr />
<div>#REDIRECT [[Performance/Tools/Onboarding]]<br />
<br />
= Getting Started =<br />
If you're looking for a guide to help you get going, the [https://wiki.mozilla.org/TestEngineering/Performance/NewContributors new contributor page] should have all the information you need.<br />
<br />
= Meetings =<br />
{{:TestEngineering/Performance/Meetings}}<br />
<br />
= Phonebook =<br />
The [https://people.mozilla.org/ people directory] is a secure place to quickly find your team members and easily discover new ones.<br />
<br />
Please ensure your profile has:<br />
* photo (of you!)<br />
* username (this is included in your profile URL)<br />
* GitHub identity<br />
* [[#Bugzilla]] identity<br />
* Matrix nick<br />
* Slack nick<br />
<br />
= Calendar =<br />
There's a Performance [https://calendar.google.com/calendar/embed?src=mozilla.com_9bk5f2rqdeuip38jbeld84kpqc%40group.calendar.google.com shared calendar] ([https://calendar.google.com/calendar/ical/mozilla.com_9bk5f2rqdeuip38jbeld84kpqc%40group.calendar.google.com/public/basic.ics iCal]), which is primarily used for PTO. Add this calendar to your Google Calendar by taking the iCal link and using it in the "Add Calendar -> From URL" section.<br />
<br />
= PTO =<br />
Add any PTO to the shared calendar (see above) and [https://docs.google.com/document/d/1kHHimZH65Rg_Nzx_JfyPqeXL6ONTse0nG2t9Nkv4eOY/edit# team meeting notes] so the team are aware. During PTO please also update your name in Bugzilla's [https://bugzilla.mozilla.org/userprefs.cgi?tab=account user preferences] to indicate that you are away, and when you will return.<br />
<br />
= Communication =<br />
<br />
== Groups ==<br />
Feel free to sign up to the following groups, and post to them when you have something to share or questions to ask.<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perftest perftest] is for team communications and setting up test accounts<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/performance performance] is for general discussion and announcements<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perfteam perfteam] is for the broader performance team<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perf-sheriffs perf-sheriffs] is for discussions related to performance sheriffing<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perftest-alerts perftest-alerts] is for alerts related to performance tests<br />
<br />
== [[Matrix]] ==<br />
Feel free to browse through the full list at your leisure and find things that interest you. Here are some to start with:<br />
* {{matrix|perf|General performance chat}}<br />
* {{matrix|perftest|Public team channel}}<br />
* {{matrix|perfteam|Public performance team channel}}<br />
* {{matrix|perfsheriffs|Performance sheriffs channel}}<br />
<br />
== Slack ==<br />
Here are some useful Slack channels to start with:<br />
* #announcements - Global communication and announcements<br />
* #moco - Used for questions during internal meetings<br />
* #newsroom - Firefox and relevant tech news<br />
* #servicedesk - Internal IT support for employees<br />
* #peopleteam - People support<br />
<br />
= Credentials =<br />
There's a shared [https://1password.com/ 1Password] vault for credentials that you may need to access. Please submit a request for 1Password from [https://mozilla-hub.atlassian.net/servicedesk/customer/portal/6 ServiceDesk]. Once you have an account and the software set up (available on iOS, Android, Windows, macOS) you can be added to the team vault.<br />
<br />
= Hardware =<br />
List any hardware devices that you have assigned to you in [https://docs.google.com/document/d/1T7O7uIM05xG1k5E79GQt4ac2gl6CRpE9P1BqrU-3RS8/edit# this document]. This can be valuable if we need to identify somebody on the team that has a specific device or platform for running tests, reproducing issues, etc. You may need additional hardware such as mobile devices, laptops, etc. You can request this equipment from [https://mozilla.service-now.com/sp The Hub].<br />
<br />
= Bugzilla =<br />
You will need to create an account in Mozilla's instance of Bugzilla. See [[BMO/UserGuide]] for how to get started. It's helpful to include your Matrix/Slack handle in your name field prefixed by <code>:</code> so you can quickly be identified by other users of Bugzilla. Other details that can be helpful are your preferred pronouns, current timezone, and if you're currently on PTO. For example: <br />
<br />
Dave Hunt [:davehunt] [he/him] ⌚BST (away until 1st January 2021)<br />
<br />
== Products/Components ==<br />
The relevant components for the team are:<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=AWSY#AWSY Testing::AWSY]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=mozperftest#mozperftest Testing::mozperftest]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=Performance#Performance Testing::Performance]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=Raptor#Raptor Testing::Raptor]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=Talos#Talos Testing::Talos]<br />
<br />
== Whiteboard Entries ==<br />
The following whiteboard entries are used by the team:<br />
* [https://bugzilla.mozilla.org/buglist.cgi?status_whiteboard=%5Bperftest%3Atriage%5D&resolution=---&status_whiteboard_type=allwordssubstr <nowiki>[perftest:triage]</nowiki>] - discussion required in [[TestEngineering/Performance/Triage_Process|triage]].<br />
* [https://bugzilla.mozilla.org/buglist.cgi?status_whiteboard=%5Bperf:responsiveness%5D&resolution=---&status_whiteboard_type=allwordssubstr <nowiki>[perf:responsiveness]</nowiki>] - related to responsiveness.<br />
* [https://bugzilla.mozilla.org/buglist.cgi?status_whiteboard=%5Bperf%3Aworkflow%5D&resolution=---&status_whiteboard_type=allwordssubstr <nowiki>[perf:workflow]</nowiki>] - related to the performance workflow.<br />
<br />
== Keywords ==<br />
See [[Performance/Bugzilla#Keywords]]<br />
<br />
== Resources ==<br />
* [https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines Bug writing guidelines]<br />
* [https://bugzilla.mozilla.org/page.cgi?id=etiquette.html Bugzilla etiquette]<br />
<br />
= GitHub =<br />
If you don't already have one, you will need to [https://docs.github.com/en/get-started/onboarding/getting-started-with-your-github-account create a GitHub account] and enable [https://help.github.com/articles/about-two-factor-authentication/ two-factor authentication].<br />
<br />
We have a [https://github.com/orgs/mozilla/teams/perftest GitHub team] for simplifying access to repositories. All team members that belong to the [https://github.com/orgs/mozilla Mozilla organisation] should be added to this team as members. Team maintainers can add new members following the process [[GitHub#Team_Maintainers_.26_Project_Leads|documented here]]. Other contributors will need to be manually granted access to individual repositories as needed.<br />
<br />
= Shared folder =<br />
We have a [https://drive.google.com/drive/u/0/folders/1EyeiuJYmivvY83BCicqeR4-59HRTU-ll shared folder] in Google Drive.<br />
<br />
= Sheriffing =<br />
Performance sheriffs will need to complete the following:<br />
<br />
* Request an LDAP account<br />
* Request commit access:<br />
** Level 1: {{bug|1398609}}<br />
** Level 3: {{bug|1509284}} <br />
* Request access to Treeherder sheriff group: {{bug|1506882}}<br />
* Training (ranked)<br />
* Join the [https://groups.google.com/a/mozilla.com/forum/#!forum/perf-sheriffs perf-sheriffs] Google Group<br />
<br />
= Review policy =<br />
When you push a commit up for review, you should use the following syntax to request review from the [https://phabricator.services.mozilla.com/tag/perftest-reviewers/ perftest review group]: <br />
<br />
r=#perftest<br />
<br />
For most patches, a single r+ from one reviewer is required before the patch can be sent off for integration. More reviewers can pitch in on the same review, and in this case Lando will automatically rewrite the commit message to show who was involved in signing off the patch, for example:<br />
<br />
Bug 1546611 - Fix "None" checks when validating test manifests; r=perftest,dhunt<br />
<br />
[https://secure.phabricator.com/book/phabflavor/article/writing_reviewable_code/ See the section "Write Sensible Commit Messages" here for how to form good commit titles and summaries].<br />
<br />
When you occasionally have to single out individuals for specific topic expertise, add an exclamation mark after the nickname:<br />
<br />
r=#perftest,dhunt!<br />
<br />
This will add the patch to the shared review queue, but also block the review from landing pending Dave's approval. Requested changes by other reviewers will also block the review.<br />
<br />
Note that a [https://phabricator.services.mozilla.com/H216 Herald rule] exists that will set this group as a blocking reviewer for certain paths in the tree. This was configured via {{bug|1618249}}.<br />
<br />
== Module ownership policy ==<br />
If a patch touches code in a module owned by someone outside of the team, you must follow the [https://www.mozilla.org/en-US/about/governance/policies/module-ownership/ module ownership policy] and request review from the module owner or a peer listed in [[Modules]].<br />
<br />
= Duties =<br />
Everyone on the team will be expected to carry out the following duties to ensure effective collaboration both within the team, and with other teams.<br />
<br />
== Code reviews ==<br />
Visit [https://phabricator.services.mozilla.com/differential/ active revisions] in Phabricator every day to:<br />
* Review any '''Must Review''' and '''Ready to Review''' patches. Pay particular attention to any patches that list you as the sole reviewer. Consider adding the <code>#perftest</code> review group for a wider audience of reviewers. All review requests should receive a response (not necessarily a complete review) within 1 working day.<br />
* Follow up on any '''Waiting on Review''' patches by prompting appropriate team members for a review.<br />
* Review any '''Waiting on Authors''' patches and prompt the authors if there is an action they need to take.<br />
<br />
== Bugzilla requests ==<br />
Visit the [https://bugzilla.mozilla.org/request.cgi request queue] in Bugzilla every day and filter by your account to see all requests that have been submitted for you. Requests for P1/P2 bugs should receive a response within 1 working day. All other requests should receive a response within 5 working days.</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/NewContributors&diff=1247191TestEngineering/Performance/NewContributors2023-07-24T17:57:59Z<p>Davehunt: Redirected page to Performance/Tools/Onboarding</p>
<hr />
<div>#REDIRECT [[Performance/Tools/Onboarding]]<br />
<br />
{{DISPLAYTITLE:New Contributors to Perftest 🔥🦊⏱}}<br />
<br />
This page is aimed at people who are new to Mozilla and want to contribute to Mozilla source code related to Performance Testing. Mozilla has both git and Mercurial repositories, but this guide only describes Mercurial - there are some guides for git in the Resources section.<br />
<br />
If you run into issues or have doubts, check out the Resources section below and don’t hesitate to ask questions. :) The goal of these steps is to make sure you have the basics of your development environment working. Once you do, we can get you started with working on an actual bug, yay!<br />
<br />
== Accounts, communication ==<br />
# Set up a [https://bugzilla.mozilla.org/ Bugzilla] account (and, if you like, a [https://mozillians.org/ Mozillians] profile). Please include your Riot/IRC nickname in both of these accounts so we can work with you more easily. For example, Eve Smith would set the Bugzilla name to “Eve Smith (:esmith)”, where “esmith” is an example of a common Riot/IRC nickname pattern. You can also pick a fun nickname for yourself.<br />
# For direct communication with us it will be beneficial to set up Riot using [https://wiki.mozilla.org/Matrix these instructions].<br />
# Join our [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest] channel, and introduce yourself to the team. Check [https://wiki.mozilla.org/TestEngineering/Performance#Who_we_are this page] for who you can ping for help or to chat. We’re nice, I promise, but we might not answer right away due to different time zones, time off, etc. So please be patient.<br />
# When you want to ask a question on Riot, just go ahead and ask it even if no one appears to be around/responding. Provide lots of detail so that we have a better chance of helping you. If you don’t get an answer right away, check again in a few hours – someone may have answered you in the meantime.<br />
# If you’re having trouble reaching us over Riot, you are welcome to send an email to us instead. It’s a good idea to include your Riot nick in your email message.<br />
<br />
== Getting the code, running tests ==<br />
=== Performance Testing ===<br />
The first thing to do is to get your build environment set up. Follow the Getting Started instructions [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions#Getting_started here]. You can find more instructions [https://firefox-source-docs.mozilla.org/contributing/how_to_contribute_firefox.html here] as well. We suggest using an artifact build when you’re asked, since it speeds everything up a lot. After you have the build ready and have run `./mach run` successfully, you should be good to go. If you hit any issues getting set up, we can help you in [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest].<br />
<br />
So you’ve run Firefox locally - now what? It’s time to test it!<br />
<br />
We do performance testing, so you will be most interested in [https://wiki.mozilla.org/TestEngineering/Performance/Raptor Raptor-webext] and [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Browsertime Raptor-browsertime]. You can find more information about all of our projects [https://wiki.mozilla.org/TestEngineering/Performance#Projects here]. A simple test to start with is the following, which runs a Google page load test with Raptor-webext:<br />
./mach raptor --test raptor-tp6-google-firefox<br />
To run Raptor-browsertime on the same page you only need to add the <tt>--browsertime</tt> argument. But first you have to install browsertime locally by running:<br />
./mach browsertime --setup<br />
If everything installed correctly you can now run the following:<br />
./mach raptor --test raptor-tp6-google-firefox --browsertime<br />
Note that Raptor-browsertime is under heavy development at the moment so it’s likely that you’ll hit issues there, but you can file bugs for those or let us know about them so we (or you) can fix them.<br />
<br />
The code for these tools resides in [https://dxr.mozilla.org/mozilla-central/source/testing/raptor this folder]. Browsertime code can also be found [https://dxr.mozilla.org/mozilla-central/source/tools/browsertime here] after it’s installed.<br />
<br />
=== PerfDocs ===<br />
<br />
Even though our primary focus is on performance testing harnesses, like Raptor, we have many other projects to work on too! If you come across bugs that talk about <tt>PerfDocs</tt>, then read on for more information on how to hack on it - it's a bit simpler.<br />
<br />
This project is for building up documentation about all of our tests dynamically, and you can find it in the [https://firefox-source-docs.mozilla.org/testing/perfdocs/raptor.html Firefox Source Tree Docs]. It has two stages: a verification stage and a generation stage. The verification stage ensures that all tests are documented and that all documented tests exist. The generation stage, as you may have guessed, generates the documentation! The first stage can be run with:<br />
./mach lint -l perfdocs<br />
This should pass because we can't land patches unless perfdocs is passing. The generation stage is run by calling:<br />
./mach lint -l perfdocs --fix<br />
If no errors are found during the verification (which is always run before generation), then the documentation information is produced. The actual document that was linked above in the source tree docs is produced in continuous integration (you can also build it locally with <tt>./mach doc</tt> if you're interested).<br />
<br />
The whole system is relatively simple, and you can find the code for it in [https://searchfox.org/mozilla-central/source/tools/lint/perfdocs this folder].<br />
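As a toy illustration of the verification stage (this is not the real PerfDocs implementation, and the file and test names below are made up), the core idea is a two-way check that the set of tests and the set of documented tests match exactly:<br />

```shell
# Toy model of PerfDocs verification: every test in the manifest must be
# documented, and every documented test must exist. Hypothetical file contents.
tests=$(mktemp); docs=$(mktemp)
printf 'test-google\ntest-amazon\n' | sort > "$tests"
printf 'test-amazon\ntest-google\n' | sort > "$docs"

# After sorting, the two sets match byte-for-byte, so verification passes.
if cmp -s "$tests" "$docs"; then
  echo "perfdocs: PASS"
else
  echo "perfdocs: FAIL"
fi
```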
<br />
== Work on bugs and get code review ==<br />
Once you are familiar with the code of the test harnesses and the tests, you might want to start with your first contribution. You can follow [https://firefox-source-docs.mozilla.org/contributing/how_to_contribute_firefox.html#to-write-a-patch these instructions] on how to submit a patch. You can find review instructions that are specific to the team [https://wiki.mozilla.org/TestEngineering/Performance/Onboarding#Review_policy here].<br />
<br />
How you test a patch will change depending on what's being modified. Generally, you will be running Raptor-webext (with commands similar to those listed above) or its unit tests to test your changes, but you can ask us in [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest] if you're not sure what you should run or if you need help getting a test command working. For the patch reviewer, you can use <tt>#perftest</tt> and someone from the team will review it (or you can put whoever helped you with the patch).<br />
<br />
You can find “good-first-bugs” by looking in codetribute in the [https://codetribute.mozilla.org/projects/automation Test Automation] section, projects from our team include [https://codetribute.mozilla.org/projects/automation?project%3DTalos Talos], [https://codetribute.mozilla.org/projects/automation?project%3DRaptor Raptor], and [https://codetribute.mozilla.org/projects/automation?project%3DPerformance Performance]. Many team members also work on [https://codetribute.mozilla.org/projects/reporting Dashboards and Reporting] so that would be another good place to look. If you’re not sure what you want to hack on, ask us in #perftest - we’d be happy to help find you something. :)<br />
<br />
== Resources ==<br />
These resources might not be directly related to Performance Testing or the code we work on, but they may still contain useful information.<br />
# You can find some more information on the team in this [https://wiki.mozilla.org/TestEngineering/Performance/Onboarding onboarding page].<br />
# Search Mozilla’s code repositories with [https://searchfox.org/mozilla-central/source/testing/marionette/ searchfox] or [https://dxr.mozilla.org/mozilla-central/source/ DXR].<br />
# Another [https://ateam-bootcamp.readthedocs.org/en/latest/guide/index.html#new-contributor-guide guide for new contributors]. It has not been updated in a long time but it’s a good general resource if you ever get stuck on something. The most relevant sections to you are about Bugzilla, Mercurial, Python and the Development Process.<br />
# [https://mozilla-version-control-tools.readthedocs.org/en/latest/hgmozilla/index.html Mercurial for Mozillians]<br />
# More general resources are available in this [https://gist.github.com/mjzffr/d2adef328a416081f543 little guide] :maja_zf wrote in 2015 to help a student get started with open source contributions.<br />
# Textbook about general open source practices: [https://quaid.fedorapeople.org/TOS/Practical_Open_Source_Software_Exploration/html/index.html Practical Open Source Software Exploration]<br />
# If you’d rather use git instead of hg, see [https://github.com/glandium/git-cinnabar/wiki/Mozilla:-A-git-workflow-for-Gecko-development git workflow for Gecko development] and/or [https://sny.no/2016/03/geckogit this blog post by :ato].<br />
# A [https://wiki.mozilla.org/Firefox/Dev_Cheatsheet mercurial firefox cheat-sheet] with helpful commands that are used often.<br />
<br />
== Acknowledgements ==<br />
Much of this new contributor guide was based on the [https://firefox-source-docs.mozilla.org/testing/marionette/NewContributors.html Marionette guide] and uses a good amount of information from there.</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance&diff=1247190TestEngineering/Performance2023-07-24T17:57:11Z<p>Davehunt: Redirected page to Performance/Tools</p>
<hr />
<div>#REDIRECT [[Performance/Tools]]<br />
<br />
{{DISPLAYTITLE:Firefox Performance Test Engineering 🔥🦊⏱️}}[[File:Fxperftest.png|thumb|right]]<br />
<br />
= New contributors =<br />
<br />
If you are a new contributor, or would like to start contributing you can find a guide to help you [[/NewContributors|here]].<br />
<br />
= Where to find us =<br />
* [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest]<br />
<br />
= Team purpose =<br />
To support the infrastructure and creation of automated tests for evaluating the performance of Firefox products. This provides value by exposing gaps in coverage, revealing areas where we can make performance gains, identifying performance regressions in a timely manner, and by providing key performance metrics that assist in determining how Firefox measures against release criteria.<br />
<br />
= What we do =<br />
* Identification of gaps in performance test infrastructure and monitoring.<br />
* Designing and building performance test infrastructure and monitoring solutions.<br />
* Supporting Firefox engineers on writing performance tests.<br />
* Supporting Firefox engineers on investigating regressions identified by tests.<br />
* Collaboration with release operations on infrastructure requirements.<br />
* Standing up performance tests in continuous integration environments.<br />
* Monitoring performance test results and identifying potential regressions.<br />
* Supporting performance sheriffs with tools to assist in identifying regressions.<br />
* Developing test plans for performance testing.<br />
* Running adhoc manual or partially automated performance testing.<br />
<br />
= What we don't do =<br />
* Maintenance of infrastructure hardware.<br />
* Maintain the continuous integration pipeline.<br />
* Writing/maintaining all performance tests.<br />
<br />
= Meetings =<br />
{{/Meetings}}<br />
<br />
= Onboarding =<br />
Welcome to the team! You are encouraged to improve the [[/Onboarding|onboarding page]]. If you need to ask questions that are not already covered, please update the page so that the next person has a better onboarding experience.<br />
<br />
= Workflow =<br />
* [[/Triage Process/]]<br />
* [[/Review Process/]]<br />
<br />
= Projects =<br />
* [[/Fenix/]]<br />
* [[/Raptor/]]<br />
* [[/Raptor/Browsertime/]]<br />
* [[/Talos/]]<br />
<br />
= Results =<br />
See our {{wip|[[/Results|results page]]}}.<br />
<br />
= Resources =<br />
* [[/Glossary/]]<br />
* [https://docs.google.com/document/d/1SswqYIAm4h8vlwfMc0pfGEJwXpFECyubGDezRZHHPFE/edit Strategies for investigating intermittents]<br />
* [https://docs.google.com/document/d/1HV2_z8hwhI2w8EbURtkYjpikVG5g9QeKEPo9h5msuRs/edit Following up perf bugs]<br />
* [https://docs.google.com/document/d/103SRVVcE2SZNYP3kFXGeiVQrusH2Wj2yv8SWaDvB9SM/edit Excessive Android device queue response plan]<br />
* [[/FAQ#How_can_I_do_a_bisection.3F|Bisection Workflow]]<br />
* [[/Sheriffing/CompareView|CompareView]]</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools/Onboarding&diff=1247189Performance/Tools/Onboarding2023-07-24T17:56:03Z<p>Davehunt: Created page with "{{DISPLAYTITLE:Onboarding with Performance Tools}} This page is aimed at people who are new to Mozilla and want to contribute to Mozilla source code related to Performance Te..."</p>
<hr />
<div>{{DISPLAYTITLE:Onboarding with Performance Tools}}<br />
<br />
This page is aimed at people who are new to Mozilla and want to contribute to Mozilla source code related to Performance Testing. Mozilla has both git and Mercurial repositories, but this guide only describes Mercurial - there are some guides for git in the Resources section.<br />
<br />
If you run into issues or have doubts, check out the Resources section below and don’t hesitate to ask questions. :) The goal of these steps is to make sure you have the basics of your development environment working. Once you do, we can get you started with working on an actual bug, yay!<br />
<br />
== Accounts, communication ==<br />
# Set up a [https://bugzilla.mozilla.org/ Bugzilla] account (and, if you like, a [https://mozillians.org/ Mozillians] profile). Please include your Riot/IRC nickname in both of these accounts so we can work with you more easily. For example, Eve Smith would set the Bugzilla name to “Eve Smith (:esmith)”, where “esmith” is an example of a common Riot/IRC nickname pattern. You can also pick a fun nickname for yourself.<br />
# For direct communication with us it will be beneficial to set up Riot using [https://wiki.mozilla.org/Matrix these instructions].<br />
# Join our [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest] channel, and introduce yourself to the team. Check [https://wiki.mozilla.org/TestEngineering/Performance#Who_we_are this page] for who you can ping for help or to chat. We’re nice, I promise, but we might not answer right away due to different time zones, time off, etc. So please be patient.<br />
# When you want to ask a question on Riot, just go ahead and ask it even if no one appears to be around/responding. Provide lots of detail so that we have a better chance of helping you. If you don’t get an answer right away, check again in a few hours – someone may have answered you in the meantime.<br />
# If you’re having trouble reaching us over Riot, you are welcome to send an email to us instead. It’s a good idea to include your Riot nick in your email message.<br />
<br />
== Getting the code, running tests ==<br />
=== Performance Testing ===<br />
The first thing to do is to get your build environment set up. Follow the Getting Started instructions [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions#Getting_started here]. You can find more instructions [https://firefox-source-docs.mozilla.org/contributing/how_to_contribute_firefox.html here] as well. We suggest using an artifact build when you’re asked, since it speeds everything up a lot. After you have the build ready and have run `./mach run` successfully, you should be good to go. If you hit any issues getting set up, we can help you in [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest].<br />
<br />
So you’ve run Firefox locally - now what? It’s time to test it!<br />
<br />
We do performance testing, so you will be most interested in [https://wiki.mozilla.org/TestEngineering/Performance/Raptor Raptor-webext] and [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Browsertime Raptor-browsertime]. You can find more information about all of our projects [https://wiki.mozilla.org/TestEngineering/Performance#Projects here]. A simple test to start with is the following, which runs a Google page load test with Raptor-webext:<br />
./mach raptor --test raptor-tp6-google-firefox<br />
To run Raptor-browsertime on the same page you only need to add the <tt>--browsertime</tt> argument. But first you have to install browsertime locally by running:<br />
./mach browsertime --setup<br />
If everything installed correctly you can now run the following:<br />
./mach raptor --test raptor-tp6-google-firefox --browsertime<br />
Note that Raptor-browsertime is under heavy development at the moment so it’s likely that you’ll hit issues there, but you can file bugs for those or let us know about them so we (or you) can fix them.<br />
<br />
The code for these tools resides in [https://dxr.mozilla.org/mozilla-central/source/testing/raptor this folder]. Browsertime code can also be found [https://dxr.mozilla.org/mozilla-central/source/tools/browsertime here] after it’s installed.<br />
<br />
=== PerfDocs ===<br />
<br />
Even though our primary focus is on performance testing harnesses, like Raptor, we have many other projects to work on too! If you come across bugs that talk about <tt>PerfDocs</tt>, then read on for more information on how to hack on it - it's a bit simpler.<br />
<br />
This project is for building up documentation about all of our tests dynamically, and you can find it in the [https://firefox-source-docs.mozilla.org/testing/perfdocs/raptor.html Firefox Source Tree Docs]. It has two stages: a verification stage and a generation stage. The verification stage ensures that all tests are documented and that all documented tests exist. The generation stage, as you may have guessed, generates the documentation! The first stage can be run with:<br />
./mach lint -l perfdocs<br />
This should pass because we can't land patches unless perfdocs is passing. The generation stage is run by calling:<br />
./mach lint -l perfdocs --fix<br />
If no errors are found during the verification (which is always run before generation), then the documentation information is produced. The actual document that was linked above in the source tree docs is produced in continuous integration (you can also build it locally with <tt>./mach doc</tt> if you're interested).<br />
<br />
The whole system is relatively simple, and you can find the code for it in [https://searchfox.org/mozilla-central/source/tools/lint/perfdocs this folder].<br />
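At its core, the verification stage is a two-way set comparison between the tests that are documented and the tests that actually exist. A minimal sketch of the idea, with illustrative names (this is not the actual PerfDocs code):

```python
def verify(documented, existing):
    """Return the two kinds of verification failure:
    tests that exist but are not documented, and
    documented tests that no longer exist."""
    documented = set(documented)
    existing = set(existing)
    undocumented = existing - documented
    stale = documented - existing
    return undocumented, stale

# Both checks must come back empty for the lint to pass.
undocumented, stale = verify(
    documented=["amazon", "google", "old-test"],
    existing=["amazon", "google", "new-test"],
)
```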
<br />
== Work on bugs and get code review ==<br />
Once you are familiar with the code of the test harnesses, and the tests you might want to start with your first contribution. You can follow [https://firefox-source-docs.mozilla.org/contributing/how_to_contribute_firefox.html#to-write-a-patch these instructions] on how to submit a patch. You can find review instructions that are specific to the team [https://wiki.mozilla.org/TestEngineering/Performance/Onboarding#Review_policy here].<br />
<br />
How you test a patch will change depending on what's being modified. Generally, you will be running Raptor-webext (with commands similar to those listed above) or its unit tests to test your changes, but you can ask us in [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest] if you're not sure what you should run or if you need help getting a test command working. For the patch reviewer, you can use <tt>#perftest</tt> and someone from the team will review it (or you can put whoever helped you with the patch).<br />
<br />
You can find “good-first-bugs” by looking in codetribute in the [https://codetribute.mozilla.org/projects/automation Test Automation] section, projects from our team include [https://codetribute.mozilla.org/projects/automation?project%3DTalos Talos], [https://codetribute.mozilla.org/projects/automation?project%3DRaptor Raptor], and [https://codetribute.mozilla.org/projects/automation?project%3DPerformance Performance]. Many team members also work on [https://codetribute.mozilla.org/projects/reporting Dashboards and Reporting] so that would be another good place to look. If you’re not sure what you want to hack on, ask us in #perftest - we’d be happy to help find you something. :)<br />
<br />
== Resources ==<br />
These resources might not be directly related to Performance Testing or the code we work on, but they may have useful information for you to make use of.<br />
# You can find some more information on the team in this [https://wiki.mozilla.org/TestEngineering/Performance/Onboarding onboarding page].<br />
# Search Mozilla’s code repositories with [https://searchfox.org/mozilla-central/source/testing/marionette/ searchfox] or [https://dxr.mozilla.org/mozilla-central/source/ DXR].<br />
# Another [https://ateam-bootcamp.readthedocs.org/en/latest/guide/index.html#new-contributor-guide guide for new contributors]. It has not been updated in a long time but it’s a good general resource if you ever get stuck on something. The most relevant sections to you are about Bugzilla, Mercurial, Python and the Development Process.<br />
# [https://mozilla-version-control-tools.readthedocs.org/en/latest/hgmozilla/index.html Mercurial for Mozillians]<br />
# More general resources are available in this [https://gist.github.com/mjzffr/d2adef328a416081f543 little guide] that :maja_zf wrote in 2015 to help a student get started with open source contributions.<br />
# Textbook about general open source practices: [https://quaid.fedorapeople.org/TOS/Practical_Open_Source_Software_Exploration/html/index.html Practical Open Source Software Exploration]<br />
# If you’d rather use git instead of hg, see [https://github.com/glandium/git-cinnabar/wiki/Mozilla:-A-git-workflow-for-Gecko-development git workflow for Gecko development] and/or [https://sny.no/2016/03/geckogit this blog post by :ato].<br />
# A [https://wiki.mozilla.org/Firefox/Dev_Cheatsheet mercurial firefox cheat-sheet] with helpful commands that are used often.<br />
<br />
== Acknowledgements ==<br />
Much of this new contributor guide was based on the [https://firefox-source-docs.mozilla.org/testing/marionette/NewContributors.html Marionette guide] and uses a good amount of information from there.<br />
<br />
= Getting Started =<br />
If you're looking for a guide to help you get going, the [https://wiki.mozilla.org/TestEngineering/Performance/NewContributors new contributor page] should have all the information you need.<br />
<br />
= Meetings =<br />
{{:TestEngineering/Performance/Meetings}}<br />
<br />
= Phonebook =<br />
The [https://people.mozilla.org/ people directory] is a secure place to quickly find your team members and easily discover new ones.<br />
<br />
Please ensure your profile has:<br />
* photo (of you!)<br />
* username (this is included in your profile URL)<br />
* GitHub identity<br />
* [[#Bugzilla]] identity<br />
* Matrix nick<br />
* Slack nick<br />
<br />
= Calendar =<br />
There's a Performance [https://calendar.google.com/calendar/embed?src=mozilla.com_9bk5f2rqdeuip38jbeld84kpqc%40group.calendar.google.com shared calendar] ([https://calendar.google.com/calendar/ical/mozilla.com_9bk5f2rqdeuip38jbeld84kpqc%40group.calendar.google.com/public/basic.ics iCal]), which is primarily used for PTO. Add this calendar to your google calendar by taking the iCal link and using it in the "Add Calendar -> From URL" section.<br />
<br />
= PTO =<br />
Add any PTO to the shared calendar (see above) and [https://docs.google.com/document/d/1kHHimZH65Rg_Nzx_JfyPqeXL6ONTse0nG2t9Nkv4eOY/edit# team meeting notes] so the team is aware. During PTO please also update your name in Bugzilla's [https://bugzilla.mozilla.org/userprefs.cgi?tab=account user preferences] to indicate that you are away, and when you will return.<br />
<br />
= Communication =<br />
<br />
== Groups ==<br />
Feel free to sign up to the following groups, and post to them when you have something to share or questions to ask.<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perftest perftest] is for team communications and setting up test accounts<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/performance performance] is for general discussion and announcements<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perfteam perfteam] is for the broader performance team<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perf-sheriffs perf-sheriffs] is for discussions related to performance sheriffing<br />
* [https://groups.google.com/a/mozilla.com/forum/#!forum/perftest-alerts perftest-alerts] is for alerts related to performance tests<br />
<br />
== [[Matrix]] ==<br />
Feel free to browse through the full list at your leisure and find things that interest you. Here are some to start with:<br />
* {{matrix|perf|General performance chat}}<br />
* {{matrix|perftest|Public team channel}}<br />
* {{matrix|perfteam|Public performance team channel}}<br />
* {{matrix|perfsheriffs|Performance sheriffs channel}}<br />
<br />
== Slack ==<br />
Here are some useful Slack channels to start with:<br />
* #announcements - Global communication and announcements<br />
* #moco - Used for questions during internal meetings<br />
* #newsroom - Firefox and relevant tech news<br />
* #servicedesk - Internal IT support for employees<br />
* #peopleteam - People support<br />
<br />
= Credentials =<br />
There's a shared [https://1password.com/ 1Password] vault for credentials that you may need to access. Please submit a request for 1Password from [https://mozilla-hub.atlassian.net/servicedesk/customer/portal/6 ServiceDesk]. Once you have an account and the software set up (available on iOS, Android, Windows, macOS) you can be added to the team vault.<br />
<br />
= Hardware =<br />
List any hardware devices that you have assigned to you in [https://docs.google.com/document/d/1T7O7uIM05xG1k5E79GQt4ac2gl6CRpE9P1BqrU-3RS8/edit# this document]. This can be valuable if we need to identify somebody on the team that has a specific device or platform for running tests, reproducing issues, etc. You may need additional hardware such as mobile devices, laptops, etc. You can request this equipment from [https://mozilla.service-now.com/sp The Hub].<br />
<br />
= Bugzilla =<br />
You will need to create an account in Mozilla's instance of Bugzilla. See [[BMO/UserGuide]] for how to get started. It's helpful to include your Matrix/Slack handle in your name field prefixed by <code>:</code> so you can quickly be identified by other users of Bugzilla. Other details that can be helpful are your preferred pronouns, current timezone, and if you're currently on PTO. For example: <br />
<br />
Dave Hunt [:davehunt] [he/him] ⌚BST (away until 1st January 2021)<br />
<br />
== Products/Components ==<br />
The relevant components for the team are:<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=AWSY#AWSY Testing::AWSY]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=mozperftest#mozperftest Testing::mozperftest]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=Performance#Performance Testing::Performance]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=Raptor#Raptor Testing::Raptor]<br />
* [https://bugzilla.mozilla.org/describecomponents.cgi?product=Testing&component=Talos#Talos Testing::Talos]<br />
<br />
== Whiteboard Entries ==<br />
The following whiteboard entries are used by the team:<br />
* [https://bugzilla.mozilla.org/buglist.cgi?status_whiteboard=%5Bperftest%3Atriage%5D&resolution=---&status_whiteboard_type=allwordssubstr <nowiki>[perftest:triage]</nowiki>] - discussion required in [[TestEngineering/Performance/Triage_Process|triage]].<br />
* [https://bugzilla.mozilla.org/buglist.cgi?status_whiteboard=%5Bperf:responsiveness%5D&resolution=---&status_whiteboard_type=allwordssubstr <nowiki>[perf:responsiveness]</nowiki>] - related to responsiveness.<br />
* [https://bugzilla.mozilla.org/buglist.cgi?status_whiteboard=%5Bperf%3Aworkflow%5D&resolution=---&status_whiteboard_type=allwordssubstr <nowiki>[perf:workflow]</nowiki>] - related to the performance workflow.<br />
<br />
== Keywords ==<br />
See [[Performance/Bugzilla#Keywords]]<br />
<br />
== Resources ==<br />
* [https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines Bug writing guidelines]<br />
* [https://bugzilla.mozilla.org/page.cgi?id=etiquette.html Bugzilla etiquette]<br />
<br />
= GitHub =<br />
If you don't already have one, you will need to [https://docs.github.com/en/get-started/onboarding/getting-started-with-your-github-account create a GitHub account] and enable [https://help.github.com/articles/about-two-factor-authentication/ two-factor authentication].<br />
<br />
We have a [https://github.com/orgs/mozilla/teams/perftest GitHub team] for simplifying access to repositories. All team members that belong to the [https://github.com/orgs/mozilla Mozilla organisation] should be added to this team as members. Team maintainers can add new members following the process [[GitHub#Team_Maintainers_.26_Project_Leads|documented here]]. Other contributors will need to be manually granted access to individual repositories as needed.<br />
<br />
= Shared folder =<br />
We have a [https://drive.google.com/drive/u/0/folders/1EyeiuJYmivvY83BCicqeR4-59HRTU-ll shared folder] in Google Drive.<br />
<br />
= Sheriffing =<br />
Performance sheriffs will need to complete the following:<br />
<br />
* Request an LDAP account<br />
* Request commit access:<br />
** Level 1: {{bug|1398609}}<br />
** Level 3: {{bug|1509284}} <br />
* Request access to Treeherder sheriff group: {{bug|1506882}}<br />
* Training (ranked)<br />
* Join the [https://groups.google.com/a/mozilla.com/forum/#!forum/perf-sheriffs perf-sheriffs] Google Group<br />
<br />
= Review policy =<br />
When you push a commit up for review, you should use the following syntax to request review from the [https://phabricator.services.mozilla.com/tag/perftest-reviewers/ perftest review group]: <br />
<br />
r=#perftest<br />
<br />
For most patches, a single r+ from one reviewer is enough for the patch to be sent off for integration. More reviewers can pitch in on the same review, and in this case Lando will automatically rewrite the commit message to show who was involved in signing off on the patch, for example:<br />
<br />
Bug 1546611 - Fix "None" checks when validating test manifests; r=perftest,dhunt<br />
<br />
[https://secure.phabricator.com/book/phabflavor/article/writing_reviewable_code/ See the section "Write Sensible Commit Messages" here for how to form good commit titles and summaries].<br />
<br />
When you occasionally have to single out individuals for specific topic expertise, add an exclamation mark after the nickname:<br />
<br />
r=#perftest,dhunt!<br />
<br />
This will add the patch to the shared review queue, but also block the review from landing pending Dave's approval. Requested changes by other reviewers will also block the review.<br />
<br />
Note that a [https://phabricator.services.mozilla.com/H216 Herald rule] exists that will set this group as a blocking reviewer for certain paths in the tree. This was configured via {{bug|1618249}}.<br />
<br />
== Module ownership policy ==<br />
If a patch touches code in a module owned by someone outside of the team, you must follow the [https://www.mozilla.org/en-US/about/governance/policies/module-ownership/ module ownership policy] and request review from the module owner or a peer listed in [[Modules]].<br />
<br />
= Duties =<br />
Everyone on the team will be expected to carry out the following duties to ensure effective collaboration both within the team, and with other teams.<br />
<br />
== Code reviews ==<br />
Visit [https://phabricator.services.mozilla.com/differential/ active revisions] in Phabricator every day to:<br />
* Review any '''Must Review''' and '''Ready to Review''' patches. Pay particular attention to any patches that have yourself as the sole reviewer. Consider adding the <code>#perftest</code> review group for a wider audience of reviewers. All review requests should receive a response (not necessarily a complete review) within 1 working day.<br />
* Follow up on any '''Waiting on Review''' patches by prompting appropriate team members for a review.<br />
* Review any '''Waiting on Authors''' patches and prompt the authors if there is an action they need to take.<br />
<br />
== Bugzilla requests ==<br />
Visit the [https://bugzilla.mozilla.org/request.cgi request queue] in Bugzilla every day and filter by your account to see all requests that have been submitted for you. Requests for P1/P2 bugs should receive a response within 1 working day. All other requests should receive a response within 5 working days.</div>
<hr />
<div>{{DISPLAYTITLE:Performance Tools 🔥🦊⏱️🛠️}}[[File:Fxperftest.png|thumb|right]]<br />
<br />
= Who we are =<br />
* Carla Severe [:carla] 🇺🇸<br />
* Andrej Glavic [:aglavic] 🇨🇦<br />
* Greg Mierzwinski [:sparky] 🇨🇦<br />
* Kash Shampur [:kshampur] 🇨🇦<br />
* Adam Brouwers-Harries [:aabh] 🇬🇧<br />
* Dave Hunt [:davehunt] 🇬🇧<br />
* Julien Wajsberg [:julienw] 🇫🇷<br />
* Nazım Can Altınova [:canova] 🇩🇪<br />
* Alex Finder [:afinder] 🇷🇴<br />
* Alex Ionescu [:alexandrui] 🇷🇴<br />
* Andra Esanu [:andra.esanu] 🇷🇴<br />
* Beatrice Acasandrei [:beatrice-acasandrei] 🇷🇴<br />
<br />
= Where to find us =<br />
* [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest]<br />
* [https://chat.mozilla.org/#/room/#profiler:mozilla.org #profiler]<br />
<br />
= Team purpose =<br />
Empowering engineers with tools to continuously improve the performance of Mozilla products.<br />
<br />
= Meetings =<br />
{{/Meetings}}<br />
<br />
= Onboarding =<br />
Welcome to the team! You are encouraged to improve the [[/Onboarding|onboarding page]]. If you need to ask questions that are not already covered, please update the page so that the next person has a better onboarding experience.<br />
<br />
= Workflows =<br />
* [[/Testing/Triage|Testing Triage]]<br />
* [[/Testing/Reviews|Testing Reviews]]<br />
<br />
= Resources =<br />
* [[../Glossary]]<br />
* [https://docs.google.com/document/d/1SswqYIAm4h8vlwfMc0pfGEJwXpFECyubGDezRZHHPFE/edit Strategies for investigating intermittents]<br />
* [https://docs.google.com/document/d/1HV2_z8hwhI2w8EbURtkYjpikVG5g9QeKEPo9h5msuRs/edit Following up perf bugs]<br />
* [https://docs.google.com/document/d/103SRVVcE2SZNYP3kFXGeiVQrusH2Wj2yv8SWaDvB9SM/edit Excessive Android device queue response plan]<br />
* [[/FAQ#How_can_I_do_a_bisection.3F|Bisection Workflow]]<br />
* [[TestEngineering/Performance/Sheriffing/CompareView|CompareView]]</div>
<hr />
<div>#REDIRECT [[Performance/Tools/FAQ]]<br />
<br />
== What is Perfherder? ==<br />
[https://treeherder.mozilla.org/perf.html#/graphs Perfherder] is a tool that takes data points from log files and graphs them over time. Primarily this is used for performance data from [[TestEngineering/Performance/Talos|Talos]], but also from [[AWSY/Tests|AWSY]], build_metrics, [[EngineeringProductivity/Autophone|Autophone]] and platform_microbenchmarks. All these are test harnesses and you can find more about them [[TestEngineering/Performance/Sheriffing/Alerts|here]]. <br />
<br />
The code for Perfherder can be found inside Treeherder [https://github.com/mozilla/treeherder/ here].<br />
<br />
== How can I view details on a graph? ==<br />
When viewing Perfherder Graph details, in many cases it is obvious where the regression is. If you mouse over the data points (not click on them) you can see some raw data values.<br />
<br />
While looking for the specific changeset that caused the regression, you have to determine where the values changed. By moving the mouse over the values you can easily determine the high/low values historically to determine the normal 'range'. When you see values change, it should be obvious that the high/low values have a different 'range'.<br />
<br />
If this is hard to see, it helps to zoom in to reduce the 'y' axis. Also zooming into the 'x' axis for a smaller range of revisions yields less data points, but an easier way to see the regression.<br />
<br />
Once you find the regression point, you can click on the data point and it will lock the information as a popup. Then you can click on the revision to investigate the raw changes which were part of that.<br />
<br />
[[File:ph_Details.jpg]]<br />
<br />
Note, here you can get the date, revision, and value. These are all useful data points to be aware of while viewing graphs.<br />
<br />
Keep in mind, graph server doesn't show if there is missing data or a range of changesets.<br />
<br />
== How can I zoom on a perfherder graph? ==<br />
Perfherder graphs have the ability to adjust the date range from a drop down box. We default to 14 days, but we can change it to the last 1/2/7/14/30/90/365 days from the UI drop down.<br />
<br />
It is usually a good idea to zoom out to a 30 day view on integration branches. This allows us to see recent history as well as what the longer term trend is.<br />
<br />
There are two parts in the Perfherder graph, the top box with the trendline and the bottom viewing area with the raw data points. If you select an area in the trendline box, it will zoom to that. This is useful for adjusting the Y-axis.<br />
<br />
Here is an example of zooming in on an area:<br />
<br />
[[File:Ph_Zooming.jpg]]<br />
<br />
== How can I add more test series to a graph? ==<br />
One feature of Perfherder graphs is the ability to add up to 7 sets of data points at once and compare them on the same graph. In fact when clicking on a graph for an alert, we do this automatically when we add multiple branches at once.<br />
<br />
While looking at a graph, it is a good idea to look at that test/platform across multiple branches to see where the regression originally started at and to see if it is affected on different branches. There are 3 primary needs for adding data:<br />
* investigating branches<br />
* investigating platforms<br />
* comparing pgo/non pgo/e10s for the same test<br />
<br />
For investigating branches, click the branch name in the UI and it will pop up the "Add more test data" dialog pre-populated with the other branches which have data for this exact platform/test. All you have to do is hit add.<br />
<br />
[[File:Ph_Addbranch.jpg]]<br />
<br />
For investigating platforms, click the platform name in the UI and it will pop up the "Add more test data" dialog pre-populated with the other platforms which have data for this exact platform/test. All you have to do is hit add.<br />
<br />
[[File:Ph_Addplatform.jpg]]<br />
<br />
To do this find the link on the left hand side where the data series are located at "+Add more test data":<br />
<br />
[[File:Ph_Addmoredata.jpg]]<br />
<br />
== How can a test series be muted/hidden? ==<br />
A test series from a perfherder graph can be muted/hidden by toggling on the checkbox on the lower right of the data series from the left side panel.<br />
<br />
[[File:Ph_Muting.jpg]]<br />
<br />
== What makes branches different from one another? ==<br />
We have a variety of branches at Mozilla, here are the main ones that we see alerts on:<br />
* Mozilla-Inbound (PGO, Non-PGO)<br />
* Autoland (PGO, Non-PGO)<br />
* Mozilla-Beta (all PGO)<br />
<br />
Linux and Windows builds have [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_PGO|PGO]], OSX does not.<br />
<br />
When investigating alerts, always look for the Non-PGO branch first. Usually expect to find changes on Mozilla-Inbound (about 50%) and Autoland (50%).<br />
<br />
The volume on the branches is something to be aware of, we have higher volume on Mozilla-Inbound and Autoland, this means that alerts will be generated faster and it will be easier to track down the offending revision.<br />
<br />
A final note, Mozilla-Beta is a branch where little development takes place. The volume is really low and alerts come 5 days (or more) later. It is important to address Mozilla-Beta alerts ASAP because that is what we are shipping to customers.<br />
<br />
== What is coalescing? ==<br />
Coalescing is a term we use for when we schedule jobs to run on a given machine. When the load is high these jobs are placed in a queue, and the longer the queue gets, the more jobs we skip over. This allows us to get results on more recent changesets faster.<br />
<br />
This affects Talos numbers because we see regressions that show up over a range of more than one pushed changeset. We have to manually fill in the coalesced jobs (including builds sometimes) to ensure we blame the right changeset for the regression.<br />
<br />
Some things to be aware of:<br />
* missing test jobs - This could be as easy as waiting for jobs to finish, or scheduling the missing job assuming it was coalesced, otherwise, it could be a missing build.<br />
* missing builds - we would have to generate builds, which automatically schedules test jobs, sometimes these test jobs are coalesced and not run.<br />
* results might not be possible due to build failures, or test failures<br />
* [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_PGO|pgo builds]] are not coalesced, they just run much less frequently. Most likely a pgo build isn't the root cause<br />
<br />
Here is a view on treeherder of missing data (usually coalescing):<br />
<br />
[[File:Coalescing_markedup.png]]<br />
<br />
Note the two pushes that have no data (circled in red). If the regression happened around here, we might want to backfill those two jobs so we can ensure we are looking at the push which caused the regression instead of >1 push.<br />
<br />
== What is an uplift? ==<br />
Every [[RapidRelease/Calendar|6 weeks]] we release a new version of Firefox. When we do that, our code which developers check into the nightly branch gets uplifted (think of this as a large [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_a_merge|merge]]) to the Beta branch. Now all the code, features, and Talos regressions are on Beta.<br />
<br />
This affects the Performance Sheriffs because we will get a big pile of alerts for Mozilla-Beta. These need to be addressed rapidly. Luckily almost all the regressions seen on Mozilla-Beta will already have been tracked on Mozilla-Inbound or Autoland.<br />
<br />
== What is a merge? ==<br />
Many times each day we merge code from the integration branches into the main branch and back. This is a common process in large projects. At Mozilla, this means that the majority of the code for Firefox is checked into Mozilla-Inbound and Autoland, then it is merged into Mozilla-Central (also referred to as Firefox) and then once merged, it gets merged back into the other branches. If you want to read more about this merge procedure, here are [[Sheriffing/How_To/Merges|the details]].<br />
<br />
Here is an example of a view of what a merge looks like on [https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&revision=126a1ec5c7c5 TreeHerder]:<br />
<br />
[[File:Merge.png]]<br />
<br />
Note that the topmost revision has the commit message of: "merge m-c to m-i". This is pretty standard and you can see that there is a series of [https://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?changeset=126a1ec5c7c5 changesets], not just a few related patches.<br />
<br />
How this affects alerts is that when a regression lands on Mozilla-Inbound, it will be merged into Firefox, then Autoland. Most likely this means that you will see duplicate alerts on the other integration branch.<br />
<br />
* note: we do not generate alerts for the Firefox (Mozilla-Central) branch.<br />
<br />
== What is a backout? ==<br />
Many times we backout or hotfix code as it is causing a build failure or unittest failure. The [[Sheriffing/Sheriff_Duty|Sheriff team]] handles this process in general and backouts/hotfixes are usually done within 3 hours (i.e. we won't have [[TestEngineering/Performance/Sheriffing/Noise_FAQ#Why_do_we_need_12_future_data_points|12 future changesets]]) of the original fix. As you can imagine we could get an alert 6 hours later and go to look at the graph and see there is no regression, instead there is a temporary spike for a few data points.<br />
<br />
While looking on Treeherder for a backout, they all mention a backout in the commit message:<br />
<br />
[[File:Backout_tree.png]]<br />
<br />
* note ^ the above image mentions the bug that was backed out, sometimes it is the revision<br />
<br />
Backouts which affect [[TestEngineering/Performance/Sheriffing/Alerts|Perfherder alerts]] always generate a set of improvements and regressions. These are usually easy to spot on the graph server and we just need to annotate the set of alerts for the given revision to be a 'backout' with the bug to track what took place.<br />
<br />
Here is a view on graph server of what appears to be a backout (it could be a fix that landed quickly also):<br />
<br />
[[File:Backout_graph.png]]<br />
<br />
== What is PGO? ==<br />
PGO is [https://developer.mozilla.org/en-US/docs/Building_with_Profile-Guided_Optimization Profile Guided Optimization] where we do a build, run it to collect metrics and optimize based on the output of the metrics. We only release PGO builds, and for the integration branches we do these periodically (6 hours) or as needed. For Mozilla-Central we follow the same pattern. As the builds take considerably longer (2+ times as long) we don't do this for every commit into our integration branches.<br />
<br />
How does this affect alerts? We care most about PGO alerts- that is what we ship! Most of the time an alert will be generated for a -Non-PGO build and then a few hours or a day later we will see alerts for the PGO build.<br />
<br />
Pay close attention to the branch the alerts are on, most likely you will see it on the non-pgo branch first (i.e. Mozilla-Inbound-Non-PGO), then roughly a day later you will see a similar alert show up on the PGO branch (i.e. Mozilla-Inbound).<br />
<br />
Caveats:<br />
* OSX does not do PGO builds, so we do not have -Non-PGO branches for those platforms. (i.e. we only have Mozilla-Inbound)<br />
* PGO alerts will probably have different regression percentages, but the overall list of platforms/tests for a given revision will be almost identical<br />
<br />
== What alerts are displayed in Alert Manager? ==<br />
[https://treeherder.mozilla.org/perf.html#/alerts Perfherder Alerts] defaults to [[TestEngineering/Performance/Sheriffing/Alerts|multiple types of alerts]] that are untriaged. It is a goal to keep these lists empty! You can view alerts that are improvements or in any other state (i.e. investigating, fixed, etc.) by using the drop down at the top of the page.<br />
<br />
== Do we care about all alerts/tests? ==<br />
Yes we do. Some tests are more commonly invalid, mostly due to the noise in the tests. We also adjust the threshold per test, the default is 2%, but for Dromaeo it is 5%.<br />
If we consider a test too noisy, we consider removing it entirely.<br />
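As an illustration, the per-test threshold check described above could be sketched like this (the 2% default and 5% Dromaeo values are from the text; the function itself is hypothetical, not Perfherder code):

```python
# Per-test alert thresholds as relative change; anything not
# listed falls back to the 2% default mentioned above.
THRESHOLDS = {"dromaeo": 0.05}
DEFAULT_THRESHOLD = 0.02

def exceeds_threshold(test, old_value, new_value):
    """True if the relative change is large enough to alert on."""
    change = abs(new_value - old_value) / old_value
    return change >= THRESHOLDS.get(test, DEFAULT_THRESHOLD)

exceeds_threshold("tp5", 100, 103)      # 3% change vs 2% default
exceeds_threshold("dromaeo", 100, 103)  # 3% change vs 5% threshold
```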
<br />
Here are some platforms/tests which are exceptions about what we run:<br />
* Linux 64bit - the only platform which we run dromaeo_dom<br />
* Linux 32/64bit - the only platform in which no [[TestEngineering/Performance/Sheriffing/Alerts#platform_microbench|platform_microbench]] test runs, due to high noise levels<br />
* Windows 7 - the only platform that supports xperf (toolchain is only installed there)<br />
* Windows 7/10 - heavy profiles don't run here, because they take too long while cloning the big profiles; these are tp6 tests that use heavy user profiles<br />
<br />
Lastly, we should prioritize alerts on the Mozilla-Beta branch since those are affecting more people.<br />
<br />
== What does a regression look like on the graph? ==<br />
On almost all of our tests, we are measuring based on time. This means that the lower the score the better. Whenever the graph increases in value that is a regression.<br />
<br />
Here is a view of a regression:<br />
<br />
[[File:Regression.png]]<br />
<br />
We have some tests which measure internal metrics. A few of those are actually reported where a higher score is better. This is confusing, but we refer to these as reverse tests. The list of tests which are reverse are:<br />
* canvasmark<br />
* dromaeo_css<br />
* dromaeo_dom<br />
* rasterflood_gradient<br />
* speedometer<br />
* tcanvasmark<br />
* v8 version 7<br />
<br />
Here is a view of a reverse regression:<br />
<br />
[[File:Reverse_regression.png]]<br />
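Whether a change counts as a regression therefore depends on the direction of the test. A small sketch of that rule, using the reverse-test list above (illustrative only):

```python
# Tests where a higher score is better ("reverse" tests, per the
# list above); for all other tests, lower (faster) is better.
REVERSE_TESTS = {"canvasmark", "dromaeo_css", "dromaeo_dom",
                 "rasterflood_gradient", "speedometer",
                 "tcanvasmark", "v8 version 7"}

def is_regression(test, old_value, new_value):
    """For time-based tests an increase is a regression;
    for reverse tests a decrease is."""
    if test in REVERSE_TESTS:
        return new_value < old_value
    return new_value > old_value
```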
<br />
== Why does Alert Manager print -xx% ? ==<br />
The alert will either be a regression or an improvement. For the alerts we show by default, it is regressions only. It is important to know the severity of an alert. For example a 3% regression is important to understand, but a 30% regression probably needs to be fixed ASAP. This is annotated as XX% in the UI. There are no + or - signs to indicate improvement or regression; this is an absolute number. Use the bar graph to the side to determine which type of alert this is.<br />
<br />
NOTE: for the reverse tests we take that into account, so the bar graph will know to look in the correct direction.<br />
<br />
== What is noise? ==<br />
Generally a test reports values that are in a range instead of a consistent value. The larger the range of 'normal' results, the more noise we have.<br />
<br />
Some tests will post results in a small range, and when we get a data point significantly outside the range, it is easy to identify.<br />
<br />
The problem is that many tests have a large range of expected results (we call them unstable). It makes it hard to determine what a regression is when we might have a range of ±4% from the median and a 3% regression. It is obvious in the graph over time, but hard to tell until you have many future data points.<br />
<br /><br />
[[File:Noisy graph.png|Noisy graph]]<br />
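A toy example (an assumption for illustration, not the sheriffing algorithm) of why a 3% change is hard to call inside a ±4% noise band:

```python
# Toy illustration: a 3% change hiding inside +/-4% noise.
import statistics

def noise_band_pct(history):
    """Half-width of the historical range as a percentage of the median."""
    med = statistics.median(history)
    return (max(history) - min(history)) / 2 / med * 100

history = [96.0, 100.0, 104.0, 98.0, 102.0]  # 'normal' range, roughly +/-4%
new_point = 103.0                            # a hypothetical 3% regression

print(noise_band_pct(history))    # 4.0
print(new_point <= max(history))  # True: still inside the historical range
```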
<br />
== What are low value tests? ==<br />
In the context of noise, "low value" means that the regression magnitude is too small relative to the noise of the test, so it is hard to tell which particular bug/commit caused it; we can only identify a range.<br />
<br /><br />
From a sheriffing perspective, these often end up as WONTFIX/INVALID, or the tests are considered unreliable, not relevant to the current Firefox revision, etc.<br />
<br /><br />
[[File:Noisy low value graph.png.png|Noisy low value graph]]<br />
<br />
== Why can we not trust a single data point? ==<br />
This is a problem we have dealt with for years with no perfect answer. Some reasons we do know are:<br />
* the test is noisy due to timing, diskIO, etc.<br />
* the specific machine might have slight differences<br />
* sometimes we have longer waits starting the browser or a pageload hang for a couple extra seconds<br />
<br />
The short answer is we don't know and have to work within the constraints we do know.<br />
<br />
== Why do we need 12 future data points? ==<br />
We are re-evaluating our assertions here, but the more data points we have, the more confidence we have in the analysis of the raw data to point out a specific change.<br />
<br />
This causes problems when we land code on Mozilla-Beta, where it can take 10 days to get 12 data points. We sometimes rerun tests; just retriggering a job will provide more data points to help us generate an alert if needed.<br />
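To see why more future data points increase confidence, here is a toy comparison (illustrative only; this is not Perfherder's actual analysis) of a 3-point window against a 12-point window using a hand-rolled Welch t statistic:

```python
# Toy illustration: more future data points -> clearer separation from noise.
import statistics

def t_stat(before, after):
    """Welch's t statistic: mean shift measured in units of combined noise."""
    mb, ma = statistics.mean(before), statistics.mean(after)
    vb = statistics.variance(before) / len(before)
    va = statistics.variance(after) / len(after)
    return (ma - mb) / (vb + va) ** 0.5

before = [100, 102, 98, 101, 99, 100, 102, 98, 101, 99, 100, 101]
few    = [104, 99, 103]                                       # only 3 future points
many   = few + [103, 104, 102, 100, 104, 103, 102, 104, 103]  # 12 future points

print(t_stat(before, few) < t_stat(before, many))  # True: more points, more confidence
```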
<br />
== Can't we do smarter analysis to reduce noise? ==<br />
Yes, we can. We have other projects, and a [https://wiki.mozilla.org/images/c/c0/Larres-thesis.pdf master's thesis] has been written on this subject. The reality is that we will still need future data points to show a trend, and depending on the source of the data we will need different algorithms to analyze it.<br />
<br />
== How can duplicate alerts be identified? ==<br />
One problem with [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_coalescing|coalescing]] is that we sometimes generate an original alert on a range of changes, then when we fill in the data (backfilling/retriggering) we generate new alerts. This causes confusion while looking at the alerts.<br />
<br />
Here are some scenarios in which duplication will be seen:<br />
* backfilling data from [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_coalescing|coalescing]], you will see a similar alert on the same branch/platform/test but a different revision<br />
** action: reassign the alerts to the original alert summary so all related alerts are in one place!<br />
* we [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_a_merge|merge]] changesets between branches<br />
** action: find the original alert summary on the upstream branch and mark the specific alert as downstream to that alert summary<br />
* [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_PGO|pgo]] builds<br />
** action: reassign these to the non-pgo alert summary (if one exists), or downstream to the correct alert summary if this originally happened on another branch<br />
<br />
In Alert Manager it is good to acknowledge the alert and use the reassign or downstream actions. This helps us keep track of alerts across branches whenever we need to investigate in the future.<br />
<br />
== What are weekend spikes? ==<br />
On weekends (Saturday/Sunday) and many holidays, we find that the volume of pushes is much smaller. This results in far fewer tests being run. For many tests, especially the noisier ones, we find that the few data points we collect on a [https://elvis314.wordpress.com/2014/10/30/a-case-of-the-weekends/ weekend are much less noisy] (either falling to the top or bottom of the noise range).<br />
<br />
Here is an example view of data that behaves differently on weekends:<br />
<br />
[[File:Weekends_example.png]]<br />
<br />
This affects the Talos Sheriff because on Monday, when our volume of pushes picks up, we get a larger range of values. Due to the way we calculate a regression, we see a shift in our expected range on Monday. Usually these alerts are generated Monday evening/Tuesday morning. These are typically small regressions (<3%) and on the noisier tests.<br />
<br />
== What is a multi-modal test? ==<br />
Many tests are bi-modal or multi-modal. This means that they have a consistent set of values, but 2 or 3 of them. Instead of having a bunch of scattered values between the low and high, you will have 2 values, the lower one and the higher one.<br />
<br />
Here is an example of a graph that has two sets of values (with random ones scattered in between):<br />
<br />
[[File:Modal_example.png]]<br />
<br />
This affects the alerts and results because sometimes we get a series of results that are less modal than the original. Of course this generates an alert, and a day later you will probably see that we are back to the original x-modal pattern seen historically. Some of this is affected by the weekends.<br />
<br />
== What is random noise? ==<br />
Random noise refers to data points that don't fit the trend of the test's graph. They occur because of various uncontrollable factors (this is assumed) or because the test is unstable.<br />
<br />
== How do I identify the current Firefox release meta-bug? ==<br />
To easily track all open regressions, a meta-bug is created for every Firefox release; it depends on the open regression bugs.<br />
[[File:Advanced search.png|Advanced search]]<br /><br />
<br />
To find all the Firefox release meta-bugs you just have to search in Advanced search for bugs with:<br />
[[File:Firefox 70 meta.png|Firefox 70 meta]]<br /><br /><br />
<br />
'''Product:''' Testing<br /><br />
'''Component:''' Performance<br /><br />
'''Summary:''' Contains all of the strings [meta] Firefox, Perfherder Regression Tracking Bug<br />
You can leave the rest of the fields as they are.<br />
[[File:Advanced search filter.png|1200px|Advanced search filter]]<br /><br />
<br /><br />
<br />
Result:<br /><br />
[[File:Firefox metabugs.png|1200px|Firefox metabugs]]<br />
<br />
== How do I search for an already open regression? ==<br />
Sometimes Treeherder includes alerts related to a test in the same summary, and sometimes it doesn't. To make sure that the regression you found doesn't already have a bug open, search the current Firefox release meta-bug for open regressions with a summary similar to that of your alert. Usually, if the test name matches, it might be what you're looking for. But be careful: a matching test name doesn't guarantee it. You need to check thoroughly.<br /><br />
<br />
Those situations appear because a regression appears first on one repo (e.g. autoland) and it takes a few days until the causing commit gets merged to other repos (inbound, beta, central).<br />
<br /><br />
<br />
== How do I follow up on open regressions reported by me? ==<br />
You can follow up on all the open regression bugs created by you by searching in [https://bugzilla.mozilla.org/query.cgi?format=advanced Advanced search] for bugs with:<br />
<br /><br />
'''Summary:''' contains all of the strings > regression on push<br />
<br /><br />
'''Status:''' NEW, ASSIGNED, REOPENED<br /><br />
<br /><br />
[[File:Advanced search for perf regressions.png|1200px|Advanced search for perf regressions]]<br />
<br /><br />
'''Keywords:''' perf, perf-alert, regression<br />
<br /><br />
'''Type:''' defect<br />
<br /><br />
[[File:Advanced search for perf regressions type.png|700px|Advanced search for perf regressions type]]<br />
<br /><br />
'''Search by People:''' ''The reporter is'' > [your email]<br /><br />
[[File:Advanced search for perf regressions by people.png|200px|Advanced search for perf regressions by people]]<br />
<br /><br />
<br /><br />
And you will get the list of all open regressions reported by you:<br />
<br /><br />
[[File:Advanced search results.png|1200px|Advanced search results]]<br />
<br />
== How can I do a bisection? ==<br />
If you're investigating a regression/improvement but for some reason it happened in a revision interval where the jobs aren't able to run or the revision contains multiple commits (this happens more often on mozilla-beta), you need to do a bisection in order to find the exact culprit. We usually adopt the binary search method. Say you have the revisions:<br />
* abcde1 - first regressed/improved value<br />
* abcde2<br />
* abcde3<br />
* abcde4<br />
* abcde5 - last good value<br />
<br />
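The binary-search idea over a range like this can be sketched as follows (illustrative only; in practice each check is a try push compared against the baseline):

```python
# Illustrative binary search over a suspect range (newest revision first).
# In practice each check is a try push compared against the baseline push.
def bisect_revisions(revisions, is_regressed):
    """revisions[0] shows the first bad value, revisions[-1] the last good one."""
    lo, hi = 0, len(revisions) - 1        # lo: known bad, hi: known good
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_regressed(revisions[mid]):
            lo = mid                      # regression already present here
        else:
            hi = mid                      # still good, so the culprit is newer
    return revisions[lo]                  # oldest revision showing the regression

revs = ["abcde1", "abcde2", "abcde3", "abcde4", "abcde5"]
bad = {"abcde1", "abcde2"}                # hypothetical: abcde2 is the culprit
print(bisect_revisions(revs, lambda r: r in bad))  # abcde2
```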
Bisection steps:<br />
# check out the repository you're investigating:<br />
## hg checkout autoland (if you don't have it locally you need to do > hg pull autoland && hg update autoland)<br />
# hg checkout abcde5<br />
## ./mach try fuzzy --full -q=^investigated-test-signature -m=baseline_abcde5_alert_###### (you will know that the baseline contains the reference value)<br />
# hg checkout abcde3<br />
## let's assume that build abcde4 broke the tests; you need to back it out in order to get the values of your investigated test on try:<br />
### hg backout -r abcde4<br />
## ./mach try fuzzy --full -q=^investigated-test-signature -m=abcde4_alert_###### (the baseline keyword is included just in the reference push message)<br />
## Use the [https://treeherder.mozilla.org/perf.html#/comparechooser comparechooser] to compare between the 2 pushes.<br />
# If the try values between abcde5 and abcde3 don't include the delta, then you'll know that abcde1 or abcde2 are suspects so you need to repeat the step you did for abcde3 to find out.</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools/FAQ&diff=1247186Performance/Tools/FAQ2023-07-24T16:26:09Z<p>Davehunt: Created page with "{{DISPLAYTITLE:Performance Tools FAQ}} {{DISPLAYTITLE:Performance Tools FAQ}} = What is Perfherder? = [https://treeherder.mozilla.org/perf.html#/graphs Perfherder] is a tool..."</p>
<hr />
<div>{{DISPLAYTITLE:Performance Tools FAQ}}<br />
<br />
= What is Perfherder? =<br />
[https://treeherder.mozilla.org/perf.html#/graphs Perfherder] is a tool that takes data points from log files and graphs them over time. Primarily this is used for performance data from [[TestEngineering/Performance/Talos|Talos]], but also from [[AWSY/Tests|AWSY]], build_metrics, [[EngineeringProductivity/Autophone|Autophone]] and platform_microbenchmarks. All these are test harnesses and you can find more about them [[TestEngineering/Performance/Sheriffing/Alerts|here]]. <br />
<br />
The code for Perfherder can be found inside Treeherder [https://github.com/mozilla/treeherder/ here].<br />
<br />
= How can I view details on a graph? =<br />
When viewing Perfherder Graph details, in many cases it is obvious where the regression is. If you mouse over the data points (not click on them) you can see some raw data values.<br />
<br />
While looking for the specific changeset that caused the regression, you have to determine where the values changed. By moving the mouse over the values you can easily determine the high/low values historically to determine the normal 'range'. When you see values change, it should be obvious that the high/low values have a different 'range'.<br />
<br />
If this is hard to see, it helps to zoom in to reduce the 'y' axis range. Zooming into the 'x' axis for a smaller range of revisions also yields fewer data points, making it easier to see the regression.<br />
<br />
Once you find the regression point, you can click on the data point and it will lock the information as a popup. Then you can click on the revision to investigate the raw changes which were part of that.<br />
<br />
[[File:ph_Details.jpg]]<br />
<br />
Note, here you can get the date, revision, and value. These are all useful data points to be aware of while viewing graphs.<br />
<br />
Keep in mind, graph server doesn't show if there is missing data or a range of changesets.<br />
<br />
= How can I zoom on a perfherder graph? =<br />
Perfherder graphs have the ability to adjust the date range from a drop-down box. We default to 14 days, but we can change it to the last 1/2/7/14/30/90/365 days from the UI drop-down.<br />
<br />
It is usually a good idea to zoom out to a 30 day view on integration branches. This allows us to see recent history as well as what the longer term trend is.<br />
<br />
There are two parts in the Perfherder graph, the top box with the trendline and the bottom viewing area with the raw data points. If you select an area in the trendline box, it will zoom to that. This is useful for adjusting the Y-axis.<br />
<br />
Here is an example of zooming in on an area:<br />
<br />
[[File:Ph_Zooming.jpg]]<br />
<br />
= How can I add more test series to a graph? =<br />
One feature of Perfherder graphs is the ability to add up to 7 sets of data points at once and compare them on the same graph. In fact when clicking on a graph for an alert, we do this automatically when we add multiple branches at once.<br />
<br />
While looking at a graph, it is a good idea to look at that test/platform across multiple branches to see where the regression originally started at and to see if it is affected on different branches. There are 3 primary needs for adding data:<br />
* investigating branches<br />
* investigating platforms<br />
* comparing pgo/non pgo/e10s for the same test<br />
<br />
For investigating branches, click the branch name in the UI and it will pop up the "Add more test data" dialog pre-populated with the other branches which have data for this exact platform/test. All you have to do is hit add.<br />
<br />
[[File:Ph_Addbranch.jpg]]<br />
<br />
For investigating platforms, click the platform name in the UI and it will pop up the "Add more test data" dialog pre-populated with the other platforms which have data for this exact branch/test. All you have to do is hit add.<br />
<br />
[[File:Ph_Addplatform.jpg]]<br />
<br />
To do this, find the "+Add more test data" link on the left-hand side where the data series are located:<br />
<br />
[[File:Ph_Addmoredata.jpg]]<br />
<br />
= How can a test series be muted/hidden? =<br />
A test series on a Perfherder graph can be muted/hidden by toggling the checkbox at the lower right of the data series in the left-side panel.<br />
<br />
[[File:Ph_Muting.jpg]]<br />
<br />
= What makes branches different from one another? =<br />
We have a variety of branches at Mozilla, here are the main ones that we see alerts on:<br />
* Mozilla-Inbound (PGO, Non-PGO)<br />
* Autoland (PGO, Non-PGO)<br />
* Mozilla-Beta (all PGO)<br />
<br />
Linux and Windows builds have [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_PGO|PGO]], OSX does not.<br />
<br />
When investigating alerts, always look for the Non-PGO branch first. Usually expect to find changes on Mozilla-Inbound (about 50%) and Autoland (50%).<br />
<br />
The volume on the branches is something to be aware of: we have higher volume on Mozilla-Inbound and Autoland, which means that alerts will be generated faster and it will be easier to track down the offending revision.<br />
<br />
A final note: Mozilla-Beta is a branch where little development takes place. The volume is really low and alerts come 5 days (or more) later. It is important to address Mozilla-Beta alerts ASAP because that is what we are shipping to customers.<br />
<br />
= What is coalescing? =<br />
Coalescing is a term we use for how we schedule jobs to run on a given machine. When load is high, jobs are placed in a queue, and the longer the queue gets, the more jobs we skip over. This allows us to get results on more recent changesets faster.<br />
<br />
This affects Talos numbers, as we see regressions which show up over a range of more than one pushed changeset. We have to manually fill in the coalesced jobs (sometimes including builds) to ensure we blame the right changeset for the regression.<br />
<br />
Some things to be aware of:<br />
* missing test jobs - this could be as easy as waiting for jobs to finish, or scheduling the missing job assuming it was coalesced; otherwise, it could be a missing build.<br />
* missing builds - we would have to generate builds, which automatically schedules test jobs, sometimes these test jobs are coalesced and not run.<br />
* results might not be possible due to build failures, or test failures<br />
* [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_PGO|pgo builds]] are not coalesced, they just run much less frequently. Most likely a pgo build isn't the root cause<br />
<br />
Here is a view on treeherder of missing data (usually coalescing):<br />
<br />
[[File:Coalescing_markedup.png]]<br />
<br />
Note the two pushes that have no data (circled in red). If the regression happened around here, we might want to backfill those two jobs so we can ensure we are looking at the push which caused the regression instead of >1 push.<br />
<br />
= What is an uplift? =<br />
Every [[RapidRelease/Calendar|6 weeks]] we release a new version of Firefox. When we do that, the code which developers check into the nightly branch gets uplifted (think of this as a large [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_a_merge|merge]]) to the Beta branch. Now all the code, features, and Talos regressions are on Beta.<br />
<br />
This affects the Performance Sheriffs because we will get a big pile of alerts for Mozilla-Beta. These need to be addressed rapidly. Luckily almost all the regressions seen on Mozilla-Beta will already have been tracked on Mozilla-Inbound or Autoland.<br />
<br />
= What is a merge? =<br />
Many times each day we merge code from the integration branches into the main branch and back. This is a common process in large projects. At Mozilla, this means that the majority of the code for Firefox is checked into Mozilla-Inbound and Autoland, then it is merged into Mozilla-Central (also referred to as Firefox) and then once merged, it gets merged back into the other branches. If you want to read more about this merge procedure, here are [[Sheriffing/How_To/Merges|the details]].<br />
<br />
Here is an example of a view of what a merge looks like on [https://treeherder.mozilla.org/#/jobs?repo=mozilla-inbound&revision=126a1ec5c7c5 TreeHerder]:<br />
<br />
[[File:Merge.png]]<br />
<br />
Note that the topmost revision has the commit message: "merge m-c to m-i". This is pretty standard, and you can see that there is a series of [https://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?changeset=126a1ec5c7c5 changesets], not just a few related patches.<br />
<br />
How this affects alerts is that when a regression lands on Mozilla-Inbound, it will be merged into Firefox, then Autoland. Most likely this means that you will see duplicate alerts on the other integration branch.<br />
<br />
* note: we do not generate alerts for the Firefox (Mozilla-Central) branch.<br />
<br />
= What is a backout? =<br />
Many times we back out or hotfix code when it is causing a build or unittest failure. The [[Sheriffing/Sheriff_Duty|Sheriff team]] handles this process in general, and backouts/hotfixes are usually done within 3 hours (i.e. we won't have [[TestEngineering/Performance/Sheriffing/Noise_FAQ#Why_do_we_need_12_future_data_points|12 future changesets]]) of the original landing. As you can imagine, we could get an alert 6 hours later, go to look at the graph, and see that there is no regression; instead there is a temporary spike for a few data points.<br />
<br />
While looking on Treeherder for a backout, they all mention a backout in the commit message:<br />
<br />
[[File:Backout_tree.png]]<br />
<br />
* note ^ the above image mentions the bug that was backed out; sometimes it is the revision<br />
<br />
Backouts which affect [[TestEngineering/Performance/Sheriffing/Alerts|Perfherder alerts]] always generate a set of improvements and regressions. These are usually easy to spot on the graph server and we just need to annotate the set of alerts for the given revision to be a 'backout' with the bug to track what took place.<br />
<br />
Here is a view on graph server of what appears to be a backout (it could be a fix that landed quickly also):<br />
<br />
[[File:Backout_graph.png]]<br />
<br />
= What is PGO? =<br />
PGO is [https://developer.mozilla.org/en-US/docs/Building_with_Profile-Guided_Optimization Profile Guided Optimization] where we do a build, run it to collect metrics and optimize based on the output of the metrics. We only release PGO builds, and for the integration branches we do these periodically (6 hours) or as needed. For Mozilla-Central we follow the same pattern. As the builds take considerably longer (2+ times as long) we don't do this for every commit into our integration branches.<br />
<br />
How does this affect alerts? We care most about PGO alerts: that is what we ship! Most of the time an alert will be generated for a Non-PGO build, and then a few hours or a day later we will see alerts for the PGO build.<br />
<br />
Pay close attention to the branch the alerts are on, most likely you will see it on the non-pgo branch first (i.e. Mozilla-Inbound-Non-PGO), then roughly a day later you will see a similar alert show up on the PGO branch (i.e. Mozilla-Inbound).<br />
<br />
Caveats:<br />
* OSX does not do PGO builds, so we do not have -Non-PGO branches for those platforms. (i.e. we only have Mozilla-Inbound)<br />
* PGO alerts will probably have different regression percentages, but the overall list of platforms/tests for a given revision will be almost identical<br />
<br />
= What alerts are displayed in Alert Manager? =<br />
[https://treeherder.mozilla.org/perf.html#/alerts Perfherder Alerts] defaults to [[TestEngineering/Performance/Sheriffing/Alerts|multiple types of alerts]] that are untriaged. It is a goal to keep these lists empty! You can view alerts that are improvements or in any other state (i.e. investigating, fixed, etc.) by using the drop down at the top of the page.<br />
<br />
= Do we care about all alerts/tests? =<br />
Yes, we do. Some tests are more commonly invalid, mostly due to noise. We also adjust the threshold per test; the default is 2%, but for Dromaeo it is 5%.<br />
If we consider a test too noisy, we consider removing it entirely.<br />
<br />
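The per-test threshold described above might be sketched like this (the function and table are hypothetical; only the 2% default and the 5% Dromaeo threshold come from the text):

```python
# Hypothetical sketch; only the 2% default and 5% Dromaeo values are from the text.
DEFAULT_THRESHOLD_PCT = 2.0
TEST_THRESHOLDS_PCT = {"dromaeo_css": 5.0, "dromaeo_dom": 5.0}

def should_alert(test_name, pct_change):
    """Alert only when the magnitude exceeds the test's own threshold."""
    threshold = TEST_THRESHOLDS_PCT.get(test_name, DEFAULT_THRESHOLD_PCT)
    return abs(pct_change) >= threshold

print(should_alert("tp5o", 3.0))         # True: over the 2% default
print(should_alert("dromaeo_dom", 3.0))  # False: under Dromaeo's 5%
```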
Here are some platforms/tests which are exceptions to what we run:<br />
* Linux 64bit - the only platform which we run dromaeo_dom<br />
* Linux 32/64bit - the only platform in which no [[TestEngineering/Performance/Sheriffing/Alerts#platform_microbench|platform_microbench]] test runs, due to high noise levels<br />
* Windows 7 - the only platform that supports xperf (toolchain is only installed there)<br />
* Windows 7/10 - heavy profiles don't run here, because they take too long while cloning the big profiles; these are tp6 tests that use heavy user profiles<br />
<br />
Lastly, we should prioritize alerts on the Mozilla-Beta branch since those are affecting more people.<br />
<br />
= What does a regression look like on the graph? =<br />
On almost all of our tests we are measuring time, which means that the lower the score, the better. Whenever the graph increases in value, that is a regression.<br />
<br />
Here is a view of a regression:<br />
<br />
[[File:Regression.png]]<br />
<br />
We have some tests which measure internal metrics. A few of those are reported such that a higher score is better. This is confusing, so we refer to these as reverse tests. The tests which are reversed are:<br />
* canvasmark<br />
* dromaeo_css<br />
* dromaeo_dom<br />
* rasterflood_gradient<br />
* speedometer<br />
* tcanvasmark<br />
* v8 version 7<br />
<br />
Here is a view of a reverse regression:<br />
<br />
[[File:Reverse_regression.png]]<br />
<br />
= Why does Alert Manager print -xx% ? =<br />
The alert will either be a regression or an improvement. For the alerts we show by default, it is regressions only. It is important to know the severity of an alert: for example, a 3% regression is important to understand, but a 30% regression probably needs to be fixed ASAP. This is annotated as an XX% in the UI. There are no + or - signs to indicate improvement or regression; this is an absolute number. Use the bar graph to the side to determine which type of alert this is.<br />
<br />
NOTE: for the reverse tests we take that into account, so the bar graph will know to look in the correct direction.<br />
<br />
= What is noise? =<br />
Generally a test reports values that are in a range instead of a consistent value. The larger the range of 'normal' results, the more noise we have.<br />
<br />
Some tests will post results in a small range, and when we get a data point significantly outside the range, it is easy to identify.<br />
<br />
The problem is that many tests have a large range of expected results (we call them unstable). This makes it hard to determine what a regression is: we might have a range of ±4% from the median and a 3% regression. It is obvious in the graph over time, but hard to tell until you have many future data points.<br />
<br /><br />
[[File:Noisy graph.png|Noisy graph]]<br />
<br />
= What are low value tests? =<br />
In the context of noise, "low value" means that the regression magnitude is too small relative to the noise of the test, so it is hard to tell which particular bug/commit caused it; we can only identify a range.<br />
<br /><br />
From a sheriffing perspective, these often end up as WONTFIX/INVALID, or the tests are considered unreliable, not relevant to the current Firefox revision, etc.<br />
<br /><br />
[[File:Noisy low value graph.png.png|Noisy low value graph]]<br />
<br />
= Why can we not trust a single data point? =<br />
This is a problem we have dealt with for years with no perfect answer. Some reasons we do know are:<br />
* the test is noisy due to timing, diskIO, etc.<br />
* the specific machine might have slight differences<br />
* sometimes we have longer waits starting the browser or a pageload hang for a couple extra seconds<br />
<br />
The short answer is we don't know and have to work within the constraints we do know.<br />
<br />
= Why do we need 12 future data points? =<br />
We are re-evaluating our assertions here, but the more data points we have, the more confidence we have in the analysis of the raw data to point out a specific change.<br />
<br />
This causes problems when we land code on Mozilla-Beta, where it can take 10 days to get 12 data points. We sometimes rerun tests; just retriggering a job will provide more data points to help us generate an alert if needed.<br />
<br />
= Can't we do smarter analysis to reduce noise? =<br />
Yes, we can. We have other projects, and a [https://wiki.mozilla.org/images/c/c0/Larres-thesis.pdf master's thesis] has been written on this subject. The reality is that we will still need future data points to show a trend, and depending on the source of the data we will need different algorithms to analyze it.<br />
<br />
= How can duplicate alerts be identified? =<br />
One problem with [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_coalescing|coalescing]] is that we sometimes generate an original alert on a range of changes, then when we fill in the data (backfilling/retriggering) we generate new alerts. This causes confusion while looking at the alerts.<br />
<br />
Here are some scenarios in which duplication will be seen:<br />
* backfilling data from [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_coalescing|coalescing]], you will see a similar alert on the same branch/platform/test but a different revision<br />
** action: reassign the alerts to the original alert summary so all related alerts are in one place!<br />
* we [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_a_merge|merge]] changesets between branches<br />
** action: find the original alert summary on the upstream branch and mark the specific alert as downstream to that alert summary<br />
* [[TestEngineering/Performance/Sheriffing/Tree_FAQ#What_is_PGO|pgo]] builds<br />
** action: reassign these to the non-pgo alert summary (if one exists), or downstream to the correct alert summary if this originally happened on another branch<br />
<br />
In Alert Manager it is good to acknowledge the alert and use the reassign or downstream actions. This helps us keep track of alerts across branches whenever we need to investigate in the future.<br />
<br />
= What are weekend spikes? =<br />
On weekends (Saturday/Sunday) and many holidays, we find that the volume of pushes is much smaller. This results in far fewer tests being run. For many tests, especially the noisier ones, we find that the few data points we collect on a [https://elvis314.wordpress.com/2014/10/30/a-case-of-the-weekends/ weekend are much less noisy] (either falling to the top or bottom of the noise range).<br />
<br />
Here is an example view of data that behaves differently on weekends:<br />
<br />
[[File:Weekends_example.png]]<br />
<br />
This affects the Talos Sheriff because on Monday, when our volume of pushes picks up, we get a larger range of values. Due to the way we calculate a regression, we see a shift in our expected range on Monday. Usually these alerts are generated Monday evening/Tuesday morning. These are typically small regressions (<3%) and on the noisier tests.<br />
<br />
= What is a multi-modal test? =<br />
Many tests are bi-modal or multi-modal. This means that they have a consistent set of values, but 2 or 3 of them. Instead of having a bunch of scattered values between the low and high, you will have 2 values, the lower one and the higher one.<br />
<br />
Here is an example of a graph that has two sets of values (with random ones scattered in between):<br />
<br />
[[File:Modal_example.png]]<br />
<br />
This affects the alerts and results because sometimes we get a series of results that are less modal than the original. Of course this generates an alert, and a day later you will probably see that we are back to the original x-modal pattern seen historically. Some of this is affected by the weekends.<br />
<br />
= What is random noise? =<br />
Random noise refers to data points that don't fit the trend of the test's graph. They occur because of various uncontrollable factors (this is assumed) or because the test is unstable.<br />
<br />
= How do I identify the current Firefox release meta-bug? =<br />
To easily track all open regressions, a meta-bug is created for every Firefox release; it depends on the open regression bugs.<br />
[[File:Advanced search.png|Advanced search]]<br /><br />
<br />
To find all the Firefox release meta-bugs you just have to search in Advanced search for bugs with:<br />
[[File:Firefox 70 meta.png|Firefox 70 meta]]<br /><br /><br />
<br />
'''Product:''' Testing<br /><br />
'''Component:''' Performance<br /><br />
'''Summary:''' Contains all of the strings [meta] Firefox, Perfherder Regression Tracking Bug<br />
You can leave the rest of the fields as they are.<br />
[[File:Advanced search filter.png|1200px|Advanced search filter]]<br /><br />
<br /><br />
<br />
Result:<br /><br />
[[File:Firefox metabugs.png|1200px|Firefox metabugs]]<br />
<br />
= How do I search for an already open regression? =<br />
Sometimes Treeherder includes alerts related to a test in the same summary, and sometimes it doesn't. To make sure that the regression you found doesn't already have a bug open, search the current Firefox release meta-bug for open regressions with a summary similar to that of your alert. Usually, if the test name matches, it might be what you're looking for. But be careful: a matching test name doesn't guarantee it. You need to check thoroughly.<br /><br />
<br />
These situations arise because a regression first appears on one repository (e.g. autoland), and it takes a few days until the causing commit is merged to the other repositories (inbound, beta, central).<br />
<br /><br />
<br />
= How do I follow up on open regressions reported by me? =<br />
You can follow up on all the open regression bugs created by you by searching in [https://bugzilla.mozilla.org/query.cgi?format=advanced Advanced search] for bugs with:<br />
<br /><br />
'''Summary:''' contains all of the strings > regression on push<br />
<br /><br />
'''Status:''' NEW, ASSIGNED, REOPENED<br /><br />
<br /><br />
[[File:Advanced search for perf regressions.png|1200px|Advanced search for perf regressions]]<br />
<br /><br />
'''Keywords:''' perf, perf-alert, regression<br />
<br /><br />
'''Type:''' defect<br />
<br /><br />
[[File:Advanced search for perf regressions type.png|700px|Advanced search for perf regressions type]]<br />
<br /><br />
'''Search by People:''' ''The reporter is'' > [your email]<br /><br />
[[File:Advanced search for perf regressions by people.png|200px|Advanced search for perf regressions by people]]<br />
<br /><br />
<br /><br />
And you will get the list of all open regressions reported by you:<br />
<br /><br />
[[File:Advanced search results.png|1200px|Advanced search results]]<br />
<br />
= How can I do a bisection? =<br />
If you're investigating a regression or improvement but it happened in a revision interval where the jobs can't run, or the revision contains multiple commits (this happens more often on mozilla-beta), you need to do a bisection to find the exact culprit. We usually adopt the binary search method. Say you have the revisions:<br />
* abcde1 - first regressed/improved value<br />
* abcde2<br />
* abcde3<br />
* abcde4<br />
* abcde5 - last good value<br />
<br />
Bisection steps:<br />
# Check out the repository you're investigating:<br />
## <code>hg checkout autoland</code> (if you don't have it locally, run <code>hg pull autoland && hg update autoland</code>)<br />
# <code>hg checkout abcde5</code><br />
## <code>./mach try fuzzy --full -q=^investigated-test-signature -m=baseline_abcde5_alert_######</code> (the <code>baseline</code> keyword tells you this push contains the reference value)<br />
# <code>hg checkout abcde3</code><br />
## Let's assume that build abcde4 broke the tests; you need to back it out in order to get values for your investigated test on try:<br />
### <code>hg backout -r abcde4</code><br />
## <code>./mach try fuzzy --full -q=^investigated-test-signature -m=abcde4_alert_######</code> (the <code>baseline</code> keyword is included only in the reference push message)<br />
## Use the [https://treeherder.mozilla.org/perf.html#/comparechooser comparechooser] to compare the two pushes.<br />
# If the try values between abcde5 and abcde3 don't show the delta, then abcde2 or abcde1 must be the suspect, so repeat the step you did for abcde3 to narrow it down.</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools&diff=1247185Performance/Tools2023-07-24T16:22:55Z<p>Davehunt: </p>
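The binary-search strategy described above can be sketched as a toy function. This is only a model: each <code>is_regressed</code> call stands in for the manual "push to try and compare against the baseline" step, and the revision names are the placeholders from the example.

```python
# Toy sketch of the binary-search bisection described above. In practice
# each is_regressed() call is a manual step: push to try and compare the
# result against the baseline push.

def bisect(revisions, is_regressed):
    """Return the first regressed revision.

    `revisions` is ordered oldest (known good) to newest (known bad).
    """
    lo, hi = 0, len(revisions) - 1  # lo: last good, hi: first regressed
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_regressed(revisions[mid]):
            hi = mid  # regression is at mid or earlier
        else:
            lo = mid  # mid is still good; regression is later
    return revisions[hi]

# With abcde5 the last good value and abcde1 the first regressed value,
# two try pushes are enough to pin down the culprit (abcde2 here):
culprit = bisect(
    ["abcde5", "abcde4", "abcde3", "abcde2", "abcde1"],
    lambda rev: rev in {"abcde2", "abcde1"},  # mocked try results
)
```

The point of halving the interval each time is that an interval of n revisions needs only about log2(n) try pushes.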
<hr />
<div>{{DISPLAYTITLE:Performance Tools 🔥🦊⏱️🛠️}}[[File:Fxperftest.png|thumb|right]]<br />
<br />
= Who we are =<br />
* Carla Severe [:carla] 🇺🇸<br />
* Andrej Glavic [:aglavic] 🇨🇦<br />
* Greg Mierzwinski [:sparky] 🇨🇦<br />
* Kash Shampur [:kshampur] 🇨🇦<br />
* Adam Brouwers-Harries [:aabh] 🇬🇧<br />
* Dave Hunt [:davehunt] 🇬🇧<br />
* Julien Wajsberg [:julienw] 🇫🇷<br />
* Nazım Can Altınova [:canova] 🇩🇪<br />
* Alex Finder [:afinder] 🇷🇴<br />
* Alex Ionescu [:alexandrui] 🇷🇴<br />
* Andra Esanu [:andra.esanu] 🇷🇴<br />
* Beatrice Acasandrei [:beatrice-acasandrei] 🇷🇴<br />
<br />
= Where to find us =<br />
* [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest]<br />
* [https://chat.mozilla.org/#/room/#profiler:mozilla.org #profiler]<br />
<br />
= Team purpose =<br />
Empowering engineers with tools to continuously improve the performance of Mozilla products.<br />
<br />
= Meetings =<br />
{{/Meetings}}<br />
<br />
= Onboarding =<br />
Welcome to the team! You are encouraged to improve the [[/Onboarding|onboarding page]]. If you need to ask questions that are not already covered, please update the page so that the next person has a better onboarding experience.<br />
<br />
= Workflows =<br />
* [[/Testing/Triage|Testing Triage]]<br />
* [[/Testing/Reviews|Testing Reviews]]<br />
<br />
= Resources =<br />
* [[../Glossary]]<br />
* [https://docs.google.com/document/d/1SswqYIAm4h8vlwfMc0pfGEJwXpFECyubGDezRZHHPFE/edit Strategies for investigating intermittents]<br />
* [https://docs.google.com/document/d/1HV2_z8hwhI2w8EbURtkYjpikVG5g9QeKEPo9h5msuRs/edit Following up perf bugs]<br />
* [https://docs.google.com/document/d/103SRVVcE2SZNYP3kFXGeiVQrusH2Wj2yv8SWaDvB9SM/edit Excessive Android device queue response plan]<br />
* [[TestEngineering/Performance/FAQ#How_can_I_do_a_bisection.3F|Bisection Workflow]]<br />
* [[TestEngineering/Performance/Sheriffing/CompareView|CompareView]]</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Review_Process&diff=1247184TestEngineering/Performance/Review Process2023-07-24T16:15:27Z<p>Davehunt: Redirected page to Performance/Tools/Testing/Reviews</p>
<hr />
<div>#REDIRECT [[Performance/Tools/Testing/Reviews]]<br />
<br />
This page provides a set of checks that we should be doing when we are reviewing a patch, as well as providing an outline of how to tackle reviews.<br />
<br />
'''Standard Checks and Process:'''<br />
# Start by skimming through the patch to get a full picture of it.<br />
# Check the try run. Make sure that the tests being run make sense for this change.<br />
# Check that the try run performance numbers are similar to what we currently have. Otherwise, discuss or r- to determine whether those changes were expected.<br />
# If the patch is a migration, make sure there is a documented plan (preferably with a dependent bug) for how this will proceed. Do not land unfinished migration patches; the new tests being added should be fully functional and fully tested.<br />
# If the patch changes task configurations, ensure that the full taskgraph has the desired results. We use [https://github.com/gmierz/moz-current-tests#generating-a-test-report this tool] along with [https://www.diffchecker.com diffchecker.com] to look at how a patch changes the tasks that will be run.<br />
# Now go through the patch and scrutinize the code. Don't be scared to make suggestions for changes; the author might not have considered what you're suggesting or what concerns you.<br />
# If the patch is large or complex, don't try to review it all in one sitting. It's very likely that you'll miss something. Break the review out across multiple iterations; this has multiple benefits:<br />
## You won't need to dig very deep on the first one. Outline all the small, or easy fixes to make here.<br />
## On the next pass, check to make sure your requests were satisfied. Then, dig deeper into the code changes. This has the benefit of allowing you to step back from the code for a bit and then look at it again. You'll likely find things you didn't consider in the first pass.<br />
# Before providing the final review, double-check the reviewbot try run to make sure it didn't miss submitting anything on the patch. If there are failures that are related to the patch, then mention them in the review and request changes to get them fixed.</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Review_Process&diff=1247183TestEngineering/Performance/Review Process2023-07-24T16:15:16Z<p>Davehunt: Redirected page to Performance/Tools/Reviews</p>
<hr />
<div>#REDIRECT [[Performance/Tools/Reviews]]<br />
<br />
This page provides a set of checks that we should be doing when we are reviewing a patch, as well as providing an outline of how to tackle reviews.<br />
<br />
'''Standard Checks and Process:'''<br />
# Start by skimming through the patch to get a full picture of it.<br />
# Check the try run. Make sure that the tests being run make sense for this change.<br />
# Check that the try run performance numbers are similar to what we currently have. Otherwise, discuss or r- to determine whether those changes were expected.<br />
# If the patch is a migration, make sure there is a documented plan (preferably with a dependent bug) for how this will proceed. Do not land unfinished migration patches; the new tests being added should be fully functional and fully tested.<br />
# If the patch changes task configurations, ensure that the full taskgraph has the desired results. We use [https://github.com/gmierz/moz-current-tests#generating-a-test-report this tool] along with [https://www.diffchecker.com diffchecker.com] to look at how a patch changes the tasks that will be run.<br />
# Now go through the patch and scrutinize the code. Don't be scared to make suggestions for changes; the author might not have considered what you're suggesting or what concerns you.<br />
# If the patch is large or complex, don't try to review it all in one sitting. It's very likely that you'll miss something. Break the review out across multiple iterations; this has multiple benefits:<br />
## You won't need to dig very deep on the first one. Outline all the small, or easy fixes to make here.<br />
## On the next pass, check to make sure your requests were satisfied. Then, dig deeper into the code changes. This has the benefit of allowing you to step back from the code for a bit and then look at it again. You'll likely find things you didn't consider in the first pass.<br />
# Before providing the final review, double-check the reviewbot try run to make sure it didn't miss submitting anything on the patch. If there are failures that are related to the patch, then mention them in the review and request changes to get them fixed.</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools/Testing/Triage&diff=1247182Performance/Tools/Testing/Triage2023-07-24T16:14:44Z<p>Davehunt: </p>
<hr />
<div>{{DISPLAYTITLE:Performance Test Triage}}<br />
<br />
= Triage Workflow =<br />
<br />
== Triage Duty ==<br />
Your main goal during triage duty is to make sure bugs are labelled appropriately and quickly based on recent bug activity. This might mean checking bug activity once a day, perhaps doing some minimal investigation, and then updating the bug's priority, severity, product, status, need-info, etc.<br />
<br />
See [[#Queries|Useful Queries]].<br />
<br />
* Triage incoming bugs as early as possible or at least once a day. <br />
* Intermittent failures:<br />
** Only investigate an intermittent failure if it has happened more than once. <br />
** Glance over the failure details, and if incomplete information has been added as the first comment, add the relevant part of the log as a new comment.<br />
** If it’s a duplicate bug, mark it as such; if it’s not related to the component, move it immediately to the correct one.<br />
** Intermittent failures should have a priority of P5 by default, unless they need investigation and a fix immediately. Then set a priority of P2 and find an owner.<br />
** On Monday, the triage owner or the person on triage duty goes through all the bugs that were updated by the intermittent failures bot. If there is a top-occurring failure, make sure to assign the bug to someone familiar with the affected code. Failures that happen less often (fewer than about 15 times in the last week) can simply be ignored.<br />
* Untriaged bugs:<br />
** Bugs without a priority set should move to P3 by default, which means it will be fixed at some point. Only set P2 if the bug blocks current OKRs.<br />
* Mentored bugs:<br />
** It's generally up to the bug mentor to keep these bugs in good shape. Feel free to need-info the mentor if you have any doubts.<br />
** Set needinfo on the most recent contributor if they haven't replied for more than a week. <br />
** Never set a contributor as assignee. This will be done automatically by Phabricator when the initial patch gets submitted. Reset the assignee and set the bug to new if no further response comes in within a week. <br />
** Leave the priority as is and don't change it to P1 if such a bug gets assigned.<br />
<br />
* If it is not clear how to proceed on the bug, or if further input is necessary from stakeholders, add the whiteboard entry '''[perftest:triage]'''. Those bugs will be discussed in the next [https://docs.google.com/document/d/1SeMijarFsdtm-mrxkIQzV4y1PHcJN72JDPWOlxJ7u-A/edit#heading=h.v37yirv4o0rn triage meeting].<br />
<br />
== Review Queue ==<br />
To ensure that we're responding to review requests in a timely manner, the [https://phabricator.services.mozilla.com/tag/perftest-reviewers/ #perftest-reviewers] group is triaged once/day. This involves tracking the number of open review requests and assigning a team member to be responsible for the review. See [[../Reviews/]] for more information about how reviews should be performed.<br />
<br />
* Open the [https://phabricator.services.mozilla.com/dashboard/view/84/ FxPerfTest dashboard] in Phabricator.<br />
* For any review that '''only''' has the '''#perftest-reviewers''' as the reviewer, assign a team member as a blocking reviewer.<br />
** The reviewer should be the next team member in rotation to balance the load across the team; however, this may not be desirable if many large reviews are building up on an individual. Use the team member tabs to understand the review queue for individuals, and use your best judgement.<br />
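The "next team member in rotation" rule can be approximated with a simple round-robin. This is a toy sketch under the assumption that the rotation state is just an iterator kept by whoever runs triage; the names are illustrative, and real triage also weighs each person's current review load.

```python
# Toy round-robin for picking the next blocking reviewer. The team list
# is illustrative; in practice the choice is also adjusted by hand when
# large reviews pile up on one person.
from itertools import cycle

rotation = cycle(["reviewer-a", "reviewer-b", "reviewer-c"])

def next_reviewer():
    """Return the next reviewer in the rotation."""
    return next(rotation)
```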
<br />
== Queries ==<br />
<br />
* [https://mozilla.github.io/triage-center/?component=Testing%3AAWSY&component=Testing%3APerformance&component=Testing%3ARaptor&component=Testing%3ATalos Triage Center] highlights where attention is needed.<br />
* [https://bugzilla.mozilla.org/buglist.cgi?priority=--&keywords=meta%2C%20&query_format=advanced&product=Testing&resolution=---&list_id=15153821&keywords_type=nowords&component=AWSY&component=Performance&component=Raptor&component=Talos Untriaged bugs]<br />
* [https://bugzilla.mozilla.org/buglist.cgi?keywords_type=allwords&resolution=---&component=AWSY&component=Performance&component=Raptor&component=Talos&product=Testing&keywords=intermittent-failure&query_format=advanced&list_id=15222319 Intermittent failures]<br />
* [https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&o1=isnotempty&component=AWSY&component=Performance&component=Raptor&component=Talos&f1=bug_mentor&resolution=---&product=Testing&list_id=15153825 Mentored bugs]<br />
<br />
== Triage Duty versus Triage Owner ==<br />
<br />
* Every bug component has a Triage Owner. This is an ongoing, long-term role.<br />
* Anyone on the team may be assigned to Triage Duty. This is a short-term role that involves monitoring incoming bugs on a daily basis.<br />
* The triage team decides who is on triage duty until the next triage meeting, which means triage duty usually rotates on a weekly basis. <br />
<br />
== Bugs being worked on ==<br />
<br />
We want to make it clear when a bug is actively being worked on, and make it easy for people to pick up any available work. <br />
<br />
* When you start working on a bug (you start implementing a fix), set its priority to P1 and assign yourself.<br />
* If you stop working on a bug or when it is blocked by another one, reset the priority to its original value, and unassign yourself.<br />
* If you want to indicate that you plan to work on a bug soon set a need-info to yourself on the bug with a short comment about your plans (Examples: "I will work on this after Bug xyz is done" or "I will start the implementation next week")<br />
<br />
== Priorities ==<br />
<br />
* P1 - This bug represents an OKR or an important intermittent failure, and has an assignee working on an implementation<br />
* P2 - This bug represents an OKR or an important intermittent failure, but no-one is working on it at the moment<br />
* P3 - This bug will be fixed eventually (non-OKR, mentoring)<br />
* P4 - Not used (reserved for bots)<br />
* P5 - Used for intermittent failures, or no intention to fix but will accept patches<br />
<br />
= Strategies for triaging intermittents =<br />
<br />
* Look for patterns in Treeherder's [https://treeherder.mozilla.org/intermittent-failures.html intermittent failures view] (platforms, build types, tree, etc.). This is also linked to from the Orange Factor field on each bug.<br />
** Take the intermittent log associations with a grain of salt, as code sheriffs may occasionally misattribute some failure logs.<br />
** E.g. if 90% of failures happen on Android, and the rest on some desktop platforms, there’s a chance that desktop failures were incorrectly assigned.<br />
* Recognise and mark duplicates as early as possible ([https://bugzilla.mozilla.org/show_bug.cgi?id=1552812 example])<br />
* Use generic intermittent bugs when available ([https://bugzilla.mozilla.org/show_bug.cgi?id=1609295 example])<br />
** Simply ask a Code sheriff to group them (a needinfo? + some guidelines should suffice)<br />
** Use this when you have lots of bugs covering the exact same underlying issue<br />
*** Pick the oldest bug<br />
*** Replace parts of the bug summary with <random><br />
*** Use this only for common patterns you notice<br />
*** There are some risks involved here, especially if we’re not entirely sure about the underlying problem. Any mistake could hide other Raptor regressions.<br />
*** Making this too generic can increase the failure rate for what seems to be a common culprit. Code sheriffs will then have more reasons to turn our tests off.<br />
<br />
== How to handle intermittent bugs which cover a crash of Firefox? ==<br />
<br />
First make sure the bug doesn't stay in the Testing component but gets moved to a product that covers Firefox, as crashes relate to problems in Firefox itself and not in the test harness. Check the crashing thread and find the reported crash frame as listed in the summary of the bug. From there, go on to find the first frame that is part of our code. Also check for the following:<br />
<br />
* For a header file (.h) you can most likely continue to the next frame<br />
* If it is inter-process communication (IPC) related, remember that various components make use of it, so check higher in the stack to see which code calls into the IPC code.<br />
* For allocation issues (like OOM) also find the appropriate caller<br />
<br />
If not done yet, also add a comment with the link to the exact crash location. Make sure to keep the changeset id in the URL.<br />
<br />
* Figure out the right component<br />
** Don’t rush and assume that the first frame of the crashing thread is the culprit, especially if its corresponding source code points to a header (*.h) file.<br />
** If the first frame indeed isn’t the culprit, just go to the next frame in the logs.<br />
** Note: most often this is not a trivial task, so even if you end up at another source file, it’s still very likely that the problem happens a bit further up the stack. If you get blocked, request an engineer’s assistance and learn from their process.<br />
* How do you know which engineer to ask for assistance? <br />
** By looking over the source file you got stuck at and figuring out its component (use Mercurial’s blame feature). The component identifies the team that likely has more knowledge of the problem. Contact the team and ask someone there to assist you.<br />
** You can also search for the associated file name in [https://searchfox.org/ searchfox], and find the corresponding Bugzilla component within the nearest <code>moz.build</code> file.<br />
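The frame-walking heuristic above (skip header-file frames and keep descending until you reach a source file that plausibly owns the problem) can be sketched as a toy filter. The frame data below is made up purely for illustration; real stacks come from the crash report.

```python
# Toy sketch of the heuristic above: when scanning a crash stack, skip
# frames whose source is a header (.h) file and take the first remaining
# frame as the starting point for finding the right Bugzilla component.
FRAMES = [  # (function, source file) -- illustrative values only
    ("mozilla::detail::MutexImpl::lock", "mozglue/misc/PlatformMutex.h"),
    ("mozilla::dom::quota::QuotaManager::Init", "dom/quota/ActorsParent.cpp"),
]

def first_candidate_frame(frames):
    """Return the first frame not defined in a header file, else None."""
    for function, source in frames:
        if not source.endswith(".h"):
            return function, source
    return None  # no obvious candidate; ask an engineer for help
```

As the section notes, this is only a starting point; the real culprit is often further up the stack.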
<br />
= FAQ =<br />
<br />
== Can I have many bugs assigned to me? ==<br />
<br />
Yes, it's possible to be actively working on several bugs at a time. The definition of "actively working" is loose; you can use your intuition.<br />
<br />
== When should I unassign myself from a bug that I have started to work on? ==<br />
<br />
As mentioned above, the definition of "actively working" on a bug is loose, so you can use your intuition. If you notice that you won't be making any progress on the bug this week, that probably means that your attention is focused on other work or something else is blocking progress.</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools&diff=1247181Performance/Tools2023-07-24T16:14:10Z<p>Davehunt: </p>
<hr />
<div>{{DISPLAYTITLE:Performance Tools 🔥🦊⏱️🛠️}}[[File:Fxperftest.png|thumb|right]]<br />
<br />
= Who we are =<br />
* Carla Severe [:carla] 🇺🇸<br />
* Andrej Glavic [:aglavic] 🇨🇦<br />
* Greg Mierzwinski [:sparky] 🇨🇦<br />
* Kash Shampur [:kshampur] 🇨🇦<br />
* Adam Brouwers-Harries [:aabh] 🇬🇧<br />
* Dave Hunt [:davehunt] 🇬🇧<br />
* Julien Wajsberg [:julienw] 🇫🇷<br />
* Nazım Can Altınova [:canova] 🇩🇪<br />
* Alex Finder [:afinder] 🇷🇴<br />
* Alex Ionescu [:alexandrui] 🇷🇴<br />
* Andra Esanu [:andra.esanu] 🇷🇴<br />
* Beatrice Acasandrei [:beatrice-acasandrei] 🇷🇴<br />
<br />
= Where to find us =<br />
* [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest]<br />
* [https://chat.mozilla.org/#/room/#profiler:mozilla.org #profiler]<br />
<br />
= Team purpose =<br />
Empowering engineers with tools to continuously improve the performance of Mozilla products.<br />
<br />
= Meetings =<br />
{{/Meetings}}<br />
<br />
= Onboarding =<br />
Welcome to the team! You are encouraged to improve the [[/Onboarding|onboarding page]]. If you need to ask questions that are not already covered, please update the page so that the next person has a better onboarding experience.<br />
<br />
= Workflows =<br />
* [[/Testing/Triage|Testing Triage]]<br />
* [[/Testing/Reviews|Testing Reviews]]</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools/Testing/Reviews&diff=1247180Performance/Tools/Testing/Reviews2023-07-24T16:13:30Z<p>Davehunt: </p>
<hr />
<div>{{DISPLAYTITLE:Performance Testing Reviews}}<br />
<br />
This page provides a set of checks that we should be doing when we are reviewing a patch, as well as providing an outline of how to tackle reviews.<br />
<br />
'''Standard Checks and Process:'''<br />
# Start by skimming through the patch to get a full picture of it.<br />
# Check the try run. Make sure that the tests being run make sense for this change.<br />
# Check that the try run performance numbers are similar to what we currently have. Otherwise, discuss or r- to determine whether those changes were expected.<br />
# If the patch is a migration, make sure there is a documented plan (preferably with a dependent bug) for how this will proceed. Do not land unfinished migration patches; the new tests being added should be fully functional and fully tested.<br />
# If the patch changes task configurations, ensure that the full taskgraph has the desired results. We use [https://github.com/gmierz/moz-current-tests#generating-a-test-report this tool] along with [https://www.diffchecker.com diffchecker.com] to look at how a patch changes the tasks that will be run.<br />
# Now go through the patch and scrutinize the code. Don't be scared to make suggestions for changes; the author might not have considered what you're suggesting or what concerns you.<br />
# If the patch is large or complex, don't try to review it all in one sitting. It's very likely that you'll miss something. Break the review out across multiple iterations; this has multiple benefits:<br />
## You won't need to dig very deep on the first one. Outline all the small, or easy fixes to make here.<br />
## On the next pass, check to make sure your requests were satisfied. Then, dig deeper into the code changes. This has the benefit of allowing you to step back from the code for a bit and then look at it again. You'll likely find things you didn't consider in the first pass.<br />
# Before providing the final review, double-check the reviewbot try run to make sure it didn't miss submitting anything on the patch. If there are failures that are related to the patch, then mention them in the review and request changes to get them fixed.</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools/Testing/Reviews&diff=1247179Performance/Tools/Testing/Reviews2023-07-24T16:12:55Z<p>Davehunt: Created page with "This page provides a set of checks that we should be doing when we are reviewing a patch, as well as providing an outline of how to tackle reviews. '''Standard Checks and Pro..."</p>
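For the taskgraph check in step 5, a lightweight way to see what a patch changes is to dump the target task labels before and after the patch and diff the two lists. Below is a sketch using Python's <code>difflib</code> with made-up labels; in practice, <code>before</code> and <code>after</code> would be <code>./mach taskgraph target</code> output captured on the base and patched revisions.

```python
# Sketch: diffing the set of scheduled tasks before and after a patch.
# The labels are made up; real input would be `./mach taskgraph target`
# output captured on the base revision and on the patched revision.
import difflib

before = [
    "test-linux1804-64-shippable/opt-raptor-tp6-firefox",
    "test-windows10-64-shippable/opt-talos-g1",
]
after = [
    "test-linux1804-64-shippable/opt-raptor-tp6-firefox",
    "test-linux1804-64-shippable/opt-raptor-tp6-live-firefox",
]

# Keep only added/removed task labels, dropping diff headers and context.
changes = [
    line
    for line in difflib.unified_diff(before, after, lineterm="")
    if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
]
```

Any unexpected additions or removals in <code>changes</code> are worth raising in the review.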
<hr />
<div>This page provides a set of checks that we should be doing when we are reviewing a patch, as well as providing an outline of how to tackle reviews.<br />
<br />
'''Standard Checks and Process:'''<br />
# Start by skimming through the patch to get a full picture of it.<br />
# Check the try run. Make sure that the tests being run make sense for this change.<br />
# Check that the try run performance numbers are similar to what we currently have. Otherwise, discuss or r- to determine whether those changes were expected.<br />
# If the patch is a migration, make sure there is a documented plan (preferably with a dependent bug) for how this will proceed. Do not land unfinished migration patches; the new tests being added should be fully functional and fully tested.<br />
# If the patch changes task configurations, ensure that the full taskgraph has the desired results. We use [https://github.com/gmierz/moz-current-tests#generating-a-test-report this tool] along with [https://www.diffchecker.com diffchecker.com] to look at how a patch changes the tasks that will be run.<br />
# Now go through the patch and scrutinize the code. Don't be scared to make suggestions for changes; the author might not have considered what you're suggesting or what concerns you.<br />
# If the patch is large or complex, don't try to review it all in one sitting. It's very likely that you'll miss something. Break the review out across multiple iterations; this has multiple benefits:<br />
## You won't need to dig very deep on the first one. Outline all the small, or easy fixes to make here.<br />
## On the next pass, check to make sure your requests were satisfied. Then, dig deeper into the code changes. This has the benefit of allowing you to step back from the code for a bit and then look at it again. You'll likely find things you didn't consider in the first pass.<br />
# Before providing the final review, double-check the reviewbot try run to make sure it didn't miss submitting anything on the patch. If there are failures that are related to the patch, then mention them in the review and request changes to get them fixed.</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools&diff=1247178Performance/Tools2023-07-24T16:12:07Z<p>Davehunt: </p>
<hr />
<div>{{DISPLAYTITLE:Performance Tools 🔥🦊⏱️🛠️}}[[File:Fxperftest.png|thumb|right]]<br />
<br />
= Who we are =<br />
* Carla Severe [:carla] 🇺🇸<br />
* Andrej Glavic [:aglavic] 🇨🇦<br />
* Greg Mierzwinski [:sparky] 🇨🇦<br />
* Kash Shampur [:kshampur] 🇨🇦<br />
* Adam Brouwers-Harries [:aabh] 🇬🇧<br />
* Dave Hunt [:davehunt] 🇬🇧<br />
* Julien Wajsberg [:julienw] 🇫🇷<br />
* Nazım Can Altınova [:canova] 🇩🇪<br />
* Alex Finder [:afinder] 🇷🇴<br />
* Alex Ionescu [:alexandrui] 🇷🇴<br />
* Andra Esanu [:andra.esanu] 🇷🇴<br />
* Beatrice Acasandrei [:beatrice-acasandrei] 🇷🇴<br />
<br />
= Where to find us =<br />
* [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest]<br />
* [https://chat.mozilla.org/#/room/#profiler:mozilla.org #profiler]<br />
<br />
= Team purpose =<br />
Empowering engineers with tools to continuously improve the performance of Mozilla products.<br />
<br />
= Meetings =<br />
{{/Meetings}}<br />
<br />
= Onboarding =<br />
Welcome to the team! You are encouraged to improve the [[/Onboarding|onboarding page]]. If you need to ask questions that are not already covered, please update the page so that the next person has a better onboarding experience.<br />
<br />
= Workflows =<br />
* [[/Testing/Triage|Test Triage]]</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Triage_Process&diff=1247177TestEngineering/Performance/Triage Process2023-07-24T16:10:51Z<p>Davehunt: Redirected page to Performance/Tools/Testing/Triage</p>
<hr />
<div>#REDIRECT [[Performance/Tools/Testing/Triage]]<br />
<br />
== Triage Workflow ==<br />
<br />
=== Triage Duty ===<br />
Your main goal during triage duty is to make sure bugs are labelled appropriately and quickly based on recent bug activity. This might mean checking bug activity once a day, perhaps doing some minimal investigation, and then updating the bug's priority, severity, product, status, need-info, etc.<br />
<br />
See [[#Queries|Useful Queries]].<br />
<br />
* Triage incoming bugs as early as possible or at least once a day. <br />
* Intermittent failures:<br />
** Only investigate an intermittent failure if it has happened more than once. <br />
** Glance over the failure details, and if incomplete information has been added as the first comment, add the relevant part of the log as a new comment.<br />
** If it’s a duplicate bug, mark it as such; if it’s not related to the component, move it immediately to the correct one.<br />
** Intermittent failures should have a priority of P5 by default, unless they need investigation and a fix immediately. Then set a priority of P2 and find an owner.<br />
** On Monday, the triage owner or the person on triage duty goes through all the bugs that were updated by the intermittent failures bot. If there is a top-occurring failure, make sure to assign the bug to someone familiar with the affected code. Failures that happen less often (fewer than about 15 times in the last week) can simply be ignored.<br />
* Untriaged bugs:<br />
** Bugs without a priority set should move to P3 by default, which means it will be fixed at some point. Only set P2 if the bug blocks current OKRs.<br />
* Mentored bugs:<br />
** It's generally up to the bug mentor to keep these bugs in good shape. Feel free to need-info the mentor if you have any doubts.<br />
** Set needinfo on the most recent contributor if they haven't replied for more than a week. <br />
** Never set a contributor as assignee. This will be done automatically by Phabricator when the initial patch gets submitted. Reset the assignee and set the bug to new if no further response comes in within a week. <br />
** Leave the priority as is and don't change it to P1 if such a bug gets assigned.<br />
<br />
* If it is not clear how to proceed on the bug, or if further input is necessary from stakeholders, add the whiteboard entry '''[perftest:triage]'''. Those bugs will be discussed in the next [https://docs.google.com/document/d/1SeMijarFsdtm-mrxkIQzV4y1PHcJN72JDPWOlxJ7u-A/edit#heading=h.v37yirv4o0rn triage meeting].<br />
<br />
=== Review Queue ===<br />
To ensure that we're responding to review requests in a timely manner, the [https://phabricator.services.mozilla.com/tag/perftest-reviewers/ #perftest-reviewers] group is triaged once/day. This involves tracking the number of open review requests and assigning a team member to be responsible for the review. See [[../Review Process/]] for more information about how reviews should be performed.<br />
<br />
* Open the [https://phabricator.services.mozilla.com/dashboard/view/84/ FxPerfTest dashboard] in Phabricator.<br />
* For any review that '''only''' has the '''#perftest-reviewers''' as the reviewer, assign a team member as a blocking reviewer.<br />
** The reviewer should be the next team member in rotation to balance the load across the team, however this may not be desirable if many large reviews are building up on an individual. Use the team member tabs to understand the review queue for individuals and your best judgement.<br />
<br />
=== Queries ===<br />
<br />
* [https://mozilla.github.io/triage-center/?component=Testing%3AAWSY&component=Testing%3APerformance&component=Testing%3ARaptor&component=Testing%3ATalos Triage Center] highlights where attention is needed.<br />
* [https://bugzilla.mozilla.org/buglist.cgi?priority=--&keywords=meta%2C%20&query_format=advanced&product=Testing&resolution=---&list_id=15153821&keywords_type=nowords&component=AWSY&component=Performance&component=Raptor&component=Talos Untriaged bugs]<br />
* [https://bugzilla.mozilla.org/buglist.cgi?keywords_type=allwords&resolution=---&component=AWSY&component=Performance&component=Raptor&component=Talos&product=Testing&keywords=intermittent-failure&query_format=advanced&list_id=15222319 Intermittent failures]<br />
* [https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&o1=isnotempty&component=AWSY&component=Performance&component=Raptor&component=Talos&f1=bug_mentor&resolution=---&product=Testing&list_id=15153825 Mentored bugs]<br />
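The same untriaged-bugs list can also be fetched programmatically via the Bugzilla REST API. The sketch below is a hedged example: the endpoint and parameter names follow the public bugzilla.mozilla.org REST conventions, and the "meta" keyword exclusion from the buglist query is omitted for simplicity.<br />

```python
# Sketch: build the "untriaged bugs" query against the Bugzilla REST API
# instead of the buglist.cgi UI. Endpoint and parameter names follow the
# public bugzilla.mozilla.org REST conventions; verify before relying on it.
from urllib.parse import urlencode

BMO_REST = "https://bugzilla.mozilla.org/rest/bug"
COMPONENTS = ["AWSY", "Performance", "Raptor", "Talos"]

def untriaged_query_url(components=COMPONENTS):
    """URL listing open Testing bugs that have no priority set yet."""
    params = [
        ("product", "Testing"),
        ("priority", "--"),     # priority not yet set, i.e. untriaged
        ("resolution", "---"),  # still open
        ("include_fields", "id,summary,component"),
    ] + [("component", c) for c in components]
    return BMO_REST + "?" + urlencode(params)

url = untriaged_query_url()
print(url)  # GET this URL to receive JSON with a "bugs" list
```

A daily triage script could, for example, fetch this URL and post the resulting bug list to the team channel.<br />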
<br />
=== Triage Duty versus Triage Owner ===<br />
<br />
* Every bug component has a Triage Owner. This is an ongoing, long-term role.<br />
* Anyone on the team may be assigned to Triage Duty. This is a short-term role that involves monitoring incoming bugs on a daily basis.<br />
* The triage team decides who is on triage duty until the next triage meeting, which means triage duty usually rotates on a weekly basis. <br />
<br />
=== Bugs being worked on ===<br />
<br />
We want to make it clear when a bug is actively being worked on, and make it easy for people to pick up any available work. <br />
<br />
* When you start working on a bug (you start implementing a fix), set its priority to P1 and assign yourself.<br />
* If you stop working on a bug, or it is blocked by another one, reset the priority to its original value and unassign yourself.<br />
* If you want to indicate that you plan to work on a bug soon, set a needinfo to yourself on the bug with a short comment about your plans (examples: "I will work on this after Bug xyz is done" or "I will start the implementation next week").<br />
<br />
=== Priorities ===<br />
<br />
* P1 - This bug represents an OKR or an important intermittent failure, and has an assignee working on an implementation<br />
* P2 - This bug represents an OKR or an important intermittent failure, but no-one is working on it at the moment<br />
* P3 - This bug will be fixed eventually (non-OKR, mentoring)<br />
* P4 - Not used (reserved for bots)<br />
* P5 - Used for intermittent failures, or no intention to fix but will accept patches<br />
<br />
== Strategies for triaging intermittents ==<br />
<br />
<br />
* Look for patterns in Treeherder's [https://treeherder.mozilla.org/intermittent-failures.html intermittent failures view] (platforms, build types, tree, etc.). This view is also linked from the Orange Factor field on each bug.<br />
** Treat the intermittent log associations with a grain of salt, as Code sheriffs may occasionally misattribute some failure logs.<br />
** E.g. if 90% of failures happen on Android, and the rest on some desktop platforms, there’s a chance that desktop failures were incorrectly assigned.<br />
* Recognise and mark duplicates as early as possible ([https://bugzilla.mozilla.org/show_bug.cgi?id=1552812 example])<br />
* Use generic intermittent bugs when available ([https://bugzilla.mozilla.org/show_bug.cgi?id=1609295 example])<br />
** Simply ask a Code sheriff to group them (a needinfo? + some guidelines should suffice)<br />
** Use this when you have lots of bugs covering the exact same underlying issue<br />
*** Pick the oldest bug<br />
*** Replace parts of the bug summary with <random><br />
*** Use this only for common patterns you notice<br />
*** There are some risks involved here, especially if we’re not entirely sure about the underlying problem. Any mistake could hide other Raptor regressions.<br />
*** Making this too generic can increase the failure rate for what seems to be a common culprit. Code sheriffs will then have more reasons to turn our tests off.<br />
<br />
=== How to handle intermittent bugs which cover a crash of Firefox? ===<br />
<br />
First, make sure the bug doesn't stay in the Testing component but gets moved to a product that covers Firefox, as crashes relate to problems in Firefox itself and not in the test harness. To find the right product, check the crashing thread and find the reported crash frame as listed in the summary of the bug. From there, go on and find the first frame that is part of our code. Also check for the following:<br />
<br />
* For a header file (.h) you can most likely continue to the next frame<br />
* If it is inter-process communication (IPC) related, remember that various components make use of IPC, so check higher in the stack to see which code calls into the IPC code.<br />
* For allocation issues (like OOM) also find the appropriate caller<br />
<br />
If it hasn't been done yet, also add a comment with a link to the exact crash location. Make sure to keep the changeset ID in the URL.<br />
<br />
* Figure out the right component<br />
** Don’t rush to assume that the first frame of the crashing thread is the culprit, especially if its corresponding source code points to a header (*.h) file.<br />
** If the first frame isn’t the culprit, just go to the next frame from the logs.<br />
** Note: most often, this is not a trivial task. Even if you end up at another source file, it’s still very likely that the problem happens further up the stack. If you get blocked, request an engineer’s assistance and learn from their process.<br />
* How do you know which engineer to ask for assistance? <br />
** Look over the source file you got stuck at and figure out its component (use Mercurial’s blame feature). The component identifies the team that likely has the most knowledge of the problem; contact that team and ask someone there to assist you.<br />
** You can also search for the associated file name in [https://searchfox.org/ searchfox], and find the corresponding Bugzilla component within the nearest <code>moz.build</code> file.<br />
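For reference, the annotation to look for in the nearest <code>moz.build</code> file is <code>BUG_COMPONENT</code>, which maps source paths to a Bugzilla product and component. The excerpt below is hypothetical; the actual product/component pair comes from the file you find.<br />

```python
# Hypothetical moz.build excerpt: the BUG_COMPONENT annotation maps all
# files under this directory to a (product, component) pair in Bugzilla.
with Files("**"):
    BUG_COMPONENT = ("Testing", "Raptor")
```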
<br />
== FAQ ==<br />
<br />
=== Can I have many bugs assigned to me? ===<br />
<br />
Yes, it's possible to be actively working on several bugs at a time. The definition of "actively working" is loose; you can use your intuition.<br />
<br />
=== When should I unassign myself from a bug that I have started to work on? ===<br />
<br />
As mentioned above, the definition of "actively working" on a bug is loose, so you can use your intuition. If you notice that you won't be making any progress on the bug this week, that probably means that your attention is focused on other work or something else is blocking progress.</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Triage_Process&diff=1247176TestEngineering/Performance/Triage Process2023-07-24T16:10:26Z<p>Davehunt: Redirected page to /Performance/Tools/Testing/Triage</p>
<hr />
<div>#REDIRECT [[/Performance/Tools/Testing/Triage]]<br />
<br />
== Triage Workflow ==<br />
<br />
=== Triage Duty ===<br />
Your main goal during triage duty is to make sure bugs are labelled appropriately and quickly based on recent bug activity. This might mean checking bug activity once a day, perhaps doing some minimal investigation, and then updating the bug's priority, severity, product, status, need-info, etc.<br />
<br />
See [[#Queries|Useful Queries]].<br />
<br />
* Triage incoming bugs as early as possible or at least once a day. <br />
* Intermittent failures:<br />
** Only investigate an intermittent failure if it has happened more than once. <br />
** Glance over the failure details, and if incomplete information has been added as the first comment, add the relevant part of the log as a new comment.<br />
** If it’s a duplicate bug, mark it as such; if it is not related to the component, move it immediately to the correct one.<br />
** Intermittent failures should have a priority of P5 by default, unless they need investigation and a fix immediately. Then set a priority of P2 and find an owner.<br />
** On Monday, the triage owner or person on triage duty goes through all the bugs that got updated by the intermittent failures bot. If there is a top-occurring failure, make sure to assign the bug to someone familiar with the affected code. Failures that happened less often (fewer than 15 times in the last week) can simply be ignored.<br />
* Untriaged bugs:<br />
** Bugs without a priority set should move to P3 by default, which means they will be fixed at some point. Only set P2 if the bug blocks current OKRs.<br />
* Mentored bugs:<br />
** It's generally up to the bug mentor to keep these bugs in good shape. Feel free to need-info the mentor if you have any doubts.<br />
** Set needinfo on the most recent contributor if they haven't replied for more than a week. <br />
** Never set a contributor as assignee. This will be done automatically by Phabricator when the initial patch gets submitted. Reset the assignee and set the bug to new if no further response comes in within a week. <br />
** Leave the priority as is and don't change it to P1 if such a bug gets assigned.<br />
<br />
* If it is not clear how to proceed on the bug, or if further input is necessary from stakeholders, add the whiteboard entry '''[perftest:triage]'''. Those bugs will be discussed in the next [https://docs.google.com/document/d/1SeMijarFsdtm-mrxkIQzV4y1PHcJN72JDPWOlxJ7u-A/edit#heading=h.v37yirv4o0rn triage meeting].<br />
<br />
=== Review Queue ===<br />
To ensure that we're responding to review requests in a timely manner, the [https://phabricator.services.mozilla.com/tag/perftest-reviewers/ #perftest-reviewers] group is triaged once per day. This involves tracking the number of open review requests and assigning a team member to be responsible for each review. See [[../Review Process/]] for more information about how reviews should be performed.<br />
<br />
* Open the [https://phabricator.services.mozilla.com/dashboard/view/84/ FxPerfTest dashboard] in Phabricator.<br />
* For any review that '''only''' has the '''#perftest-reviewers''' as the reviewer, assign a team member as a blocking reviewer.<br />
** The reviewer should be the next team member in rotation to balance the load across the team; however, this may not be desirable if many large reviews are building up on an individual. Use the team member tabs to understand the review queue for each individual, and use your best judgement.<br />
<br />
=== Queries ===<br />
<br />
* [https://mozilla.github.io/triage-center/?component=Testing%3AAWSY&component=Testing%3APerformance&component=Testing%3ARaptor&component=Testing%3ATalos Triage Center] highlights where attention is needed.<br />
* [https://bugzilla.mozilla.org/buglist.cgi?priority=--&keywords=meta%2C%20&query_format=advanced&product=Testing&resolution=---&list_id=15153821&keywords_type=nowords&component=AWSY&component=Performance&component=Raptor&component=Talos Untriaged bugs]<br />
* [https://bugzilla.mozilla.org/buglist.cgi?keywords_type=allwords&resolution=---&component=AWSY&component=Performance&component=Raptor&component=Talos&product=Testing&keywords=intermittent-failure&query_format=advanced&list_id=15222319 Intermittent failures]<br />
* [https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&o1=isnotempty&component=AWSY&component=Performance&component=Raptor&component=Talos&f1=bug_mentor&resolution=---&product=Testing&list_id=15153825 Mentored bugs]<br />
<br />
=== Triage Duty versus Triage Owner ===<br />
<br />
* Every bug component has a Triage Owner. This is an ongoing, long-term role.<br />
* Anyone on the team may be assigned to Triage Duty. This is a short-term role that involves monitoring incoming bugs on a daily basis.<br />
* The triage team decides who is on triage duty until the next triage meeting, which means triage duty usually rotates on a weekly basis. <br />
<br />
=== Bugs being worked on ===<br />
<br />
We want to make it clear when a bug is actively being worked on, and make it easy for people to pick up any available work. <br />
<br />
* When you start working on a bug (you start implementing a fix), set its priority to P1 and assign yourself.<br />
* If you stop working on a bug, or it is blocked by another one, reset the priority to its original value and unassign yourself.<br />
* If you want to indicate that you plan to work on a bug soon, set a needinfo to yourself on the bug with a short comment about your plans (examples: "I will work on this after Bug xyz is done" or "I will start the implementation next week").<br />
<br />
=== Priorities ===<br />
<br />
* P1 - This bug represents an OKR or an important intermittent failure, and has an assignee working on an implementation<br />
* P2 - This bug represents an OKR or an important intermittent failure, but no-one is working on it at the moment<br />
* P3 - This bug will be fixed eventually (non-OKR, mentoring)<br />
* P4 - Not used (reserved for bots)<br />
* P5 - Used for intermittent failures, or no intention to fix but will accept patches<br />
<br />
== Strategies for triaging intermittents ==<br />
<br />
<br />
* Look for patterns in Treeherder's [https://treeherder.mozilla.org/intermittent-failures.html intermittent failures view] (platforms, build types, tree, etc.). This view is also linked from the Orange Factor field on each bug.<br />
** Treat the intermittent log associations with a grain of salt, as Code sheriffs may occasionally misattribute some failure logs.<br />
** E.g. if 90% of failures happen on Android, and the rest on some desktop platforms, there’s a chance that desktop failures were incorrectly assigned.<br />
* Recognise and mark duplicates as early as possible ([https://bugzilla.mozilla.org/show_bug.cgi?id=1552812 example])<br />
* Use generic intermittent bugs when available ([https://bugzilla.mozilla.org/show_bug.cgi?id=1609295 example])<br />
** Simply ask a Code sheriff to group them (a needinfo? + some guidelines should suffice)<br />
** Use this when you have lots of bugs covering the exact same underlying issue<br />
*** Pick the oldest bug<br />
*** Replace parts of the bug summary with <random><br />
*** Use this only for common patterns you notice<br />
*** There are some risks involved here, especially if we’re not entirely sure about the underlying problem. Any mistake could hide other Raptor regressions.<br />
*** Making this too generic can increase the failure rate for what seems to be a common culprit. Code sheriffs will then have more reasons to turn our tests off.<br />
<br />
=== How to handle intermittent bugs which cover a crash of Firefox? ===<br />
<br />
First, make sure the bug doesn't stay in the Testing component but gets moved to a product that covers Firefox, as crashes relate to problems in Firefox itself and not in the test harness. To find the right product, check the crashing thread and find the reported crash frame as listed in the summary of the bug. From there, go on and find the first frame that is part of our code. Also check for the following:<br />
<br />
* For a header file (.h) you can most likely continue to the next frame<br />
* If it is inter-process communication (IPC) related, remember that various components make use of IPC, so check higher in the stack to see which code calls into the IPC code.<br />
* For allocation issues (like OOM) also find the appropriate caller<br />
<br />
If it hasn't been done yet, also add a comment with a link to the exact crash location. Make sure to keep the changeset ID in the URL.<br />
<br />
* Figure out the right component<br />
** Don’t rush to assume that the first frame of the crashing thread is the culprit, especially if its corresponding source code points to a header (*.h) file.<br />
** If the first frame isn’t the culprit, just go to the next frame from the logs.<br />
** Note: most often, this is not a trivial task. Even if you end up at another source file, it’s still very likely that the problem happens further up the stack. If you get blocked, request an engineer’s assistance and learn from their process.<br />
* How do you know which engineer to ask for assistance? <br />
** Look over the source file you got stuck at and figure out its component (use Mercurial’s blame feature). The component identifies the team that likely has the most knowledge of the problem; contact that team and ask someone there to assist you.<br />
** You can also search for the associated file name in [https://searchfox.org/ searchfox], and find the corresponding Bugzilla component within the nearest <code>moz.build</code> file.<br />
<br />
== FAQ ==<br />
<br />
=== Can I have many bugs assigned to me? ===<br />
<br />
Yes, it's possible to be actively working on several bugs at a time. The definition of "actively working" is loose; you can use your intuition.<br />
<br />
=== When should I unassign myself from a bug that I have started to work on? ===<br />
<br />
As mentioned above, the definition of "actively working" on a bug is loose, so you can use your intuition. If you notice that you won't be making any progress on the bug this week, that probably means that your attention is focused on other work or something else is blocking progress.</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools/Testing/Triage&diff=1247175Performance/Tools/Testing/Triage2023-07-24T16:09:12Z<p>Davehunt: Created page with "{{DISPLAYTITLE:Performance Test Triage}} = Triage Workflow = == Triage Duty == Your main goal during triage duty is to make sure bugs are labelled appropriately and quickly..."</p>
<hr />
<div>{{DISPLAYTITLE:Performance Test Triage}}<br />
<br />
= Triage Workflow =<br />
<br />
== Triage Duty ==<br />
Your main goal during triage duty is to make sure bugs are labelled appropriately and quickly based on recent bug activity. This might mean checking bug activity once a day, perhaps doing some minimal investigation, and then updating the bug's priority, severity, product, status, need-info, etc.<br />
<br />
See [[#Queries|Useful Queries]].<br />
<br />
* Triage incoming bugs as early as possible or at least once a day. <br />
* Intermittent failures:<br />
** Only investigate an intermittent failure if it has happened more than once. <br />
** Glance over the failure details, and if incomplete information has been added as the first comment, add the relevant part of the log as a new comment.<br />
** If it’s a duplicate bug, mark it as such; if it is not related to the component, move it immediately to the correct one.<br />
** Intermittent failures should have a priority of P5 by default, unless they need investigation and a fix immediately. Then set a priority of P2 and find an owner.<br />
** On Monday, the triage owner or person on triage duty goes through all the bugs that got updated by the intermittent failures bot. If there is a top-occurring failure, make sure to assign the bug to someone familiar with the affected code. Failures that happened less often (fewer than 15 times in the last week) can simply be ignored.<br />
* Untriaged bugs:<br />
** Bugs without a priority set should move to P3 by default, which means they will be fixed at some point. Only set P2 if the bug blocks current OKRs.<br />
* Mentored bugs:<br />
** It's generally up to the bug mentor to keep these bugs in good shape. Feel free to need-info the mentor if you have any doubts.<br />
** Set needinfo on the most recent contributor if they haven't replied for more than a week. <br />
** Never set a contributor as assignee. This will be done automatically by Phabricator when the initial patch gets submitted. Reset the assignee and set the bug to new if no further response comes in within a week. <br />
** Leave the priority as is and don't change it to P1 if such a bug gets assigned.<br />
<br />
* If it is not clear how to proceed on the bug, or if further input is necessary from stakeholders, add the whiteboard entry '''[perftest:triage]'''. Those bugs will be discussed in the next [https://docs.google.com/document/d/1SeMijarFsdtm-mrxkIQzV4y1PHcJN72JDPWOlxJ7u-A/edit#heading=h.v37yirv4o0rn triage meeting].<br />
<br />
== Review Queue ==<br />
To ensure that we're responding to review requests in a timely manner, the [https://phabricator.services.mozilla.com/tag/perftest-reviewers/ #perftest-reviewers] group is triaged once per day. This involves tracking the number of open review requests and assigning a team member to be responsible for each review. See [[../Review Process/]] for more information about how reviews should be performed.<br />
<br />
* Open the [https://phabricator.services.mozilla.com/dashboard/view/84/ FxPerfTest dashboard] in Phabricator.<br />
* For any review that '''only''' has the '''#perftest-reviewers''' as the reviewer, assign a team member as a blocking reviewer.<br />
** The reviewer should be the next team member in rotation to balance the load across the team; however, this may not be desirable if many large reviews are building up on an individual. Use the team member tabs to understand the review queue for each individual, and use your best judgement.<br />
<br />
== Queries ==<br />
<br />
* [https://mozilla.github.io/triage-center/?component=Testing%3AAWSY&component=Testing%3APerformance&component=Testing%3ARaptor&component=Testing%3ATalos Triage Center] highlights where attention is needed.<br />
* [https://bugzilla.mozilla.org/buglist.cgi?priority=--&keywords=meta%2C%20&query_format=advanced&product=Testing&resolution=---&list_id=15153821&keywords_type=nowords&component=AWSY&component=Performance&component=Raptor&component=Talos Untriaged bugs]<br />
* [https://bugzilla.mozilla.org/buglist.cgi?keywords_type=allwords&resolution=---&component=AWSY&component=Performance&component=Raptor&component=Talos&product=Testing&keywords=intermittent-failure&query_format=advanced&list_id=15222319 Intermittent failures]<br />
* [https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&o1=isnotempty&component=AWSY&component=Performance&component=Raptor&component=Talos&f1=bug_mentor&resolution=---&product=Testing&list_id=15153825 Mentored bugs]<br />
<br />
== Triage Duty versus Triage Owner ==<br />
<br />
* Every bug component has a Triage Owner. This is an ongoing, long-term role.<br />
* Anyone on the team may be assigned to Triage Duty. This is a short-term role that involves monitoring incoming bugs on a daily basis.<br />
* The triage team decides who is on triage duty until the next triage meeting, which means triage duty usually rotates on a weekly basis. <br />
<br />
== Bugs being worked on ==<br />
<br />
We want to make it clear when a bug is actively being worked on, and make it easy for people to pick up any available work. <br />
<br />
* When you start working on a bug (you start implementing a fix), set its priority to P1 and assign yourself.<br />
* If you stop working on a bug, or it is blocked by another one, reset the priority to its original value and unassign yourself.<br />
* If you want to indicate that you plan to work on a bug soon, set a needinfo to yourself on the bug with a short comment about your plans (examples: "I will work on this after Bug xyz is done" or "I will start the implementation next week").<br />
<br />
== Priorities ==<br />
<br />
* P1 - This bug represents an OKR or an important intermittent failure, and has an assignee working on an implementation<br />
* P2 - This bug represents an OKR or an important intermittent failure, but no-one is working on it at the moment<br />
* P3 - This bug will be fixed eventually (non-OKR, mentoring)<br />
* P4 - Not used (reserved for bots)<br />
* P5 - Used for intermittent failures, or no intention to fix but will accept patches<br />
<br />
= Strategies for triaging intermittents =<br />
<br />
* Look for patterns in Treeherder's [https://treeherder.mozilla.org/intermittent-failures.html intermittent failures view] (platforms, build types, tree, etc.). This view is also linked from the Orange Factor field on each bug.<br />
** Treat the intermittent log associations with a grain of salt, as Code sheriffs may occasionally misattribute some failure logs.<br />
** E.g. if 90% of failures happen on Android, and the rest on some desktop platforms, there’s a chance that desktop failures were incorrectly assigned.<br />
* Recognise and mark duplicates as early as possible ([https://bugzilla.mozilla.org/show_bug.cgi?id=1552812 example])<br />
* Use generic intermittent bugs when available ([https://bugzilla.mozilla.org/show_bug.cgi?id=1609295 example])<br />
** Simply ask a Code sheriff to group them (a needinfo? + some guidelines should suffice)<br />
** Use this when you have lots of bugs covering the exact same underlying issue<br />
*** Pick the oldest bug<br />
*** Replace parts of the bug summary with <random><br />
*** Use this only for common patterns you notice<br />
*** There are some risks involved here, especially if we’re not entirely sure about the underlying problem. Any mistake could hide other Raptor regressions.<br />
*** Making this too generic can increase the failure rate for what seems to be a common culprit. Code sheriffs will then have more reasons to turn our tests off.<br />
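To illustrate the <random> substitution, here is a hedged sketch: the summary string and the pattern below are invented for illustration, and real summaries need case-by-case judgement before being generalised.<br />

```python
# Sketch: derive a generic intermittent-bug summary by replacing the
# variable part (here, the test path) with <random>, keeping the failure
# signature. The example summary and pattern are invented for illustration.
import re

summary = ("Intermittent raptor/browsertime/test_amazon.py | "
           "application crashed [@ mozalloc_abort]")

generic = re.sub(r"\S+\.py", "<random>", summary)
print(generic)
# -> Intermittent <random> | application crashed [@ mozalloc_abort]
```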
<br />
== How to handle intermittent bugs which cover a crash of Firefox? ==<br />
<br />
First, make sure the bug doesn't stay in the Testing component but gets moved to a product that covers Firefox, as crashes relate to problems in Firefox itself and not in the test harness. To find the right product, check the crashing thread and find the reported crash frame as listed in the summary of the bug. From there, go on and find the first frame that is part of our code. Also check for the following:<br />
<br />
* For a header file (.h) you can most likely continue to the next frame<br />
* If it is inter-process communication (IPC) related, remember that various components make use of IPC, so check higher in the stack to see which code calls into the IPC code.<br />
* For allocation issues (like OOM) also find the appropriate caller<br />
<br />
If it hasn't been done yet, also add a comment with a link to the exact crash location. Make sure to keep the changeset ID in the URL.<br />
<br />
* Figure out the right component<br />
** Don’t rush to assume that the first frame of the crashing thread is the culprit, especially if its corresponding source code points to a header (*.h) file.<br />
** If the first frame isn’t the culprit, just go to the next frame from the logs.<br />
** Note: most often, this is not a trivial task. Even if you end up at another source file, it’s still very likely that the problem happens further up the stack. If you get blocked, request an engineer’s assistance and learn from their process.<br />
* How do you know which engineer to ask for assistance? <br />
** Look over the source file you got stuck at and figure out its component (use Mercurial’s blame feature). The component identifies the team that likely has the most knowledge of the problem; contact that team and ask someone there to assist you.<br />
** You can also search for the associated file name in [https://searchfox.org/ searchfox], and find the corresponding Bugzilla component within the nearest <code>moz.build</code> file.<br />
<br />
= FAQ =<br />
<br />
== Can I have many bugs assigned to me? ==<br />
<br />
Yes, it's possible to be actively working on several bugs at a time. The definition of "actively working" is loose; you can use your intuition.<br />
<br />
== When should I unassign myself from a bug that I have started to work on? ==<br />
<br />
As mentioned above, the definition of "actively working" on a bug is loose, so you can use your intuition. If you notice that you won't be making any progress on the bug this week, that probably means that your attention is focused on other work or something else is blocking progress.</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools&diff=1247174Performance/Tools2023-07-24T16:01:36Z<p>Davehunt: </p>
<hr />
<div>{{DISPLAYTITLE:Performance Tools 🔥🦊⏱️🛠️}}[[File:Fxperftest.png|thumb|right]]<br />
<br />
= Who we are =<br />
* Carla Severe [:carla] 🇺🇸<br />
* Andrej Glavic [:aglavic] 🇨🇦<br />
* Greg Mierzwinski [:sparky] 🇨🇦<br />
* Kash Shampur [:kshampur] 🇨🇦<br />
* Adam Brouwers-Harries [:aabh] 🇬🇧<br />
* Dave Hunt [:davehunt] 🇬🇧<br />
* Julien Wajsberg [:julienw] 🇫🇷<br />
* Nazım Can Altınova [:canova] 🇩🇪<br />
* Alex Finder [:afinder] 🇷🇴<br />
* Alex Ionescu [:alexandrui] 🇷🇴<br />
* Andra Esanu [:andra.esanu] 🇷🇴<br />
* Beatrice Acasandrei [:beatrice-acasandrei] 🇷🇴<br />
<br />
= Where to find us =<br />
* [https://chat.mozilla.org/#/room/#perftest:mozilla.org #perftest]<br />
* [https://chat.mozilla.org/#/room/#profiler:mozilla.org #profiler]<br />
<br />
= Meetings =<br />
{{/Meetings}}<br />
<br />
= Onboarding =<br />
Welcome to the team! You are encouraged to improve the [[/Onboarding|onboarding page]]. If you need to ask questions that are not already covered, please update the page so that the next person has a better onboarding experience.</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Platforms&diff=1247161TestEngineering/Performance/Platforms2023-07-21T14:21:17Z<p>Davehunt: /* MacBook Pro */</p>
<hr />
<div>This page details the hardware profiles of all machines used for running performance tests in automation. All performance tests run on physical hardware. Every attempt to run these tests in a virtualised environment has resulted in the hypervisor getting in the way and creating too much noise in the tests, no matter how much we try to tweak it.<br />
<br />
= Support =<br />
All hardware listed below is maintained by Mozilla's operations team.<br />
<br />
= Desktop =<br />
<br />
== [https://mana.mozilla.org/wiki/display/ROPS/HPE+Moonshot HPE Moonshot] ==<br />
The HPE Moonshot System supports up to 45 servers in a single chassis. Each server resides on a cartridge.<br />
* '''Platforms''': linux64, windows10-64.<br />
* '''Location''': MDC1 (Sacramento, CA)<br />
<br />
HPE Moonshot 1500 System (45 cartridges per 4.3U)<br />
1500W Hot Plug redundant (1+1) Power Supply<br />
45 m710x ProLiant cartridges<br />
1 [https://ark.intel.com/content/www/us/en/ark/products/93741/intel-xeon-processor-e3-1585l-v5-8m-cache-3-00-ghz.html Intel E3-1585L v5] 3.0GHz CPU (4 cores)<br />
8GB DDR4 2400MHz RAM<br />
1 256GB PCIe M.2 2280 SSD<br />
1 64GB SATA M.2 2242 SSD<br />
1 Intel Iris Pro Graphics P580<br />
<br />
== [https://mana.mozilla.org/wiki/display/ROPS/Apple+Mac+Mini+R8 Apple Mac Mini R8] ==<br />
* '''Platforms''': macosx<br />
* '''Location''': MDC1 (Sacramento, CA)<br />
* '''Note''': All Mac Minis have EDID devices attached that set the resolution<br />
<br />
Model Name: Mac Mini<br />
Model Identifier: [https://everymac.com/ultimate-mac-lookup/?search_keywords=Macmini8,1 Macmini8,1]<br />
Processor Name: 6-Core Intel Core i7 (i7-8700B)<br />
Processor Speed: 3.2 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 6<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 12 MB<br />
Memory: 16 GB<br />
Disk: SSD 251 GB (251,000,193,024 bytes)<br />
<br />
== MacBook Pro ==<br />
{{warning|These will be removed via {{bug|1828660}}}}<br />
* '''Platforms''': macosx1014-64-power<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 2 laptops currently for power testing<br />
<br />
Model Name: MacBook Pro laptop<br />
Model Identifier: MacBookPro15,1<br />
Processor Name: Intel Core i7 (I7-9750H)<br />
Processor Speed: 2.6 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 6<br />
GPU:<br />
Intel UHD Graphics 630<br />
Radeon Pro 555X<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 12 MB<br />
Memory: 16 GB<br />
Disk: SSD 251 GB (251,000,193,024 bytes)<br />
<br />
== Windows 2017 Reference ==<br />
* '''Platforms''': windows10-64-ref-hw-2017<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 2 laptops<br />
<br />
Model Name: Acer Aspire 15<br />
Model Identifier: E5-575-33BM<br />
Processor Name: Intel Core i3-7100U<br />
Processor Speed: 2.4 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 2<br />
GPU: Intel HD Graphics 620<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 3 MB<br />
Memory: 4GB DDR4<br />
Disk: 1TB SATA Hard Drive (5400RPM)<br />
Resolution: 1920 x 1080<br />
<br />
== Windows 2018 Reference ==<br />
* '''Platforms''': TBA<br />
* '''Location''': TBA<br />
* '''Note''': no devices available in CI<br />
<br />
Model Name: Dell Inspiron 15 3000<br />
Model Identifier: inspiron15<br />
Processor Name: Intel Celeron N3060<br />
Processor Speed: 1.6 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 2<br />
GPU: Intel HD Graphics 400<br />
L2 Cache: 2MB<br />
Memory: 4GB DDR4<br />
Disk: 500GB<br />
Resolution: 1920 x 1080<br />
<br />
== Windows ARM64 ==<br />
* '''Platforms''': win64-aarch64<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 2 laptops<br />
<br />
Model Name: Lenovo Yoga C630<br />
Model Identifier: C630<br />
Processor Name: Qualcomm Snapdragon 850<br />
Processor Speed: 2.96 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 8<br />
GPU: Qualcomm Adreno 630<br />
Memory: 8GB<br />
Disk: 128GB SSD<br />
Resolution: 1920 x 1080<br />
<br />
= Mobile =<br />
== Samsung A51 ==<br />
* '''Platforms''': android-hw-a51<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 27 devices total, all used for performance testing (17 @4GB RAM, 4 @6GB RAM, 8 @8GB RAM)<br />
<br />
Model Name: Samsung A51<br />
Model Identifier: SM_A515F<br />
Processor Name: Exynos 9611<br />
Processor Speed: 4x2.3 GHz Cortex-A73 & 4x1.7 GHz Cortex-A53<br />
Number of Processors: 2<br />
Total Number of Cores: 8 (4 each)<br />
GPU: Mali-G72 MP3<br />
Memory: 4 | 6 | 8 GB (see note above)<br />
Disk: 64 | 128 | 256 GB</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Platforms&diff=1247160TestEngineering/Performance/Platforms2023-07-21T14:19:04Z<p>Davehunt: /* Windows ARM64 */</p>
<hr />
<div>This page details the hardware profiles of all machines used for running performance tests in automation. All performance tests run on physical hardware. Every attempt to run these tests in a virtualised environment has resulted in the hypervisor getting in the way and creating too much noise in the tests, no matter how much we try to tweak it.<br />
<br />
= Support =<br />
All hardware listed below is maintained by Mozilla's operations team.<br />
<br />
= Desktop =<br />
<br />
== [https://mana.mozilla.org/wiki/display/ROPS/HPE+Moonshot HPE Moonshot] ==<br />
The HPE Moonshot System supports up to 45 servers in a single chassis. Each server resides on a cartridge.<br />
* '''Platforms''': linux64, windows10-64<br />
* '''Location''': MDC1 (Sacramento, CA)<br />
<br />
HPE Moonshot 1500 System (45 cartridges per 4.3U)<br />
1500W Hot Plug redundant (1+1) Power Supply<br />
45 m710x ProLiant cartridges<br />
1 [https://ark.intel.com/content/www/us/en/ark/products/93741/intel-xeon-processor-e3-1585l-v5-8m-cache-3-00-ghz.html Intel E3-1585L v5] 3.0GHz CPU (4 cores)<br />
8GB DDR4 2400MHz RAM<br />
1 256GB PCIe M.2 2280 SSD<br />
1 64GB SATA M.2 2242 SSD<br />
1 Intel Iris Pro Graphics P580<br />
<br />
== [https://mana.mozilla.org/wiki/display/ROPS/Apple+Mac+Mini+R8 Apple Mac Mini R8] ==<br />
* '''Platforms''': macosx<br />
* '''Location''': MDC1 (Sacramento, CA)<br />
* '''Note''': All Mac Minis have EDID devices attached that set the resolution<br />
<br />
Model Name: Mac Mini<br />
Model Identifier: [https://everymac.com/ultimate-mac-lookup/?search_keywords=Macmini8,1 Macmini8,1]<br />
Processor Name: 6-Core Intel Core i7 (i7-8700B)<br />
Processor Speed: 3.2 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 6<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 12 MB<br />
Memory: 16 GB<br />
Disk: SSD 251 GB (251,000,193,024 bytes)<br />
<br />
== MacBook Pro ==<br />
* '''Platforms''': macosx1014-64-power<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 2 laptops currently for power testing<br />
<br />
Model Name: MacBook Pro<br />
Model Identifier: MacBookPro15,1<br />
Processor Name: Intel Core i7 (i7-9750H)<br />
Processor Speed: 2.6 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 6<br />
GPU:<br />
Intel UHD Graphics 630<br />
Radeon Pro 555X<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 12 MB<br />
Memory: 16 GB<br />
Disk: SSD 251 GB (251,000,193,024 bytes)<br />
<br />
== Windows 2017 Reference ==<br />
* '''Platforms''': windows10-64-ref-hw-2017<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 2 laptops<br />
<br />
Model Name: Acer Aspire 15<br />
Model Identifier: E5-575-33BM<br />
Processor Name: Intel Core i3-7100U<br />
Processor Speed: 2.4 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 2<br />
GPU: Intel HD Graphics 620<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 3 MB<br />
Memory: 4GB DDR4<br />
Disk: 1TB SATA Hard Drive (5400RPM)<br />
Resolution: 1920 x 1080<br />
<br />
== Windows 2018 Reference ==<br />
* '''Platforms''': TBA<br />
* '''Location''': TBA<br />
* '''Note''': no devices available in CI<br />
<br />
Model Name: Dell Inspiron 15 3000<br />
Model Identifier: inspiron15<br />
Processor Name: Intel Celeron N3060<br />
Processor Speed: 1.6 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 2<br />
GPU: Intel HD Graphics 400<br />
L2 Cache: 2MB<br />
Memory: 4GB DDR4<br />
Disk: 500GB<br />
Resolution: 1920 x 1080<br />
<br />
== Windows ARM64 ==<br />
* '''Platforms''': win64-aarch64<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 2 laptops<br />
<br />
Model Name: Lenovo Yoga C630<br />
Model Identifier: C630<br />
Processor Name: Qualcomm Snapdragon 850<br />
Processor Speed: 2.96 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 8<br />
GPU: Qualcomm Adreno 630<br />
Memory: 8GB<br />
Disk: 128GB SSD<br />
Resolution: 1920 x 1080<br />
<br />
= Mobile =<br />
== Samsung A51 ==<br />
* '''Platforms''': android-hw-a51<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 27 devices total, all used for performance testing (17 @4GB RAM, 4 @6GB RAM, 8 @8GB RAM)<br />
<br />
Model Name: Samsung A51<br />
Model Identifier: SM_A515F<br />
Processor Name: Exynos 9611<br />
Processor Speed: 4x2.3 GHz Cortex-A73 & 4x1.7 GHz Cortex-A53<br />
Number of Processors: 2<br />
Total Number of Cores: 8 (4 each)<br />
GPU: Mali-G72 MP3<br />
Memory: 4 | 6 | 8 GB (see note above)<br />
Disk: 64 | 128 | 256 GB</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Platforms&diff=1247159TestEngineering/Performance/Platforms2023-07-21T14:18:36Z<p>Davehunt: /* Windows ARM64 */</p>
<hr />
<div>(Near-duplicate of the revision above; the revisions differ only in minor hardware/platform details.)</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Platforms&diff=1247158TestEngineering/Performance/Platforms2023-07-21T14:13:05Z<p>Davehunt: Undo revision 1247156 by Davehunt (talk)</p>
<hr />
<div>(Near-duplicate of the revision above; the revisions differ only in minor hardware/platform details.)</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Platforms&diff=1247157TestEngineering/Performance/Platforms2023-07-21T14:07:54Z<p>Davehunt: /* Windows 2018 Reference */</p>
<hr />
<div>(Near-duplicate of the revision above; the revisions differ only in minor hardware/platform details.)</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Platforms&diff=1247156TestEngineering/Performance/Platforms2023-07-21T14:05:37Z<p>Davehunt: </p>
<hr />
<div>(Near-duplicate of the revision above; the revisions differ only in minor hardware/platform details.)</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Platforms&diff=1247155TestEngineering/Performance/Platforms2023-07-21T14:04:24Z<p>Davehunt: /* Windows 2017 Reference */</p>
<hr />
<div>(Near-duplicate of the revision above; the revisions differ only in minor hardware/platform details.)</div>Davehunthttps://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Platforms&diff=1247154TestEngineering/Performance/Platforms2023-07-21T13:58:37Z<p>Davehunt: /* HPE Moonshot */</p>
<hr />
<div>This page details the hardware profiles of all machines used for running performance tests in automation. All performance tests run on physical hardware. Every attempt to run these tests in a virtualised environment has resulted in the hypervisor getting in the way and creating too much noise in the tests, no matter how much we try and tweak it.<br />
<br />
= Support =<br />
All hardware listed below is maintained by Mozilla's operations team.<br />
<br />
= Desktop =<br />
<br />
== [https://mana.mozilla.org/wiki/display/ROPS/HPE+Moonshot HPE Moonshot] ==<br />
The HPE Moonshot System supports up to 45 servers in a single chassis. Each server resides on a cartridge.<br />
* '''Platforms''': linux64, windows10-64.<br />
* '''Location''': MDC1 (Sacramento, CA)<br />
<br />
HPE Moonshot 1500 System (45 cartridges per 4.3U)<br />
1500W Hot Plug redundant (1+1) Power Supply<br />
45 m710x ProLiant cartridges<br />
1 [https://ark.intel.com/content/www/us/en/ark/products/93741/intel-xeon-processor-e3-1585l-v5-8m-cache-3-00-ghz.html Intel E3-1585L v5] 3.0GHz CPU (4 cores)<br />
8GB DDR4 2400MHz RAM<br />
1 256GB PCIe M.2 2280 SSD<br />
1 64GB SATA M.2 2242 SSD<br />
1 Intel Iris Pro Graphics P580<br />
<br />
== [https://mana.mozilla.org/wiki/display/ROPS/Apple+Mac+Mini+R8 Apple Mac Mini R8] ==<br />
* '''Platforms''': macosx<br />
* '''Location''': MDC1 (Sacramento, CA)<br />
* '''Note''': All Mac Minis have EDID devices attached that set the resolution<br />
<br />
Model Name: Mac Mini<br />
Model Identifier: [https://everymac.com/ultimate-mac-lookup/?search_keywords=Macmini8,1 Macmini8,1]<br />
Processor Name: 6-Core Intel Core i7 (i7-8700B)<br />
Processor Speed: 3.2 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 6<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 12 MB<br />
Memory: 16 GB<br />
Disk: SSD 251 GB (251,000,193,024 bytes)<br />
<br />
== MacBook Pro ==<br />
* '''Platforms''': macosx1014-64-power<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 2 laptops currently for power testing<br />
<br />
Model Name: MacBook Pro laptop<br />
Model Identifier: MacBookPro15,1<br />
Processor Name: Intel Core i7 (I7-9750H)<br />
Processor Speed: 2.6 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 6<br />
GPU:<br />
Intel UHD Graphics 630<br />
Radeon Pro 555X<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 12 MB<br />
Memory: 16 GB<br />
Disk: SSD 251 GB (251,000,193,024 bytes)<br />
<br />
== Windows 2017 Reference ==<br />
* '''Platforms''': windows10-64-ref-hw-2017<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 16 laptops<br />
<br />
Model Name: Acer Aspire 15<br />
Model Identifier: E5-575-33BM<br />
Processor Name: Intel Core i3-7100U<br />
Processor Speed: 2.4 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 2<br />
GPU: Intel HD Graphics 620<br />
L2 Cache (per Core): 256 KB<br />
L3 Cache: 3 MB<br />
Memory: 4GB DDR4<br />
Disk: 1TB SATA Hard Drive (5400RPM)<br />
Resolution: 1920 x 1080<br />
<br />
== Windows 2018 Reference ==<br />
* '''Platforms''': TODO (not in taskcluster)<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 2 laptops<br />
<br />
Model Name: Dell Inspiron 15 3000<br />
Model Identifier: inspiron15<br />
Processor Name: Intel Celeron N3060<br />
Processor Speed: 1.6 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 2<br />
GPU: Intel HD Graphics 400<br />
L2 Cache: 2MB<br />
Memory: 4GB DDR4<br />
Disk: 500GB<br />
Resolution: 1920 x 1080<br />
<br />
== Windows ARM64 ==<br />
* '''Platforms''': windows10-aarch64<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 35 laptops for perf and unittests (typically 15 online at a time)<br />
<br />
Model Name: Lenovo Yoga C630<br />
Model Identifier: C630<br />
Processor Name: Qualcomm Snapdragon 850<br />
Processor Speed: 2.96 GHz<br />
Number of Processors: 1<br />
Total Number of Cores: 8<br />
GPU: Qualcomm Adreno 630<br />
Memory: 8GB<br />
Disk: 128GB SSD<br />
Resolution: 1920 x 1080<br />
<br />
= Mobile =<br />
== Samsung A51 ==<br />
* '''Platforms''': android-hw-a51<br />
* '''Location''': Bitbar (San Jose, CA)<br />
* '''Note''': 27 devices total (17 @4GB RAM, 4 @6GB RAM, 8 @8GB RAM) (27 for perf)<br />
<br />
Model Name: Samsung A51<br />
Model Identifier: SM_A515F<br />
Processor Name: Exynos 9611<br />
Processor Speed: 4x2.3 GHz Cortex-A73 & 4x1.7 GHz Cortex-A53<br />
Number of Processors: 1<br />
Total Number of Cores: 8 (2 clusters of 4)<br />
GPU: Mali-G72 MP3<br />
Memory: 4 | 6 | 8 GB (see note above)<br />
Disk: 64 | 128 | 256 GB</div>Davehunthttps://wiki.mozilla.org/index.php?title=Performance/Tools&diff=1246910Performance/Tools2023-06-29T15:08:14Z<p>Davehunt: </p>
<hr />
<div>{{DISPLAYTITLE:Performance Tools 🔥🦊⏱🛠️}}[[File:Fxperftest.png|thumb|right]]<br />
<br />
= Who we are =<br />
* Carla Severe [:carla] 🇺🇸<br />
* Andrej Glavic [:aglavic] 🇨🇦<br />
* Greg Mierzwinski [:sparky] 🇨🇦<br />
* Kash Shampur [:kshampur] 🇨🇦<br />
* Adam Brouwers-Harries [:aabh] 🇬🇧<br />
* Dave Hunt [:davehunt] 🇬🇧<br />
* Julien Wajsberg [:julienw] 🇫🇷<br />
* Nazım Can Altınova [:canova] 🇩🇪<br />
* Alex Finder [:afinder] 🇷🇴<br />
* Alex Ionescu [:alexandrui] 🇷🇴<br />
* Andra Esanu [:andra.esanu] 🇷🇴<br />
* Beatrice Acasandrei [:beatrice-acasandrei] 🇷🇴<br />
<br />
= Where to find us =<br />
* [https://chat.mozilla.org/#/room/#perftools:mozilla.org #perftools]<br />
<br />
= Meetings =<br />
{{/Meetings}}<br />
<br />
= Onboarding =<br />
Welcome to the team! You are encouraged to improve the [[/Onboarding|onboarding page]]. If you need to ask questions that are not already covered, please update the page so that the next person has a better onboarding experience.</div>Davehunt