|-
| 1.1
| '''100 AI practitioners endorse Mozilla’s AI transparency best practices.'''
'''Motivation:''' <br />
Our work on misinfo and political ads established us as a champion of AI transparency. In 2021, we will broaden this work by (a) working with builders to create a list of AI transparency best practices and (b) creating a transparency rating rubric for Privacy Not Included.
'''Sample Activities:'''
* Develop a taxonomy + gap analysis of ‘AI + transparency’ best practices in consumer internet tools and platforms (H1).
* Involve builders in the research, iteration and sharing of best practices.
* Publish + recruit co-signatories on best practices framework.
| Eeva Moore
|-
| 1.2
| '''25 citations of Mozilla data/models by policymakers or policy influencers as part of AI transparency work.'''
'''Motivation:''' <br />
Projects like Regrets Reporter and Firefox Rally show citizens will engage in efforts to make platforms more transparent. In 2021, we want to test whether evidence gathered from this type of research is effective in driving enforcement and policy change related to AI transparency. An early indication of success in this area is direct citation of our work by policymakers and/or policy influencers (namely, key policy-focused journalists or agenda-setting policy think tanks).
'''Sample Activities:'''
* Recruit additional YouTube Regrets users w/ movement partners to generate additional data and reporting, particularly in regions where AI transparency is gaining momentum.
* Use YouTube Regrets findings to demonstrate the need for specific AI transparency policies in key jurisdictions (Europe, Latin America, etc.).
* Use the Rally platform to run up to five in-depth research studies by Mozilla and others that demonstrate the value of transparency in guiding decisions re: misinfo + AI.
* Use research findings from OKR 1.3 to drive media coverage in policy publications about the consumer demand for greater transparency in AI-enabled consumer tech.
| Brandi Geurkink
|-
| 1.3
| '''5 pieces of research published that envision what meaningful transparency looks like for consumers.'''
'''Motivation:''' <br />
Our hope is that more AI transparency will give people more agency -- and that this is something people want. However, we don’t know that this is true. In 2021, we want to fund or produce research to better understand consumer expectations around transparency in AI-enabled tech + how they respond to existing (or potential) transparency in practice.
'''Sample Activities:'''
* Publish taxonomy of existing transparency features in consumer tech products; solicit input on additional examples.
* Produce definitive consumer market research on global consumer values/ranking of transparency in AI-enabled consumer tech; test specific features identified in taxonomy with consumers.
* Develop research-validated ratings of AI in new PNI guide; test with PNI readers for additional insight/learning.
* Publish results from study of ‘user control’ features in YouTube with University of Exeter (underway).
* Partner (w/MoCo?) to develop speculative design proposal for transparency in key product feature.
* Feature findings from these and other reports in Internet Health Report, MozFest, D+D, etc.
* Partner with cities to test efficacy of AI registry and transparency tools (pending funding).
* Model + test transparent recommendation engine designs based on what we learned from YouTube Regrets (pending funding).
| Becca Ricks
|}
|-
| 3.1
| '''Increase the total investment in existing AI + bias grantees by 50%.'''
'''Motivation:''' <br />
We are already investing in a number of projects related to AI and bias: the AJL CRASH project; Common Voice; Creative Media Award grantees/projects; and ideas surfaced at MozFest. In 2021, we plan to kickstart our increased focus on this topic by providing additional funding, external amplification and accompaniment support to these projects.
'''Sample Activities:'''
* Work with grantees to identify areas where additional investment and support is needed/wanted in their projects or others.
* Develop accompaniment strategies focused on comms and marketing (paid + owned Mozilla channels); hire Comms Program Officer, invest in global PR/media resources.
* Increase support of Algorithmic Justice League/CRASH project focused on bias bounties.
| Jenn Beard
|-
| 3.2
| '''50,000 people participate (share stories, donate data, etc.) in projects on mitigating bias in AI as a result of Mozilla promotion.'''
'''Motivation:''' <br />
Last year we observed that bias is a topic that gets the public to pay attention to trustworthy AI issues. In 2021, we want to see if we can go further by getting the public to engage in projects that concretely advance a trustworthy AI agenda.
'''Sample Activities:'''
* Use our platforms to drive direct participation in the tools and media created by our Creative Media Awardees (e.g. watch their media projects, directly engage with their computational art projects, etc.).
* Help accelerate AJL CRASH Project by identifying sources of bias from our grassroots supporters (if desired by AJL).
* Recruit additional RegretsReporter participants in under-represented regions/languages to uncover bias of platform policies/procedures.
| Xavier Harding
|-
| 3.3
| '''Pipeline of additional projects Mozilla can support to mitigate bias in AI established.'''
'''Motivation:''' <br />
The previous KRs focus on projects we already know about. We know much more is happening in this space. Over the coming year, we will articulate a clear investment approach, including a pipeline of additional funding, engagement and philanthropic advocacy opportunities related to AI bias, which we can use to drive our work in 2022+.
'''Sample Activities:'''