Mozilla Foundation 2021 OKRs

Mozilla Foundation is fueling a movement for a healthy internet.

This document outlines the Mozilla Foundation’s organizational objectives and key results (OKRs) for 2021.

  • Objectives = “What do we want to do and why?”
  • Key Results = “How will we know if we’re successful?”

These objectives were developed based on our work in 2020 and are linked to our multi-year Trustworthy AI theory of change, with links to specific short-term outcomes.

In 2021, the Foundation will also continue to increase its focus on Diversity, Equity and Inclusion (DEI), both internally and externally. Most OKRs include DEI-related activities. In addition, the Foundation is developing a multi-year DEI strategy that builds on past efforts, including our 2020 Racial Justice Commitments.


Related documents:

Org-wide OKRs

2021 OKRs are informed by the Mozilla Foundation's three-year narrative arcs.

OKR 1: Making AI Transparency the Norm: Test AI transparency "best practices" to increase adoption by builders and policymakers.

Responsible: Ashley Boyd

Key Results:
1.1 100 AI practitioners endorse Mozilla’s AI transparency best practices.

Motivation:
Our work on misinformation and political ads established us as a champion of AI transparency. In 2021, we will broaden this work by (a) working with builders to create a list of AI transparency best practices and (b) creating a transparency rating rubric for Privacy Not Included (PNI).

Sample Activities:

  • Develop a taxonomy + gap analysis of ‘AI + transparency’ best practices in consumer internet tools and platforms (H1).
  • Involve builders in the research, iteration and sharing of best practices.
  • Publish + recruit co-signatories on best practices framework.
KR Lead: Eeva Moore
1.2 25 citations of Mozilla data/models by policymakers or policy influencers as part of AI transparency work.

Motivation:
Projects like Regrets Reporter and Firefox Rally show citizens will engage in efforts to make platforms more transparent. In 2021, we want to test whether evidence gathered from this type of research is effective in driving enforcement and policy change related to AI transparency. An early indication of success in this area is direct citation of our work by policymakers and/or policy influencers (namely, key policy-focused journalists or agenda-setting policy think tanks).

Sample Activities:

  • Recruit additional YouTube Regrets users with movement partners to generate additional data and reporting, particularly in regions where AI transparency is gaining momentum.
  • Use YouTube Regrets findings to demonstrate the need for specific AI transparency policies in key jurisdictions (Europe, Latin America, etc.).
  • Use the Rally platform to run up to five in-depth research studies by Mozilla and others that demonstrate the value of transparency in guiding decisions re: misinfo + AI.
  • Use research findings from KR 1.3 to drive media coverage in policy publications about the consumer demand for greater transparency in AI-enabled consumer tech.
KR Lead: Brandi Geurkink
1.3 5 pieces of research published that envision what meaningful transparency looks like for consumers.

Motivation:
Our hope is that more AI transparency will give people more agency -- and that this is something people want. However, we don't yet know whether this is true. In 2021, we want to fund or produce research to better understand consumer expectations around transparency in AI-enabled tech and how people respond to existing (or potential) transparency in practice.

Sample Activities:

  • Publish a taxonomy of existing transparency features in consumer tech products; solicit input on additional examples.
  • Produce definitive consumer market research on global consumer values/ranking of transparency in AI-enabled consumer tech; test specific features identified in the taxonomy with consumers.
  • Develop research-validated ratings of AI in the new PNI guide; test with PNI readers for additional insight/learning.
  • Publish results from the study of ‘user control’ features in YouTube with the University of Exeter (underway).
  • Partner (possibly with MoCo) to develop a speculative design proposal for transparency in a key product feature.
  • Feature findings from these and other reports in the Internet Health Report, MozFest, Dialogues & Debates (D+D), etc.
  • Partner with cities to test efficacy of AI registry and transparency tools (pending funding).
  • Model + test transparent recommendation engine designs based on what we learned from YouTube Regrets (pending funding).
KR Lead: Becca Ricks

OKR 2: Modeling Good Data Stewardship: Accelerate more equitable data governance alternatives to advance trustworthy AI.

Responsible: J Bob Alotta

Key Results:
2.1 7 projects tested with real users to identify building blocks for viable data stewardship models.

Motivation:
We have a number of projects funded or underway to test our alternative data stewardship models. In 2021, we want to (a) design, implement, test, and advance these projects, and (b) establish a set of ‘success criteria’ for these projects in the process.

Sample Activities:

  • Develop and document success criteria for Data Futures Lab, Common Voice, MoFo CRM.
  • Document Data Futures Lab grantee partners’ successes and failures, and feed this into the development of criteria for replicability.
  • Take over stewardship of the Common Voice project, modeling and documenting our thinking on how citizen-built data commons for AI can work.
  • Use CRM update project to develop new MoFo data governance processes, Pan Mozilla data sharing framework and ways to model citizen-centric approaches to data stewardship.
KR Lead: Mehan Jayasuriya
2.2 5 regulatory jurisdictions utilize our input to enable collective data rights for users.

Motivation:
While many jurisdictions are giving people new data rights, there are few places where people can pursue these rights collectively or are protected from collective harm. In 2021, we want to develop -- and advocate for -- concrete policy proposals related to collective data rights.

Sample Activities:

  • Work with Data Futures Lab grantees to use existing regulatory frameworks collectively on behalf of their constituents (e.g., labour and consumers).
  • Set up a data rights policy working group (team/fellows) to develop a position on -- and advocate for -- collective data rights in regulations in EU, UK, Canada, and India.
  • Also, develop recommendations on collective data rights for inclusion in U.S. platform accountability approaches being considered by the new administration.
KR Lead: Mathias Vermeulen
2.3 6 stakeholder groups established as constituents of the Data Futures Lab.

Motivation:
We now have a ‘proto’ Data Futures Lab in place. In 2021, we will fully launch the Lab, creating a kinetic point of connection across many disciplines and geographies. As an increasing number of researchers, policymakers, activists, designers, developers, legal experts, and funders join, the Lab’s momentum, funding, expertise, and impact will grow.

Sample Activities:

  • Launch Lab, hire staff, establish stakeholder engagement.
  • Make Infrastructure Fund grants to stakeholders with high motivation for alternative data governance models.
  • Source second cohort of Prototype Fund grantee partners.
  • Core funders engage other funders to join the collaborative; non-tech funders invest in the Lab.
  • Convene developers and builders, as well as journalists, researchers, and activists.
KR Lead: Kasia Odrozek

OKR 3: Mitigating AI bias: Accelerate the impact of people working to mitigate bias in AI.

Responsible: Ashley Boyd

Key Results:
3.1 Increase the total investment in existing AI + bias grantees by 50%.

Motivation:
We are already investing in a number of projects related to AI and bias: the AJL CRASH project, Common Voice, Creative Media Award grantees/projects, and ideas surfaced at MozFest. In 2021, we plan to kickstart our increased focus on this topic by providing additional funding, external amplification, and accompaniment support to these projects.

Sample Activities:

  • Work with grantees to identify areas where additional investment and support is needed/wanted in their projects or others.
  • Develop accompaniment strategies focused on comms and marketing (paid and owned Mozilla channels); hire a Comms Program Officer and invest in global PR/media resources.
  • Increase support of Algorithmic Justice League/CRASH project focused on bias bounties.
KR Lead: Jenn Beard
3.2 50,000 people participate (share stories, donate data, etc.) in projects on mitigating bias in AI as a result of Mozilla promotion.

Motivation:
Last year we observed that bias is a topic that gets the public to pay attention to trustworthy AI issues. In 2021, we want to see if we can go further by getting the public to engage in projects that concretely advance a trustworthy AI agenda.

Sample Activities:

  • Recruit additional Common Voice community members in languages/regions currently under-represented.
  • Use our platforms to drive direct participation in the tools and media created by our Creative Media Awardees (e.g. watch their media projects, directly engage with their computational art projects, etc.).
  • Help accelerate AJL CRASH Project by identifying sources of bias from our grassroots supporters (if desired by AJL).
  • Recruit additional RegretsReporter participants in under-represented regions/languages to uncover bias of platform policies/procedures.
KR Lead: Xavier Harding
3.3 A pipeline of additional projects Mozilla can support to mitigate bias in AI is established.

Motivation:
The previous KRs focus on projects we already know about. We know much more is happening in this space. Over the coming year, we will articulate a clear investment approach, including a pipeline of additional funding, engagement, and philanthropic advocacy opportunities related to AI bias, which we can use to drive our work in 2022 and beyond.

Sample Activities:

  • Research promising approaches + gaps in ‘AI + bias’ work as a way to build a pipeline of opportunities for work in late 2021 and 2022.
  • Engage Program Committee + Fellows to identify potential projects for our grantmaking and advocacy pipeline.
KR Lead: Roselyn Odoyo

OKR 4: Growing Across Movements: Strengthen partnership with diverse movements to deepen intersections between their primary issues and ours, including trustworthy AI.

Responsible: J Bob Alotta

Key Results:
4.1 Phase 1 landscape analysis is complete and we have identified partner movements.

Motivation:
Our trustworthy AI theory of change presupposes others playing key roles in pursuit of shared outcomes. In 2021, we will evaluate where we currently stand with potential partners and commit ourselves to working with specific partners from other movements in pursuit of these outcomes.

Sample Activities:

  • Share landscape analysis internally -- including with the board -- for response and reflection (phase 1).
  • Develop commitments and messaging (phase 2) with partner movements in mind. This includes our Reimagine Open work.
  • Fund cohort(s) of grantee partners, host orgs and fellows from chosen partner movements with whom our organizational priorities are most aligned. Initiate fellows as ‘organizers’.
KR Lead: Hanan Elmasu
4.2 The Foundation’s African Mradi workstream centering local expertise is developed.

Motivation:
Our Mradi work allows us to upend extractive approaches long played out in Africa and address mistakes made in past work on the continent. The most important thing we can do to this end is put local leaders and experts at the fore of this work, and facilitate their connection and access to resources over an extended period. In 2021, the MoFo Mradi workstream, including our work on Common Voice, will take this approach.

Sample Activities:

  • Conduct East/Southern African Landscape analysis to assess where best to invest, including viability analysis of establishing local Fund.
  • Establish grantmaking criteria and scope panel of African experts with local relationships.
  • Connect ongoing regional efforts to the workstream, e.g. the Common Voice Kiswahili initiative.
  • Center African voices and concerns in our Dialogues & Debates.
KR Lead: Chenai Chair
4.3 Synchronize internal operations to strengthen our ability to form strategic external partnerships.

Motivation:
Last year, we discovered that our partnerships are most robust when they exist across multiple teams. In 2021, we will seek out ways for teams to work with each other and external movement partners on activities related to our trustworthy AI goals and org OKRs.

Sample Activities:

  • Include strong focus on program work in our Diversity, Equity and Inclusion strategy.
  • Develop interdependent plans across teams re: bias, transparency and data stewardship.
  • Ensure strategic priorities from grants, fellowships, insights, engagement, and advocacy both drive and are driven by the content and methodologies of MozFest; use this to advance “One Mozilla” work.
KR Lead: Lindsey Frost Dodson

OKR 5: Organizational Effectiveness: Enhance our organizational systems and capabilities to support more data-informed decision-making.

Responsible: Angela Plohman

Key Results:
5.1 2022 planning and budget decisions driven by systematic evaluation of our work in 2021.

Motivation:
We have become skilled at using OKRs to organize our work and measure our impact. In 2021, we will take the next step by updating our CRM, financial planning, and evaluation tools to more rigorously use information on what’s working (and what’s not) to drive future plans.

Sample Activities:

  • Develop MoFo-wide data, impact and measurement strategy, including multi-year metrics.
  • Develop metrics and reporting model as part of Diversity, Equity and Inclusion strategy.
  • Ensure financial systems and reporting serve both administrative and team purposes.
  • Undertake job role audit and develop skills analysis and succession planning exercises.
  • Improve grant management tools and data gathering that can be used to analyze impact.
KR Lead: Lainie DeCoursy
5.2 100% of teams have workflows and reports that are supported by our integrated CRM.

Motivation:
Over the last 5+ years, we have relied on MoCo’s CRM infrastructure, which did not meet our needs and left us with data silos across MoFo. In 2021, we will take over our CRM and put in place tools that both enable movement-building and serve as a model of good data stewardship.

Sample Activities:

  • Create and deploy MoFo-wide data, impact and measurement strategy that is enabled by the new data infrastructure.
  • Transition MoFo to Salesforce.org and Acoustic, building out CRM ecosystem infrastructure including data integration/governance platform.
KR Lead: Jackie Lu
5.3 Complete data analysis that reveals best approaches for converting ‘subscribers’ to ‘donors’.

Motivation:
Over recent years, we have both improved our email content and grown the number of email subscribers. In 2021, we want to understand what strategies and tactics will convert email subscribers into first-time donors. Ultimately, this will help us grow donations and our donor base.

Sample Activities:

  • Build, test, and iterate an onboarding series for new subscribers that highlights our case for support and warms up potential donors.
  • Establish a system that allows us to reliably identify first-time gifts from new donors.
  • Run 8 conversion tests implementing best practices for converting subscribers to first-time donors (recalibrate targets after the first tests).
  • Ensure AI & Love, AI & Bias, AI & Data, YouTube Regrets campaigns in H1 include significant onboarding and donor conversion email components.
  • Track and test post-petition modal solicitations.
KR Lead: Will Easton