Foundation/2022/OKRs


Mozilla Foundation 2022 OKRs

Mozilla Foundation is fueling a movement for a healthy internet.

This document outlines the Mozilla Foundation’s organizational objectives and key results (OKRs) for 2022.

  • Objectives = “What do we want to do and why?”
  • Key Results = “How will we know if we’re successful?”

These objectives were developed based on our work in 2020 and 2021, and are linked to our multi-year Trustworthy AI theory of change, with links to specific short-term outcomes.

In 2022, the Foundation will also add an OKR for Diversity, Equity and Inclusion, based on an action plan developed following our Racial Equity and Belonging Audit (REBA) and collaborative decision-making process.


Org-wide OKRs

2022 OKRs are informed by the Mozilla Foundation's three-year narrative arcs.

OKR 1: Making AI Transparency the Norm: Test AI transparency "best practices" to increase adoption by builders and policymakers.

Responsible: Ashley Boyd

Key Results and KR Leads
1.1 150 builders involved in the creation of the Best Practices Framework

Motivation:
We want to learn from and seed a global community of builders committed to interrogating current practices and championing trustworthy AI within product teams at tech companies.

Sample Activities:

  • Transparency regulation in India
  • Using ‘computable contracts’ to support AI transparency
  • ‘Meaningful AI Transparency’ research
  • US-focused public awareness/policy push on systemic transparency
  • Mozilla Open Source Auditing Project (M-OAT) research/paper


KR Lead: Temi Popo
1.2 5 communities use RegretsReporter data as a platform to test its relevance

Motivation:
In 2021 we tested whether data donation was effective in driving policy change related to AI transparency in the EU. In 2022, we’ll use RegretsReporter in more regions and with specific communities to test its impact.

Sample Activities:

  • Investigating potential for data donation tools to drive policy change
  • Chico/Vero Instituto using RegretsReporter to support a YouTube investigation ahead of the Brazilian election in August
  • Collaboration with GLAAD on an investigation into what YouTube is recommending to LGBTQ+ individuals
KR Lead: Brandi Geurkink
1.3 25 bipartisan policymakers endorse one or more aspects of our US platform transparency campaign

Motivation:
We’re aiming to grow public awareness about the need for ad transparency and researcher access in the US to spark action by regulators.

Sample Activities:

  • Privacy Not Included: Review of kids platforms
  • Partnerships with civil society orgs - Women’s March, MomsRising, etc.
  • US public message development + testing, paid advertising + marketing partnerships, and earned media/PR
  • Advocacy research monitoring platform activities
  • Monitoring bias in dating apps


KR Lead: Carys Afoko / Shore Consulting

OKR 2: Modeling Good Data Stewardship: Accelerate more equitable data governance alternatives as a way to advance trustworthy AI.

Responsible: J Bob Alotta

Key Results and KR Leads
2.1 4 DFL Prototypes reach 20k users, demonstrating solutions to key responsible data governance challenges

Motivation:
We’re building on the success of the first DFL prototypes, with a specific focus on data portability and data use license work via the Infrastructure Fund.

Sample Activities:

  • Work on equitable data set curation and management
  • Work on building and evaluating data collaboratives
  • Support and amplification of 2022 DFL Prototype Cohort
KR Lead: Marie Goumballa
2.2 60% increase in funding to data governance projects led by historically underfunded communities globally and/or teams in the global majority

Motivation:
We’re investing in projects grounded in different systems of thought: projects that help redefine "governance," shift the value of data away from current profit-hoarding centers, or reshape engineering practice around a different notion of collectivism.

Sample Activities:

  • Data Futures infrastructure grants on indigenous data sovereignty
  • Prototype grants
  • CMA Grants
  • USAID - Strengthening Data Ecosystems
KR Lead: Jessica Gonzalez-Wagner and Kofi Yeboah
2.3 Pan-Mozilla common policy position and advocacy narrative published

Motivation:
Our prototypes push the edges of regulatory possibility and bridge the theoretical gap for regulators. We’ll clarify the interplay between policy and product by translating our values and goals into technical and legal product design principles.

Sample Activities:

  • Alternative Data Governance Playbook
  • Better Data Governance policy paper
  • Internet Health Report podcast on data governance innovations in the builder & policy space
  • Exploration of data donation tools and policies
KR Lead: Max Gahntz and Udbhav Tiwari

OKR 3: Mitigating AI bias: Accelerate the impact of people working to mitigate bias in AI.

Responsible: Ashley Boyd

Key Results and KR Leads
3.1 50 technologists take part in a Common Voice Inclusion-Performant Automatic Speech Recognition competition

Motivation:
The size and diversity of the Common Voice platform is one way it supports bias mitigation in the voice technology space. It also serves as an opportunity to develop practical frameworks and new industry norms for assessing diversity in Voice AI training sets.

Sample Activities:

  • CV competition (launch in May, runs June/July)
  • MarComms project on reaching ‘builder’ audiences
  • Work on equitable data set curation and management
  • Common Voice KiSwahili Contribute-a-thon
KR Lead: EM Lewis-Jong
3.2 25% increase in press mentions of open source projects focused on bias mitigation that we have supported

Motivation:
Mozilla Tech Fund grantees plus Senior Fellows representing a mix of disciplines (policy, campaign, builders, etc.) can accelerate this field and provide pathways for future grantmaking for Mozilla and others.

Sample Activities:

  • Mozilla Tech Fund projects focused on bias and transparency
  • Mozilla Open Source Auditing Project
  • Trustworthy AI Working Groups
KR Lead: Shandukani Mulaudzi
3.3 25% increase in contributors to RCS Playbook from India/Kenya/South Africa to help us understand bias in new contexts

Motivation:
The expansion of the Responsible Computer Science Challenge program into new geographies will enable us to diversify the RCS Playbook to include what bias in AI looks like in contexts outside of North America.

Sample Activities:

  • RCS Landscape analyses in Kenya, South Africa, and India completed in Q2.
  • RCS community building and outreach in new geographies in Q3.
  • Cohort meetings in new geographies launch at the end of 2022.
KR Lead: Crystal Lee

OKR 4: Growing Across Movements: Strengthen partnership with diverse movements to deepen intersections between their primary issues and ours, including trustworthy AI.

Responsible: J Bob Alotta

Key Results and KR Leads
4.1 We have defined engagement pathways informed by piloting the MozFest Plaza platform and its data

Motivation:
We need our tools and technology to support our growth across movements: to make decisions, stay connected with others, comprehend the field, evaluate our impact, fundraise, etc.

Sample Activities:

  • Tech for Movement Building Strategy
  • MozFest Platform / digital ecosystem
  • Movement Building compass and AI intersections launched
  • MEL(D) frameworks piloted with 2 programs and templated for others
  • Engagement team digital engagement, e.g. grassroots fundraising and campaigns
KR Lead: Marc Walsh and Anil Kanji
4.2 In 3 geographies, 4 or more programmatic initiatives are implemented in concert

Motivation:
By building geographically specific bodies of work that leverage multiple programmatic initiatives, we can be more impactful, signal our commitment and bring meaning to our “global” claim. We’ll focus first on Kenya, East Africa, India, South Africa, US, Brazil, Germany and the UK.

Sample Activities:

  • Develop geography-specific strategies in Kenya, East Africa, India, South Africa, the US, Brazil, Germany, and the UK
  • RCS teaching responsible tech (w/HBCUs, MSIs and BIPOC scholars from PWIs)
  • Landscape analysis in India, Kenya & S. Africa for RCS expansion in each country
  • Experimental investments and PO pipeline grants
KR Lead: Solana Larsen and Koliwe Majama
4.3 100% of CFPs are launched with an accompaniment strategy

Motivation:
Accompaniment is everything we offer beyond the grant that enables our fellows, grantee partners and community members to thrive. It’s a way to live our values, grow on-the-ground capacity, and build trust with our partners. In turn, it becomes a reason for folks to invest in us.

Sample Activities:

  • Define accompaniment strategy as a key component of F&A strategy
  • MozFest nodal events, and plaza as virtual, year round platform for communities to connect
  • TBD: Facilitative Leadership Program (The goal of FLiP is to share the methods used by the MozFest team to help community members become confident conveners, organizers, etc.)
KR Lead: Janice Wait and Stephanie Wright

OKR 5: Organizational Effectiveness: Enhance our organizational systems and capabilities to support more data-informed decision-making.

Responsible: Angela Plohman

Key Results and KR Leads
5.1 MEL framework piloted for RCS and Mradi to enable us to measure and report on their impact

Motivation:
We need systems to better understand our impact. We’ll start by developing Measurement, Evaluation, Learning and Data (MELD) capacity that can be supported by the Movement Building Tech Strategy.

Sample Activities:

  • Develop MEL frameworks for RCS and Mradi in collaboration with stakeholders
  • Create and pilot a template MEL framework for other programs
  • Create an action plan for training and capacity building
  • Plan for how we will establish initial indicators for short-term outcomes (STOs) in the theory of change (ToC)
KR Lead: Lainie DeCoursy
5.2 We’re tracking donor conversion, growth, and retention in Salesforce on a monthly basis to develop strategies for growth

Motivation:
We need a clearer, more helpful picture of our fundraising health, including how many new donors we acquire, retain, and upgrade each year. This will help us make informed decisions about focused or further investment for fundraising efforts.

Sample Activities:

  • Develop structure (incl. roles) for best measuring retention, conversion, and growth
  • Ensure fundraising emails can solicit donations based on a donor’s last or largest gift amount
  • Segment existing/potential supporters so we can develop more relevant solicitations
  • Track and report on performance in a more granular and sophisticated way
KR Lead: Juan Barami
5.3 20% increase in positive responses to the engagement survey question, “The technology we use at MoFo helps me do my best work”

Motivation:
We want our technology to supercharge our work. Right now it’s an active blocker to org effectiveness, revenue growth and staff getting their best work done.

Sample Activities:

  • Tech assessment with BuildTank
  • Create technology strategy for movement building
  • Document toolkits for our tech and resource stacks
  • Onboard three+ programs/teams into the CRM for active relationship management
KR Lead: Jackie Lu