Foundation/AI: Difference between revisions

5,323 bytes removed, 12 May 2020
(→‎2020 OKRs: removed table, converted to bulleted list.)
(→‎Trustworthy AI Brief V0.9: replaced with readme text and opening of mark's blog.)
<div style="display:block;-moz-border-radius:10px;background-color:#b7b9fa;padding:20px;margin-top:20px;">
<div style="display:block;-moz-border-radius:10px;background-color:#FFFFFF;padding:20px;margin-top:20px;">
= Background: Mozilla and Trustworthy AI =


''A downloadable version of the issue brief is available here: [https://mzl.la/AIIssueBrief https://mzl.la/AIIssueBrief]. For earlier versions, see [https://mzl.la/IssueBriefv01 v0.1], [https://drive.google.com/file/d/1o8bK5qmMYzABk9aEO21bjW3_y1vKuXgB/view?usp=sharing v0.6], and [https://mzl.la/IssueBriefV061 v0.61].''


'' Summary ''

In 2019, Mozilla Foundation decided that a significant portion of its internet health programs would focus on AI topics. We launched that work a little over a year ago, with a post arguing that [https://marksurman.commons.ca/2019/03/06/mozillaaiupdate/ if we want a healthy internet -- and a healthy digital society -- we need to make sure AI is trustworthy]. AI, and the large pools of data that fuel it, are central to how computing works today. If we want apps, social networks, online stores and digital government to serve us as people -- and as citizens -- we need to make sure the way we build with AI has things like privacy and fairness built in from the get-go. This brief offers an update and opens the door to collaboration from others.

Since writing that post, a number of us at Mozilla -- along with literally hundreds of partners and collaborators -- have been exploring two questions: What do we really mean by ‘trustworthy AI’? And what do we want to do about it?

Current debates about AI often skip over a critical question: is AI enriching the lives of human beings? AI has immense potential to improve our quality of life: teeing up the perfect song, optimizing the delivery of goods, solving medical mysteries. But adding AI to the digital products we use every day can equally compromise our security, safety and privacy. Time and again, concerning stories about AI, big data and targeted marketing hit the news. The public is losing trust in big tech yet has no alternatives. There is much at stake.

Mozilla believes we need to ensure that the use of AI in consumer technology enriches the lives of human beings rather than harms them. We need to build more trustworthy AI. For us, this means two things: personal agency is a core part of how AI is built and integrated, and corporate accountability is real and enforced. This will take AI in a direction different from where it is headed now.

'''How do we collaboratively make trustworthy AI a reality?'''

We think part of the answer lies in collaborating and gathering input. In May 2020, we launched a request for comment on v0.9 of Mozilla’s Trustworthy AI Whitepaper -- and on the accompanying theory of change (see below) that outlines the things we think need to happen.

The best way to make this happen is to work like a movement: collaborating with citizens, companies, technologists, governments and organizations around the world working to make ‘trustworthy AI’ a reality. This is Mozilla’s approach. We already have collaborative projects underway in four areas:

* Helping developers build more trustworthy AI, collaborating with Pierre Omidyar and others to put $3.5 million behind professors integrating ethics into the computer science curriculum.

* Generating interest and momentum around trustworthy AI technology, backing innovators working on ideas like data trusts and working on open source voice technology.

* Building consumer demand -- and encouraging consumers to be demanding -- starting with resources like our Privacy Not Included guide and pushing platforms to tackle misinformation.

* Encouraging governments to promote trustworthy AI, including work by Mozilla Fellows to map out a policy and litigation agenda that taps into current momentum in Europe.

These projects are just a sample -- and just a start -- of how we hope to move the ball forward through this collaborative strategy. We have more in the works.

'' What is trustworthy AI and why? ''

We have chosen to use the term AI because it resonates with a broad audience, is used extensively by industry and policymakers, and is currently at the center of critical debate about the future of technology. However, we acknowledge that the term has come to represent a broad range of fuzzy, abstract ideas. Mozilla’s definition of AI includes everything from algorithms and automation to complex, responsive machine learning systems and the social actors involved in maintaining those systems.

Mozilla is working towards what we call trustworthy AI, a term used by the European High-Level Expert Group on AI. '''Mozilla defines trustworthy AI as AI that is demonstrably worthy of trust. Privacy, transparency, and human well-being are key considerations, and there is accountability for harms.'''

[https://foundation.mozilla.org/en/initiatives/2020-collaborative-roadmap-trustworthy-ai/ 2020 Collaborative Roadmap to Trustworthy AI]

Mozilla’s theory of change (below) is a detailed map for arriving at more trustworthy AI. It focuses on AI in consumer technology: general-purpose internet products and services aimed at a wide audience. This includes products and services from social platforms, apps and search engines, to e-commerce and ride sharing technologies, to smart home devices, voice assistants and wearables.

Mozilla’s roots are as a community driven organization that works with others. We are constantly looking for allies and collaborators to work with on our trustworthy AI efforts. As a part of this, we are looking for AI experts to join our program advisory board.
 
 
'' What is trustworthy? ''

Our definition of trustworthy AI is encompassed by two key concepts: agency and accountability. We will know we have built and designed AI that is serving rather than harming humanity when:

* All AI is designed with personal agency in mind. Privacy, transparency and human well-being are key considerations; and

* Companies are held to account when their AI systems make discriminatory decisions, abuse data, or make people unsafe.

Mozilla is part of a growing chorus of voices calling for a better direction for AI. Dozens of groups have put out principles and guidelines describing what this might look like. We’re excited to see this momentum and to work with others to make this vision a reality. See the AI goals framework in the appendix.
 
 
'' What’s at stake? ''

AI is playing a role in everything from directing our attention, to deciding who gets mortgages, to solving complex human problems. This will have a big impact on humanity. The stakes include:

* Privacy: Our personal data powers everything from traffic maps to targeted advertising. Trustworthy AI should let people decide how their data is used and what decisions are made with it.

* Fairness: We’ve seen time and again that historical bias can show up in automated decision making. To effectively address discrimination, we need to look closely at the goals and data that fuel our AI (a minimal illustrative sketch follows this list).

* Trust: Algorithms on sites like YouTube often push people towards extreme, misleading content. Overhauling these content recommendation systems could go a long way toward curbing misinformation.

* Safety: Experts have raised the alarm that AI could increase security risks and cyber crime. Platform developers will need to create stronger measures to protect our data and personal security.

* Transparency: Automated decisions can have huge personal impact, yet the reasons for those decisions are often opaque. We need breakthroughs in explainability and transparency to protect users.
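As a purely illustrative aside on the fairness point above: one simple symptom of historical bias in automated decision making is a gap in positive-decision rates between groups. The sketch below is a hypothetical example -- the function name and the data are invented for illustration and are not part of any Mozilla tool -- of the kind of first-pass check an auditor might run.

<syntaxhighlight lang="python">
# Illustrative sketch only: a toy audit for one symptom of biased automated
# decision making -- unequal approval rates across groups (a "demographic parity gap").
# The function and the data below are hypothetical, not a Mozilla tool or dataset.

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in positive-decision rates between groups."""
    totals = {}
    for outcome, group in zip(decisions, groups):
        positives, count = totals.get(group, (0, 0))
        totals[group] = (positives + outcome, count + 1)
    rates = [positives / count for positives, count in totals.values()]
    return max(rates) - min(rates)

# Hypothetical loan decisions that reproduce a bias present in historical data:
# group "a" is approved 80% of the time, group "b" only 20% of the time.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

print(demographic_parity_gap(decisions, groups))  # prints ~0.6: a large gap worth investigating
</syntaxhighlight>

A gap near zero does not prove a system is fair, and a large gap does not by itself prove discrimination, but checks like this are a starting point for the closer look at goals and data called for above.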
 
Many people do not understand how AI regularly touches our lives and feel powerless in the face of these systems. Mozilla is dedicated to making sure the public understands that we can and must have a say in when machines are used to make important decisions -- and shape how those decisions are made.
 
 
'' How do we move the ball forward? ''

1. Help developers build more trustworthy AI.

Goal: developers increasingly build things using trustworthy AI guidelines and technologies.

What we’re doing now: working with professors at 17 universities across the US to develop curriculum on ethics and responsible design for computer science undergraduates.

Where we need help: we are looking for partners to scale this work in Europe and Asia, and to find ways to work with developers, designers and project managers already working in the industry.

2. Generate interest and momentum around trustworthy AI technology.

Goal: trustworthy AI products and services (personal agents, data trusts, offline data, etc.) are increasingly embraced by early adopters and attract investment.

What we’re doing now: developing open source voice technology for others to build on, and supporting Mozilla Fellows and others doing early pilot work on concepts like data trusts.

Where we need help: we’re looking for people with novel yet pragmatic ideas on how to make trustworthy AI a reality. We also want to meet and learn from investors in this space.

3. Build consumer demand -- and encourage consumers to be demanding.

Goal: consumers choose trustworthy products when available and call for them when they aren’t.

What we’re doing now: highlighting trustworthy products through our Privacy Not Included buyer’s guide, and pushing platforms like YouTube and PayPal for AI and data related product changes.

Where we need help: we’re looking for more trustworthy products to highlight, and for people both inside and outside major tech companies who can help us drive product improvements.

4. Encourage governments to promote trustworthy AI.

Goal: new and existing laws are used to make the AI ecosystem more trustworthy.

What we’re doing now: building more momentum for trustworthy AI and better data protection in Europe through Mozilla Fellows, partner orgs and lobbying across the region.

Where we need help: we’re looking for additional partners to help us sharpen our thinking on where we can have the most impact on the current political window of opportunity in Europe.




'' About Mozilla ''


Mozilla exists to guard the open nature of the internet and to ensure it remains a global public resource, open and accessible to all. Founded as a community open source project in 1998, Mozilla currently consists of two organizations: the 501(c)3 Mozilla Foundation, which leads our movement building work; and its wholly owned subsidiary, the Mozilla Corporation, which leads our market-based work. The two organizations work in concert with each other and a global community of tens of thousands of volunteers under the single banner: Mozilla.

The ‘trustworthy AI’ activities outlined in this document are primarily a part of the movement activities housed at the Mozilla Foundation -- efforts to work with allies around the world to build momentum for a healthier digital world. These include: thought leadership efforts like the Internet Health Report and the annual Mozilla Festival; $7M in fellowships and awards for technologists, policymakers, researchers and artists; and campaigns to mobilize public awareness and demand for more responsible tech products. Approximately 60% of the $25M/year invested in these efforts is focused on trustworthy AI.


Mozilla’s roots are as a collaborative, community driven organization. We are constantly looking for allies and collaborators to work with on our trustworthy AI efforts.


For more on Mozilla’s values, see: [https://www.mozilla.org/en-US/about/manifesto/]. Our Trustworthy AI [...] and building an internet that enriches the lives of individual human beings (principle 3).


For more on Trustworthy AI programs, see [https://wiki.mozilla.org/Foundation/AI https://wiki.mozilla.org/Foundation/AI]

</div>


<div style="display:block;-moz-border-radius:10px;background-color:#666666;padding:20px;margin-top:20px;">