Innovation/Open Source Experiments
Mozilla has a new “open source experiments” program. Here’s what we’re up to and how you can participate.
What is this?
For those outside Mozilla, it might be confusing that we’re doing this, considering that Mozilla already has great resources on how to do open collaboration. Check out the Working Open Project Guide for a good example. So what’s new with open source experiments?
Really briefly: we’re making software and doing open source, but we’re mainly trying new ways to do the meta stuff: tooling, infrastructure, organization, and incentives. Some questions we’re working on now:
- How can C-to-Rust translation help maintainers of old codebases?
- What can sentiment analysis of project discussions predict about contributor engagement in the future?
- When people leave or cut back on their open source participation, what do they do instead, and what does that activity do for them that open source isn’t?
- Are users interested in collaborating on text classification in order to understand the "filter bubbles" they participate in?
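To make the sentiment-analysis question concrete, here is a minimal sketch of the kind of signal such an experiment might compute. The lexicon, the sample comments, and the per-author averaging are all invented for illustration; a real study would use a trained sentiment model and data pulled from a live issue tracker.

```python
import re

# Hypothetical toy lexicon; a real experiment would use a trained model.
POSITIVE = {"thanks", "great", "works", "nice", "helpful"}
NEGATIVE = {"broken", "wontfix", "stale", "frustrating", "abandoned"}

def comment_score(text: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def engagement_signal(comments_by_author: dict) -> dict:
    """Average sentiment per author; the hypothesis under test is that a
    sustained downward trend precedes a contributor cutting back."""
    return {
        author: sum(comment_score(c) for c in comments) / len(comments)
        for author, comments in comments_by_author.items()
        if comments
    }

# Invented sample data standing in for issue-tracker comments.
sample = {
    "alice": ["thanks, this works great", "nice fix, very helpful"],
    "bob": ["this is broken again", "marking wontfix, project feels abandoned"],
}
signal = engagement_signal(sample)
```

A signal like this is only the measurement half of the question; the interesting part is checking it against what contributors actually do afterward.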
In the future, we’re interested in finding answers to questions such as...
- How does literate programming affect the questions and other interactions that users have with a project?
- How do contribution patterns change when bots join a project? How much more can bots do?
- Can we provide a simple way for a downstream developer to quickly visit a project and add a regression test, without investing a lot of time in setting up a build environment?
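As a sketch of the bot question, here is the kind of pure decision function a triage bot might run over issue-tracker data; measuring how contribution patterns shift once the bot starts commenting would be the actual experiment. The issue fields, the 90-day threshold, and the sample data are all made up for illustration.

```python
from datetime import datetime, timedelta

# Invented threshold; a real bot would make this configurable.
STALE_AFTER = timedelta(days=90)

def issues_to_nudge(issues, now):
    """Return ids of issues with no activity for STALE_AFTER and no
    maintainer reply: the cases where a bot comment might change
    contribution patterns, which is what the experiment would measure."""
    return [
        issue["id"]
        for issue in issues
        if now - issue["last_activity"] > STALE_AFTER
        and not issue["has_maintainer_reply"]
    ]

# Invented sample data standing in for a real tracker's API response.
now = datetime(2017, 6, 1)
issues = [
    {"id": 1, "last_activity": datetime(2017, 1, 5), "has_maintainer_reply": False},
    {"id": 2, "last_activity": datetime(2017, 5, 20), "has_maintainer_reply": False},
    {"id": 3, "last_activity": datetime(2016, 12, 1), "has_maintainer_reply": True},
]
nudge = issues_to_nudge(issues, now)
```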
(That’s just the kind of thing we’re looking for. Other topics welcome.)
From a day-to-day point of view, open source still basically works. But our community is still living off the economic gains from software and from Internet collaboration. It’s like we’re loggers in a primeval forest, where you don’t have to be especially good at running a logging camp to make a living. But open source is on track to be displaced by other sets of norms and processes. Open source is accumulating long-term problems such as
Burnout: People are putting in unsustainable amounts of time, so projects look artificially good.
Bugs: Quinn Norton said it best: everything is broken. Open source got publicity by being “slightly less broken” than the Windows and Unix systems of the 1990s—but has open source quality peaked? Bugs tend to pile up over years.
Bias: We’re not using all the people we could be using, for all the things they could be doing for us. This is not just a human equal-opportunity issue. We’re facing the real risk that a closed platform will provide better safety and opportunity for underrepresented demographic groups than the open web does.
As alternative movements outside the open-source scene learn to fix those problems and attract contributors (contributors who may not get the long-term benefits of open source, but who do get their immediate problems fixed), the open Internet loses.
In important ways, open source is already starting to lose. New participation models borrow concepts from open source, but use Internet-enabled collaboration to build network effects and lock-in. For example, the “gig economy” and gamified data collection can take advantage of lightweight contributions without building common resources for all contributors. And software patent holders have learned how to participate in open source, but divert the wealth created by collaboration into existing rent-seeking models.
We need to understand what the potential pool of contributors is, what those people need from open source, and how to expand the number of people for whom open source is a rewarding activity. And the best ways to do that may not involve just doing open source “best practices” as usual. Open source is famously being used to transform other industries, but the way that open source projects work is remarkably risk-averse and lore-driven. (There has been one fundamental innovation in 25 years, and nobody is giving Larry McVoy any credit for it.)
People participate in open source for a lot of reasons, some of which sound hard-nosed and business-like (commoditize the complement! Economic signaling of developer skill!) and some of which sound social or mystical. But we can’t take that participation for granted. Future Mozilla projects will need to fit into an ecosystem that works for everybody: not just the end users and code contributors found in an all-in-one user-facing open source product, but also a developer community of practice that includes “downstream” integrators and web developers.
Mozilla at its best is no worse at writing software than other open-source organizations. But we know that in a lot of ways we’re doing the traditional, safe practices just like the rest of the open-source scene. Where we have the opportunity to try something transformative, we are going to take it.
We’re going to try to learn more about participation by doing stand-alone experiments that could be a win for new projects, but would be bad ideas to apply to existing projects until we learn what they do. We are now looking for small-scale open-source projects that can move rapidly, so that experiments don’t cause trouble for large-scale projects with non-technical users. Instead of having to be conservative about development practices in order to do a project acceptably well, we can take on a project that would be useful if it happened but wouldn’t break things for users or for the organization if it didn’t.
If one of these projects does graduate from experiment to a mostly user-focused project, we’ll consider it a success, cut it loose, and start another experiment. In order to produce meaningful results, we can look for low-profile but real niches where we can build software with participation opportunities.
How it works
Every experiment needs three top-level items in order to launch.
Project: a meaningful open source code base that we can work with, big enough to be useful but small enough that we can take risks.
Principal investigator: the person directly looking for answers to the questions. May or may not be the same person as the project leader on the open source project.
Question(s): what we want to learn from the project.
Criteria for evaluating a project
Is this project ambitious enough that it could be useful and produce meaningful results?
On what technologies does this project depend? Do we have a good working relationship with the maintainers of our dependencies?
Criteria for evaluating a principal investigator
Can this person operate independently and communicate clearly?
Criteria for evaluating a question
How would possible answers from this question affect how open source projects work?
What Mozilla teams or programs need an answer to this question? How important is the answer to them?
The flow is relatively simple.
- Write and approve a proposal (here is a sample Innovation/Proposal Outline)
- Set up the experiment (Innovation/Experiment Setup)
- Run and document experiment
- Evaluate and transfer as needed
Our first three experiments each started as a complete package: principal investigator, project, and questions together. As we expand, we are also going to be looking for stand-alone items such as
- Orphan or new open-source projects of manageable size that are good candidates to experiment on
- Principal investigators who have projects or questions but not a complete package
- Questions that are important to Mozilla teams that do not have a project or PI in mind
- Developers who want to take on a small contract project to build software needed for an experiment
We will track these items and assemble complete experiments. In some cases there will be a principal investigator plus another contractor, on separate contracts. (For example, a project requiring a complex bot might have a PI focused on the codebase being developed and a bot developer/administrator working mostly in a separate bot repository.)
Deliverables for experiments should be simple and self-contained. For example, an ideal deliverable is a report or blog post. Software projects for experiments will run more smoothly if the deliverable is a set of data or results _about_ the software, and not the whole project.
Organizational limits on a project include:
- Total budget per contract: less than $25,000 for small exploratory projects, larger for longer or multi-experimenter projects
- Duration of project: 6 months or less (typical)
- Payment structure: paid by milestones, not hourly
Sticking within those limits will minimize overhead. Although our first three experiments are running within one calendar quarter, future experiments will be longer in order to increase opportunities for engagement.
If you’re interested, please mail me: email@example.com.
Related Mozilla programs
We encourage people who are interested in open source experiments to sign up for other (independently operated) programs at Mozilla, too.
- TechSpeakers: Increase developer awareness and adoption of the Web, Firefox, and Mozilla through a strong community-driven technical speaker development program.
- #Mozfest: an annual gathering of passionate thinkers and inventors from around the world who meet to learn from each other and help forge the future of the web.