P2PULearningChallenges/Metrics
Final Metrics for Challenges on P2PU
Evaluation Goals and Metrics for Challenges
- The objective of these metrics is to assess whether or not Challenges on P2PU are a good way to begin creating educational content for Mozilla projects. Both quantitative and qualitative methods are used to determine value for Mozilla, P2PU, and the user.
- The focus is on overall value - We have content and we are moving it into this challenge framework. How is that going? How many people are completing our content (within this framework)? Conversion funnel. Completion rates.
- Conversion rate
- Users who hit the site
- N = not logged-in users (data from google analytics) accessing:
- School of Webcraft home page
- Challenge set home pages
- Challenge home pages
- Challenge Task pages
- Share of users who register but never start a challenge
- Registered users only --> Visit a challenge related page but no challenge is started
- Share of users who start a challenge
- Share of users who complete a challenge
- Only users who started a challenge at least 3 days prior to reporting
- Share who abandon a challenge
- Users who are inactive for 10 days
- Users who manually "leave" a challenge
- Share of users who complete more than one challenge (2, 3, 4, ...)
- Share of users who complete a series (=set) of challenges
- Number of Challenges completed per learner
- N = Users who completed at least one challenge
- Badges
- Number of Badges issued over time (organized by badge type)
- Number of badges issued per learner (badge categories: making, understanding)
- Mentors
- Number of Mentors over time (some measure of level of activity)
- Todo -> Check with John
- Qualitative Survey to ask people how they find the experience
- Laura to prepare
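As a rough illustration, the conversion funnel above (including the 3-day lag before counting completions and the 10-day inactivity rule for abandonment) could be computed from a per-user event log along these lines. This is a sketch only: the event names ("register", "start", "complete", "leave") and the record shape are assumptions, not the actual P2PU schema.

```python
from datetime import datetime, timedelta

def funnel_shares(events, now, inactivity_days=10, min_age_days=3):
    """events: iterable of (user_id, event_name, timestamp) tuples.

    Returns the funnel shares described above. Event names are
    hypothetical placeholders for whatever the platform logs.
    """
    registered, started, completed, abandoned = set(), set(), set(), set()
    last_seen, start_time = {}, {}
    for user, event, ts in events:
        last_seen[user] = max(last_seen.get(user, ts), ts)
        if event == "register":
            registered.add(user)
        elif event == "start":
            started.add(user)
            start_time.setdefault(user, ts)  # first challenge start
        elif event == "complete":
            completed.add(user)
        elif event == "leave":
            abandoned.add(user)
    # Abandoned = manually left, or inactive for 10+ days after starting.
    for user in started - completed:
        if now - last_seen[user] >= timedelta(days=inactivity_days):
            abandoned.add(user)
    # Only count completions for users who started >= 3 days before reporting.
    eligible = {u for u in started
                if now - start_time[u] >= timedelta(days=min_age_days)}

    def share(subset, base):
        return len(subset & base) / len(base) if base else 0.0

    return {
        "registered_never_started": share(registered - started, registered),
        "started": share(started, registered),
        "completed": share(completed & eligible, eligible),
        "abandoned": share(abandoned, started),
    }
```

The same per-user sets can also feed the "more than one challenge" and "completed a series" shares by counting completion events per user instead of collapsing them into a set.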
Hackasaurus Metrics
Understanding
- Number of Users who hit the site
- Number of Badges issued over time
- Number of people signing up for our email/group lists/pledges/campaigns/twitter/bookmarks
- Number of Hacktivity Kit downloads
Making
- Number of websites developed
- People using our templates
- Number of hacked webpages
- Number of Hackasaurus events
- Number of event participants
- Goggles activations (bookmarklet uses) per day
Innovating
- patches (or forks) from volunteer contributors (dev and curriculum)
- x # of hackasaurus "experiments" by community members
- localizations (Hacktivity Kit + web+ goggles)
Project Components:
Website:
- Number of people visiting
- XXX page views per day
- Number of people signing up for our email/group lists/pledges/campaigns/twitter/bookmarks
- page views per visit
- return visitors
- links clicked
- # of backlinks
- XXX activations (bookmarklet uses) per day
- XXX patches (or forks) from volunteer contributors
- XXX # of downloads by the end of 2012/ Q1 - I'd collect this monthly (or weekly)
- x# of Hacktivity Kit localizations
- or 2x the number of localizations from 2011
- x # of feedback questionnaires completed
- x # of webpages created as a result of challenges
- Share of users who start a (Hackasaurus) challenge
- Share who abandon a challenge
- Share of users who complete a challenge
- Share of users who complete more than one challenge (2, 3, 4, ...)
- Share of users who complete a series of challenges
- Number of Challenges completed per learner
- Number of Badges issued over time (organized by badge type)
- Number of badges issued per learner (making and understanding badges)
- Number of Mentors over time (some measure of level of activity)
- x # of countries running hack jams
- x # of hackasaurus "experiments" by community members
- x # of localizations
- x # of participants in Hackasaurus events
BRAINSTORMING METRICS:
Overall goals
> Evaluation Goals and Metrics for Challenges: The objective of these metrics is to assess whether or not Challenges on P2PU are a good way to begin creating educational content for Mozilla projects. Both quantitative and qualitative methods are used to determine value for Mozilla, P2PU, and the user.
- Two sets of metrics:
- NOT FOCUS -> Internal - Once within a challenge - what are the metrics that tell us how well someone is doing, and how to make the challenge features / UX better. Click patterns. Drop off points. Etc.
- FOCUS -> Overall value - We have content and we are moving it into this challenge framework. How is that going? How many people are completing our content (within this framework)? Conversion funnel. Completion rates.
Framing:
- Focus metrics on state change - attitude, awareness, actions taken
- Understanding - what increase in understanding/awareness of the open web and our programs (this is not as relevant for P2PU challenges specifically)
- Participation - how many people are participating, in what ways, how deeply
- Making - how are people improving their skills
- Innovation - how are people innovating (not as relevant?)
Can we currently measure all these on P2PU? What do the metrics look like for Hackasaurus (what do we have available to us there?)
CV: Probably not yet. But we can start considering ways to measure participation (stealth assessment) as well as understanding (manual assessment).
PS: Main comment -> Need to reduce number of metrics / let's track general traffic and then pick a few important ones (max 3-5) to drill down
Goals
Participation (beyond Reach)
How many people is Hackasaurus currently engaging? How many more might Challenges bring to the Hackasaurus Project? Historic, current, and predictive views using quantitative data.
Method: data analysis (Monitor for trends) – historical and current, weekly collection
Metrics (increasing depth of participation):
- Access
- Basic demographics (from Google Analytics)
- Referrer stats (from Google Analytics)
- Session Duration
- Number of total page views by language and country
- Conversion rate
- Users who hit the site
- Share of users who register but never start a challenge
- Share of users who start a challenge
- Share who abandon a challenge
- Share of users who complete a challenge
- Share of users who complete more than one challenge (2, 3, 4, ...)
- Share of users who complete a series of challenges
- Number of Challenges completed per learner
- Nice to have: Usage pattern over time (levels of activity / completion speed)
- Maybe hours / week over time
- Learning
- Number of Active Learners over time (ie those who complete a task once a week)
- Number of Challenge/Curriculum Completions over time (ie Learners who have completed all the Challenge/Curriculum tasks)
- Share of users who complete at least one challenge
- Ratio of users who started a challenge and completed it
- Click Stream
- Badges
- Number of Badges issued over time (organized by badge type)
- Number of badges issued per learner (making and understanding badges)
- what about collaborating = learning behavior?
- Mentoring
- Number of Mentors over time (some measure of level of activity)
- This can probably only be done through a survey, because we let mentors structure their communication however they want - a lot of their interaction happens outside of p2pu.org
- Conversion rates from participant to helper to mentor (deeper participation)
- sequence of comments > that is something we could measure
- Number of Gurus (e.g. users who have ability to issue valuable badges)
- (does this exist?) - Yes, during the first pilot of Badges on P2PU, they seeded the community with gurus (ie "Has-A-Badge" assessors). @Chloe @Philipp - Do you know how many "gurus" are active on the site?
- Peer Assessment (as a form of participation)
- Share of total users who participated in peer assessment (e.g. 20%)
- Share of peer to peer badges compared to overall badges (e.g. 15%)
- Number of badges that lack sufficient reviews
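The three peer-assessment metrics above could be derived from badge-award records roughly as follows. The record shape and the three-review threshold are assumptions for illustration, not the P2PU data model.

```python
def peer_assessment_stats(awards, all_users, min_reviews=3):
    """awards: list of dicts like
    {"badge": ..., "peer_issued": bool, "reviewers": [user_id, ...]}
    (a hypothetical shape). all_users: set of all user ids.
    """
    # Everyone who reviewed at least one badge submission.
    reviewers = {r for a in awards for r in a["reviewers"]}
    # Badges issued through peer-to-peer assessment.
    peer = [a for a in awards if a["peer_issued"]]
    # Badges that never reached the review threshold.
    under_reviewed = [a for a in awards if len(a["reviewers"]) < min_reviews]
    return {
        "peer_assessment_participation":
            len(reviewers) / len(all_users) if all_users else 0.0,
        "peer_badge_share": len(peer) / len(awards) if awards else 0.0,
        "badges_lacking_reviews": len(under_reviewed),
    }
```

The first two values are the example shares quoted above (e.g. 20% participation, 15% peer badges); the third is an absolute count, since it flags a backlog rather than a rate.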
Making
- Number of Badges (that are tied to making) issued over time
- Number of Links to participants work (gathering external links will allow us to see what/if people are making)
- Handle through survey questions
4: User Survey - Determine perceived value to users
Method: survey 3 months after launch
Sample strata
- Learners who abandoned a challenge (target problem areas)
- Learners who never started a challenge
- Learners who completed a challenge
- [future version] + contribution to getting a job / only for relevant challenges
- Contribution to personal satisfaction
- Contribution to social recognition
- Focus on state changes - changes in attitude, awareness, actions taken
- Did you tell your friends about this?
- Would you recommend it to a friend?
- .... need to do more work here
Method: Each learner is asked a single question after completing a challenge
Metrics
- + Learner Satisfaction
- + Challenge structure success (target problem areas)
- Help us make this challenge better!
- Did you think this challenge was a little silly – just right – too boring?
- Did you think this challenge was too hard – just right – too easy / simple?
ROI (Long Term)
How have donations increased or decreased? How has participation spread?
Method: data analysis (Quantitative from Mozilla and P2PU user data, donation data) – Ongoing after launch, long term.
Metrics
- + donation stream to p2pu and Mozilla
- + number of new signups
- - Staff Hours
- - Average Response Rate
- + Total Likes, One ups or RT on social media messaging marked #challenges
- Suggestion is to postpone this and add to later phase (agree)
- Risk that adding option to donate into the challenge will change the user's learning experience (we really want to measure if the learning works at this point - adding donations may influence the outcomes)
- - Average Challenge Completion Time (currently located within the Challenge metrics)
- - Average Task Completion Times
- + Number of new challenges by the community
Quantitative
Collecting these metrics will allow us to define meta metrics (ie number of learners vs non learners vs power learners or whatever)
Across the Board:
Metrics for Understanding and Awareness of Open Web and Mozilla Projects
- Basic demographics
- Number of total page views by language and country
- Referrer stats
- Click Stream
Participation Depth
- IP
- Session Duration & Clicks per Session
- Think time
- Conversion rate
- Share of users who register but never do anything
The Metrics Story: Although IP logging is a raw metric, with thousands of users it will be valuable to see how deep into Mozilla programming users are going. By cross-comparing IP logs between programs, Mozilla will have a better view of influence and participation depth across the board. An increase in session duration and clicks per session over time will further underline this viewpoint. Think time can be used to filter out users who simply browse. Decreasing negative conversion rates is important in showing the strength of programming.
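The think-time filter mentioned above could look something like this sketch: drop sessions whose median gap between clicks is too short to suggest real engagement. The input shape (a list of click timestamps per session, in seconds) and both thresholds are assumptions for illustration.

```python
from statistics import median

def engaged_sessions(sessions, min_clicks=3, min_think_time=5.0):
    """sessions: list of sessions, each a sorted list of click
    timestamps in seconds. Keeps only sessions with enough clicks
    and a median think time above the threshold."""
    kept = []
    for clicks in sessions:
        if len(clicks) < min_clicks:
            continue  # too few clicks to judge engagement
        gaps = [b - a for a, b in zip(clicks, clicks[1:])]
        if median(gaps) >= min_think_time:  # seconds spent per page
            kept.append(clicks)
    return kept
```

Clicks per session and session duration for the remaining sessions then measure depth among engaged users rather than drive-by traffic.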
Skill Improvement
- Number of Badges issued over time (organized by badge type)
- Number of Links to participants work (gathering external links will allow us to see what/if people are making)
Platform Specific
P2PU
Participation Depth
- Number of Mentors over time
- Conversion rates
- from participant to helper to mentor (deeper participation)
- Number of Active Learners over time (ie those who complete a task once a week)
- Share of users who complete at least one challenge
- Ratio of users who started a challenge and completed it
- Share of total users who participated in peer assessment (e.g. 20%)
- Share of peer to peer badges compared to overall badges (e.g. 15%)
- Number of Gurus (e.g. users who have ability to issue valuable badges)
- Number of badges that lack sufficient reviews
- Number of Challenge/Curriculum Completions over time (ie Learners who have completed all the Challenge/Curriculum tasks)
Attendees
- Steph
- Laura
- Philipp
- Chloe
- Need to extend Arlton's contract to work on Challenges UX – involved 3.5 months ago. P2PU hired 2 people, 1 for Challenge content and 1 for UX. Arlton and Jamie Curle.
- Was getting the UX person the most efficient way to use the funds (Philipp says no). Discuss in longer call (TBD).
- Need to figure out the longer term strategy for people
- SoW – funded early badges work, SoW Community Management, Webmaking 101 Challenges. 50% p2pu funding (Zuzel, Chloe, John paid with p2pu funds) and 50% from SoW funding.
- By January, launching Hackasaurus on P2PU – get Challenges rocking the house.
- Zuzel is superhuman -- really?
https://etherpad.mozilla.org/challenges-evaluation-goals
Laura to set up metric meeting with Jess. Consistent set of metrics! Invite Steph
How do the P2PU metrics line up with the Hackasaurus metrics? Which ones are aligned closely enough that we can compare them? Comparison chart.
Parse metrics – send out
Agreements
Challenge Fixes Board: Released Dec. 14
Implementation of Partner Account that allows Hackasaurus to run their challenges
Success looks like:
The list of agreed upon must haves and launch by mid January