Drumbeat/MoJo/hackfest/berlin/projects/ATTN-SPAN


Project Plan

Project Information

Project Name: ATTN-SPAN

Project Lead: Dan Schultz

More Info: Final Project Proposal

Demo: Demo Page

Source Code: Github

Plan

Minimum Viable Product (Hacktoberfest)

Goals for the end of Hacktoberfest:

  1. A system that accepts a transcribed video clip with identified key "moments" and calculates and stores segments from those moments (e.g. "Person A spoke from 1:02 to 1:33"); a rough sketch of that calculation follows this list.
  2. An API that allows a system to retrieve a set of video clips related to a block of text (and, optionally, tailored to user interests).
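
As a minimal sketch of goal 1, the following groups speaker-change "moments" into segments. The moment shape and the "speaker_change" event name are assumptions for illustration, not the project's actual schema:

  # Sketch: derive segments from speaker-change moments.
  # A "moment" is assumed to be (time_in_seconds, event, value);
  # the real schema may differ.

  def moments_to_segments(moments, video_duration):
      """Turn speaker-change moments into (speaker, start, end) segments."""
      changes = sorted(m for m in moments if m[1] == "speaker_change")
      segments = []
      for i, (start, _, speaker) in enumerate(changes):
          # A segment runs until the next speaker change (or end of video).
          end = changes[i + 1][0] if i + 1 < len(changes) else video_duration
          segments.append({"speaker": speaker, "start": start, "end": end})
      return segments

  # Example: Person A speaks from 62s (1:02) to 93s (1:33).
  moments = [(62, "speaker_change", "Person A"),
             (93, "speaker_change", "Person B")]
  print(moments_to_segments(moments, video_duration=120))
  # [{'speaker': 'Person A', 'start': 62, 'end': 93},
  #  {'speaker': 'Person B', 'start': 93, 'end': 120}]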

Use cases specific to data generation and metadata extraction have been separated out into the Meta Meta project.

Schema

Project Needs

Pending needs: I could use:

  • Better NLP toolkits and techniques for identifying key concepts and actors (a baseline sketch follows this list)
  • Advice on serving / streaming video
  • Help with the front-end design (I'm not known for my design abilities)
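
For the NLP need above, here is a minimal baseline for pulling actors out of a transcript with NLTK, one commonly used toolkit; this is an illustration rather than a committed approach, and it assumes the NLTK data packages noted in the comment have been downloaded:

  # Baseline named-entity extraction with NLTK. Requires the punkt,
  # averaged_perceptron_tagger, maxent_ne_chunker, and words data
  # packages (install each via nltk.download("<name>")).
  import nltk

  def extract_actors(transcript):
      """Return named people/organizations mentioned in a transcript."""
      tokens = nltk.word_tokenize(transcript)
      tree = nltk.ne_chunk(nltk.pos_tag(tokens))
      return [" ".join(word for word, tag in subtree.leaves())
              for subtree in tree
              if hasattr(subtree, "label")
              and subtree.label() in ("PERSON", "ORGANIZATION")]

  print(extract_actors("Senator Smith questioned the EPA about the budget."))
  # e.g. ['Smith', 'EPA']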

Progress

Hackfest

Day 1

  • Redesigned schema based on new direction (breaking away from MetaVid, moving into a better way of tagging and understanding the shape of videos)
  • Began implementing new models and updating the schema generation script
  • Broke the workhorse tasks (video analysis / metadata extraction / OCR / etc.) into a separate project

Day 2

  • Continued implementing the new models
  • Came to the conclusion that it would make more sense to focus first on the Meta Meta project, to ensure a great data pool.

Project Status

Currently working features include:

  • Schema is developed and the generation script is updated
  • Models are for the most part updated (a rough sketch follows this list). They include:
    • Users - allow for accounts.
    • Interests - allow users to define interests which will be used to help tailor video segment recommendations.
    • Videos - store actual video files.
    • Segments - sections of video related to a specific topic, theme, or person.
    • Moments - moments in a video which reflect a specific event, such as the use of a key word or phrase, laughter, or a gavel hit.
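
A minimal sketch of these models, assuming a Django-style ORM; the framework choice and all field names here are illustrative guesses, not the project's actual code:

  # Hypothetical Django-style models mirroring the list above.
  from django.db import models
  from django.contrib.auth.models import User  # Users: accounts

  class Interest(models.Model):
      # Interests: used to tailor segment recommendations to a user.
      user = models.ForeignKey(User, on_delete=models.CASCADE)
      topic = models.CharField(max_length=255)

  class Video(models.Model):
      # Videos: the actual video files.
      file = models.FileField(upload_to="videos/")
      duration = models.FloatField()  # seconds

  class Segment(models.Model):
      # Segments: a section of video tied to a topic, theme, or person.
      video = models.ForeignKey(Video, on_delete=models.CASCADE)
      topic = models.CharField(max_length=255)
      start = models.FloatField()  # seconds
      end = models.FloatField()    # seconds

  class Moment(models.Model):
      # Moments: a point in a video reflecting a specific event
      # (a key word or phrase, laughter, a gavel hit, ...).
      video = models.ForeignKey(Video, on_delete=models.CASCADE)
      time = models.FloatField()   # seconds
      event = models.CharField(max_length=64)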

Collaborators

The project has so far been something of a one-guy show (unfortunately), but this is largely because it was tabled for the majority of the hackfest.

Next steps

Timeline

I'll be continuing to develop this as part of my thesis, targeting January 2012 for a fleshed-out, robust product. It will take the form of an in-page bookmarklet (i.e. the code can be embedded either by the user via a bookmarklet click or by the organization itself), and possibly a browser extension.

As the project develops it will be useful to test it in several fairly specific contexts to understand its potential, get feedback, and see how it shapes the news consumption experience:

  • Newsrooms with wider coverage (e.g. New York Times) to see how well a national article can be personalized with footage of the reader's own representatives
  • Newsrooms with more niche coverage (e.g. The Boston Globe, Zeit) to see how well a local article can be supplemented with footage of local reps
  • Newsrooms with international coverage (e.g. BBC, Guardian, Al Jazeera) to see how well an international article can be supplemented with footage of key participants
  • Users living in a specific area (e.g. a large city like Boston, or a small town) who use a variety of information sources to see how well more generic articles can be personalized and supplemented.

Technical

The only roadblock right now is the information extraction algorithms, which will be handled by Meta Meta shortly and will open the door for at least a prototype. The backend (model / DB) infrastructure is already fairly developed; once extraction is handled, what remains is to design and implement the UX. A naive sketch of the matching step follows.
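
To make the matching step concrete, here is a naive illustration of what the prototype would need once extraction is in place; the scoring and data shapes below are placeholders, not a design decision:

  # Naive matching of article text to stored segments: keep segments
  # whose topic appears in the article, boosting topics the user has
  # declared as interests.

  def rank_segments(article_text, segments, interests):
      """Return relevant segments, best match first."""
      text = article_text.lower()
      wanted = {i.lower() for i in interests}
      scored = []
      for seg in segments:
          topic = seg["topic"].lower()
          if topic in text:
              scored.append((2.0 if topic in wanted else 1.0, seg))
      return [seg for score, seg in sorted(scored, key=lambda s: -s[0])]

  segments = [
      {"topic": "budget", "video": "hearing.ogv", "start": 62, "end": 93},
      {"topic": "healthcare", "video": "hearing.ogv", "start": 93, "end": 120},
  ]
  print(rank_segments("The budget debate continues...", segments, ["budget"]))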