Intellego/Meetings/Status/2014-01-23

Meeting Details

  • Etherpad: https://intellego.etherpad.mozilla.org/ep/pad/view/ro.8xafZ2WPK5AlVPYWfo0Do/rev.433

Talking Points

  • Action item follow-up
  • Beta testing in Q1 for the short-term solution.
    • Target markets are Poland, Turkey, and Vietnam.

Previous Action Items

  • Determine Phase 1 milestones
  • Rework research questions within the travel metaphor paradigm
  • [Jeff] MT output evaluation research for Polish, Turkish, and Vietnamese
    • We need to find out why these languages were chosen, to help frame our longer-term goals.
  • [Kensie] Spiel for Intellego

Action Items

Research

Evaluating translation output

Studies to share with Bill

  • http://www.est-translationstudies.org/intranet/research/MT.pdf
    • Describes the results of several studies testing the quality of Turkish MT output.
  • http://delivery.acm.org/10.1145/1880000/1873930/p1326-zhao.pdf?ip=204.228.136.8&id=1873930&acc=OPEN&key=BF13D071DEA4D3F3B0AA4BA89B4BCA5B&CFID=402558063&CFTOKEN=29922176&__acm__=1390496320_c41a9c6fa52e420dbfcdeadceab68b1d
    • Study by Baidu researchers on using multiple MT engines to increase translation accuracy (a toy consensus sketch follows this list).
  • http://www.raco.cat/index.php/Tradumatica/article/view/225899/307310
    • Basic overview of MT and an evaluation of four MT engines (Microsoft, Systran, and Google among them). Rather than reporting seemingly arbitrary scores, it displays the raw output from each engine and notes accuracy errors.
  • http://amta2012.cloudapp.net/AMTA2012Files/papers/Richardson.pdf
    • A detailed description of how a prominent global organization (the LDS Church) implemented Microsoft MT organization-wide and rolled it out for approximately 10 languages.
  • http://www.itl.nist.gov/iad/mig//tests/mt/2006/doc/mt06eval_official_results.html
    • US government study evaluating the raw output of 24 MT engines (Systran, Google, and Microsoft included) using the BLEU metric (see the BLEU sketch after this list).
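
As a rough illustration of the multi-engine idea in the Baidu paper, the following hypothetical Python sketch picks, from several candidate translations, the one most similar to all the others. The candidate sentences and the selection rule are our own invention for illustration, not the paper's actual method:

 # Hypothetical sketch: combine multiple MT engines by consensus,
 # keeping the candidate closest to the others. Candidates are invented.
 from difflib import SequenceMatcher

 def consensus_pick(candidates):
     """Return the candidate with the highest average similarity
     to the other candidates (a simple consensus heuristic)."""
     def avg_similarity(cand):
         others = [c for c in candidates if c is not cand]
         return sum(SequenceMatcher(None, cand, o).ratio()
                    for o in others) / len(others)
     return max(candidates, key=avg_similarity)

 # Outputs from three hypothetical engines for the same source sentence.
 candidates = [
     "The weather is nice today.",
     "The weather is good today.",
     "Today weather nice is.",
 ]
 print(consensus_pick(candidates))  # prints the most consensual candidate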
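
For reference, BLEU scores the n-gram overlap between a candidate translation and one or more human reference translations. Below is a minimal sketch using NLTK's BLEU implementation; the sentences are invented examples, and the NIST evaluation itself uses its own tooling:

 # Minimal BLEU sketch with NLTK (invented sentences, not NIST's pipeline).
 from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

 # One human reference translation and one MT candidate, tokenized.
 reference = [["the", "weather", "is", "nice", "today"]]
 candidate = ["the", "weather", "looks", "nice", "today"]

 # Smoothing avoids a zero score when a higher-order n-gram is unmatched,
 # which is common for short sentences.
 smoothie = SmoothingFunction().method1
 score = sentence_bleu(reference, candidate, smoothing_function=smoothie)
 print("BLEU: %.3f" % score)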