** Evaluation based on unigram recall consistency, rather than precision (as BLEU and NIST do).
* LEPOR - http://en.wikipedia.org/wiki/LEPOR
** New MT evaluation model that is based on evaluating precision, recall, sentence-length and n-gram based word order.
* WER score - https://en.wikipedia.org/wiki/Word_error_rate
** The Word Error Rate calculates the word-level Levenshtein distance between MT output and a reference translation. It should correlate with the difficulty of post-editing machine translation output for publication.
** PWER (Position-independent WER) is a variant where reorderings are disregarded.
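The WER computation described above can be sketched in a few lines: it is the standard Levenshtein edit distance, applied to word tokens rather than characters, and normalized by the reference length. This is an illustrative implementation, not taken from any particular toolkit; the `wer` function name and whitespace tokenization are assumptions for the example.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance between the MT
    output (hypothesis) and a reference translation, divided by the
    number of reference words. Tokenization here is simple whitespace
    splitting, which real evaluations usually refine."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, one dropped word against a six-word reference gives a WER of 1/6. Note that, unlike PWER, this score penalizes reorderings: each moved word typically costs one deletion plus one insertion.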
===What prominent machine translation engines are out there and what are they known for?===
;[https://en.wikipedia.org/wiki/Comparison_of_machine_translation_applications This is a much more concise table] of the current offerings. Includes both open and closed source engines that have front-end applications.