Anonymous Patron writes:
This year, Google entered the fourth annual National Institute of Standards and Technology (NIST) Machine Translation evaluation. Its approach uses statistical translation models learned automatically from parallel text, that is, sets of documents paired with their human translations. This differs from the rule-based approach of many existing commercial machine translation companies, which rely on large sets of handwritten translation rules.
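To make "learns a model automatically from the parallel data" concrete, here is a toy sketch in the spirit of IBM Model 1 word alignment (an assumption on my part; the post doesn't say which statistical model Google used). The three-sentence corpus and every name in it are invented purely for illustration: expectation-maximization starts from uniform word-translation probabilities and sharpens them from co-occurrence alone, with no handwritten rules.

```python
# Toy word-translation learning from parallel text (IBM Model 1 flavor).
# The corpus is made up; a real system trains on millions of sentence pairs.
from collections import defaultdict

pairs = [  # (foreign sentence, English sentence)
    ("la casa", "the house"),
    ("casa", "house"),
    ("verde", "green"),
]
corpus = [(f.split(), e.split()) for f, e in pairs]
f_vocab = {w for f_sent, _ in corpus for w in f_sent}

# t[(f, e)] is the probability that English word e translates to foreign
# word f; start uniform and refine with expectation-maximization (EM).
t = defaultdict(lambda: 1.0 / len(f_vocab))
for _ in range(10):
    count = defaultdict(float)  # expected co-occurrence counts
    total = defaultdict(float)
    for f_sent, e_sent in corpus:
        for f in f_sent:
            denom = sum(t[(f, e)] for e in e_sent)
            for e in e_sent:
                c = t[(f, e)] / denom  # fractional alignment count
                count[(f, e)] += c
                total[e] += c
    for (f, e) in count:
        t[(f, e)] = count[(f, e)] / total[e]

# With no rules given, the model works out that "casa" aligns with "house".
print(round(t[("casa", "house")], 3))
```

After a few EM iterations, t("casa"|"house") climbs well above t("la"|"house"), even though both words co-occur with "house" in the first sentence pair; the second pair disambiguates them. That, in miniature, is the statistical approach the post describes.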
First Google Bork and now this. Can automated translation (or indexing, cataloging, abstracting, etc. for that matter) ever beat out human work?