An excerpt from "The Uneasy Metamorphoses of Machine Translation" (the first in a series), featured in Exchanges Literary Journal and accompanied by an eBook presenting "10 poems ruthlessly mangled by Google Translate" and "10 poems ruthlessly mangled by Google Mirror."
According to translator David Bellos, the era of modern machine translation began at the start of the Cold War. In the United States, the demand for deciphering Russian technical papers greatly outstripped the supply of Russian-English translators, and the IBM 701, which made its debut in 1952, had by 1954 been programmed with 250 words and six grammar rules to translate Russian into English. Limited computing capabilities doomed these initial efforts, however, and further advances in machine translation had to wait for the more powerful computers of the 1980s.

The primary difference between today's online translators and technologies such as the IBM 701 is that Google Translate (GT) was the first program to rely on inventories of previously existing translations rather than treating languages as codes. The code-based approach views meaning as existing independently of language, merely taking different forms in the various languages that express it. GT's approach, by contrast, does not conceive of languages as comparable machines with interchangeable parts. Instead of working with a master linguistic code, it searches for patterns in millions of documents already translated by humans.

The dream of fully machine-based translation that requires no human interpretation has thus far proved impracticable; and even as GT's databases continue to grow, the likelihood of a computer providing a satisfactory translation of a work of literature remains as scant as the possibility of a machine composing an original novel....

Read Tegan's full essay here.