For those of you who remember the Star Trek movies and TV shows, the concept of instantaneous translation of spoken languages is nothing new. The idea is amazing: it would allow two or more people from completely different cultures, speaking completely different languages, to communicate seamlessly without a human (or other living) interpreter standing on the sidelines repeating everything. But the technology has been elusive, and the challenges have been great. Converting spoken language into something a computer can understand, and then converting that into a meaningful, accurate translation in another language that may be structured completely differently, is a monumental problem. Such technology has long been a programming holy grail.
For decades, linguists, computer scientists, mathematicians, and many others have tried to bring this dream to reality. Science fiction writers have sometimes built entire story lines around the concept, coming up with all sorts of exotic solutions such as implantable chips or translating bacteria. Since the late 1990s, multiple companies have struggled to make desktop software that can convert speech into text on a computer screen. Most early applications required extensive training sessions, both for the user to adapt to the software and for the software to "learn" how the user speaks. As time has gone on, new implementations and applications have kept appearing.
Progress in the speech-to-text arena has been significant. Microsoft Office now includes the feature as part of the package. Dragon NaturallySpeaking is used by professionals all over the world. Call centers use multi-million-dollar software and hardware packages to manage their phone lines, some of which work amazingly well and some of which fail spectacularly. But while much progress has been made on transcription, little has been announced on the harder problem of meaningful translation, though recent years have seen many valiant attempts.
Google, search engine and innovation leader that it is, demonstrated a few years ago that translation can be done quite well through brute-force methods using massive numbers of computers working in sync, a task many said was not possible. Since then the technology and methods have gained steam and evolved. Now it seems that NEC is confident enough to make translation a selling point for its latest product, the Tele Scouter.
The Tele Scouter uses a recent development in image projection technology that allows a very small device to project an image directly onto your retina. Because of the projector's closeness to the eye, the image is perceived as normal-sized or even large. It leaves normal vision intact and lets the user take in additional information without switching focus. This screams sci-fi action movie and brings back memories of seeing through the Terminator's eyes as he hunts down John Connor. The device is intended to feed real-time information to salespeople, but it also offers future potential as a translation device — a future that is set to arrive sometime in 2011.
So far there is no news on how well the current generation of NEC's translation software will work, nor how much it will cost. However, the article does mention that the headsets themselves will sell for around $84,000 for a batch of 30, a price that does not include any extra software or hardware needed for translation. If it works even reasonably well, I think we can expect governments and large companies to be champing at the bit to test and deploy the technology for their international needs.
Original BBC Article: http://news.bbc.co.uk/2/hi/technology/8343941.stm