- Google is working on a system for instant translations
- You could wear it in your ear, and it would translate speech seamlessly
- Sundar Pichai says it’s getting close to seamless
Whether in Star Trek or The Hitchhiker's Guide to the Galaxy, universal translators that can tell you what everyone around you is saying remain a staple of science fiction – one that is steadily edging closer to reality. Last year, Microsoft announced real-time voice translation for Skype, and ahead of the Pixel launch event, Google CEO Sundar Pichai met with NDTV and revealed that the company is close to releasing some kind of wearable translator as well.
“We are making extraordinary progress in certain things you know. For example, AI is now able to translate much better than ever before – close to human level translation,” Pichai said. “We are constantly making progress.”
“But I do think in a few days we will be talking about something by which you can wear it in your ears and you know and you can speak with two people and it will make this process of translation more seamless,” Pichai told NDTV at Google’s headquarters in Mountain View last week.
However, although the company is close, Pichai cautions that it will still take a few years before we reach the science-fiction scenario of simply walking up to someone and getting a reliable translation whenever needed.
“I think we are a few years away from where two people you know, regardless of the language they know, can converse with each other and that is absolutely you know in the line of sight,” he said. “You know even a few days from now [at the Pixel event in San Francisco on Wednesday] our first headsets we will show will take a good step in that direction. We are not quite there yet, but it’ll take the first step in that direction… and we will continue to build from there.”
The real potential for AI, Pichai says, lies in ambient computing – where you don't need to think about which tool to use for a given task. The vision is of computers that are ubiquitous and context-aware, ready to help with whatever you want to do.
“I think that the beauty of what I think of as the AI-first world is we don’t have to – users don’t have to – decide [between different computing products]. Computing will be there when you want it, when you get into your car, and when you go to your home,” he said. “So I think this notion of ambient computing – which is there to help – it can be in the context of a watch or a car and it’s built in.”
“Ambient computing means that you go to a device, start it off, and work with that device,” he added. “To me, ambient computing is that you are going about your day to day life and computing is there, working for you. So if I run into someone that I need to speak from Hindi to English, you know, it can happen right there, maybe with a watch, maybe a headphone. If you are at home it’s something like Google Home, or you know, the TV. Any screen in front of you can help when it needs to.”