Story of the week: Subtitling the real world

This week, in the television show Scheire en de Schepping, we saw a rather clever combination of two interesting subdomains of HCI: speech recognition and augmented reality. The episode featured glasses that show subtitles for the conversation the user is having. They are called “SpraakZien” glasses (we find the name rather unimaginative), and they could be a handy tool for people with a hearing impairment.

Alas, we couldn’t find a video showing the technology, but here’s something even better (/s): a list of articles about it! (A little heads-up: the fourth article is NSFW.) There’s also a photo at the top of this page (think of it as a paused video).

For most deaf people, I think these glasses are risky. They could weaken the user’s ability to lip-read, and when the glasses fail, he or she might be in trouble. Also, while talking to someone the user would no longer be looking that person in the face, which could be annoying.

On the other hand, this technology allows the user to notice when someone they can’t see is talking, for example when that person shouts their name to get their attention. There is another problem though: the glasses are very visible. I’m not sure, but maybe people with a hearing impairment don’t want to be recognized as such in public. A Google Glass-style device comes to mind as a subtler option, but an even better solution would be bionic contact lenses.

For the brave souls who made it to the end of this post, here’s another AR application as a reward: the Sulon Cortex (with video this time :p).



  1. Great post, I like how you reflect on how the technology would be used in the real world!

    Also, great video at the end: I’ve never seen anything like it, and it could break the gaming market right open. It’s part of the movement that brings real life and virtual gaming closer together: a new way to experience cool adventures in an environment you already know.

  2. The biggest advantage seems to be live translation of foreign languages. I wonder how people who are used to subtitles would react to this compared to people who aren’t (say, the Germans and the French, who are used to dubbing).

    1. Once you have a live transcript, you could just as easily read it to the user through earbuds, thus dubbing the conversation. That way the French and the Germans are happy as well ;p It would also solve the problem of not looking the other person in the eyes while listening. This application is difficult to get right, though: it chains speech recognition, machine translation, and text-to-speech, and none of these is perfect yet.

      One extra problem: while subtitles are unisex, (most) voices are not. This could be a problem or a feature (no boring meetings anymore). You could also try to mimic the speaker’s voice, like they did at Microsoft Research in 2008:
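The dubbing idea from the comment above boils down to chaining three stages: speech recognition, translation, and text-to-speech. Here is a toy sketch of that composition in Python. Every stage here is a made-up stand-in (the function names, the tiny Dutch-to-English phrase table, and the `[spoken]` output are all illustrative assumptions, not a real ASR/MT/TTS API); the point is only how the stages chain, and that a failure in any one stage breaks the whole pipeline.

```python
# Toy sketch of the dubbing pipeline: ASR -> translation -> TTS.
# All three stages are stand-ins; real systems would plug in actual
# speech recognition, machine translation, and speech synthesis engines.

def recognize(audio: bytes) -> str:
    """Stand-in for speech recognition: pretend the audio decodes
    directly to its transcript (a real ASR would process a waveform)."""
    return audio.decode("utf-8")

# Stand-in for translation: a tiny, hypothetical Dutch -> English
# phrase table. Unknown phrases pass through untranslated.
PHRASE_TABLE = {
    "goedemorgen": "good morning",
    "dank u": "thank you",
}

def translate(text: str) -> str:
    return PHRASE_TABLE.get(text.lower(), text)

def synthesize(text: str) -> str:
    """Stand-in for text-to-speech: return the text that would be
    read out loud through the earbuds."""
    return f"[spoken] {text}"

def dub(audio: bytes) -> str:
    """Chain the three imperfect stages; errors compound in sequence."""
    return synthesize(translate(recognize(audio)))

print(dub(b"goedemorgen"))  # [spoken] good morning
```

Since the stages only compose, any one of them can be swapped out independently, which is also why the overall quality is capped by the weakest stage.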

Comments are closed.