People with cognitive and speech disorders often lack access to adequate communication tools, which results in a sense of alienation and a loss of momentum in their interactions. Verse is a contextual language interface that uses augmented reality to allow for seamless interactions by understanding a user's environment and providing content based on their surroundings. By recognising the people you interact with and the objects around you, Verse provides the relevant lexicon to converse in your current context. Eye tracking enables the user to rapidly formulate sentences using this contextual vocabulary.
We started the project by spending a morning at Treloar, a school for children with physical and mental disabilities. The time spent there in the company of students and teachers gave us important insights into the inadequacy of commonly used communication aids.
We created an interactive tool that recognises tags in an augmented reality environment and generates a set of relevant vocabulary. Inspired by sign language, words and concepts can be selected with an eye-tracking device, and their combination generates a suitable sentence for others to understand.
Project in collaboration with Janna Fuller, Setareh Shamdani and Antton Peña.
2015 | Royal College of Art