Gaze interaction has been shown to be useful for many applications, but eye information alone is limited for interaction. The point of regard carries only positional information and does not by itself provide enough information to make selections (the Midas touch problem). Additional input is needed to convey other commands, such as clicks. Dwell-time selection, eye blinks, and gaze gestures have been the typical ways of extending eye trackers with methods for communicating with interfaces (e.g., making selections on a screen). Gestures are commonly used in interaction to signify a particular command or intent; however, gaze gestures are not an intuitive means of communication. In this project we combine head gestures with gaze data for interaction. Head movements can be measured through the eye movements recorded by a gaze tracker, and the method uses this combination of gaze and eye movement to infer head gestures. The idea is simply to look at an object and perform a head gesture to control it. The object can be an icon on a computer display or a real object in the real world (Figure 1).
Figure 1: Example of two gestures for controlling a lamp.
Compared to other gesture-based methods, a major advantage of this approach is that the user keeps their gaze on the interaction object while interacting. Recognizing head gestures with a head-mounted gaze tracker, and combining gaze with head gestures, provides a fast and intuitive way to communicate and interact with the environment in fully mobile situations. Many video-based methods have been used to detect head gestures. In contrast, the presented method identifies a wide range of head gestures, including small gestures and head rolls, accurately and in real time, using only an eye tracker.
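To make the underlying idea concrete: while the user fixates a target, the vestibulo-ocular reflex makes the eye counter-rotate against the head, so the eye-in-head trace recorded by a head-mounted tracker mirrors the head movement. The sketch below illustrates this principle with a minimal classifier that labels a trace as a horizontal "shake" or vertical "nod". The function name, thresholds, and synthetic data are illustrative assumptions, not the paper's actual detector.

```python
import math

def classify_head_gesture(eye_xy, min_amplitude=0.05, min_reversals=2):
    """Illustrative gesture classifier (not the paper's algorithm).

    eye_xy: list of (x, y) normalized eye-in-head positions sampled
    while the user fixates a target. Because the eye counter-rotates
    to hold the fixation, an oscillating x trace suggests a head
    shake and an oscillating y trace suggests a nod.

    Returns "shake", "nod", or None.
    """
    xs = [p[0] for p in eye_xy]
    ys = [p[1] for p in eye_xy]

    def amplitude(v):
        # Peak-to-peak excursion of the signal.
        return max(v) - min(v)

    def reversals(v):
        # Count direction reversals: sign changes in the first difference.
        diffs = [b - a for a, b in zip(v, v[1:])]
        signs = [d for d in diffs if abs(d) > 1e-6]
        return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

    ax, ay = amplitude(xs), amplitude(ys)
    if ax >= min_amplitude and ax > ay and reversals(xs) >= min_reversals:
        return "shake"
    if ay >= min_amplitude and ay > ax and reversals(ys) >= min_reversals:
        return "nod"
    return None

# Synthetic traces: sinusoidal eye counter-rotation during each gesture.
shake_trace = [(0.1 * math.sin(2 * math.pi * t / 20), 0.0) for t in range(60)]
nod_trace = [(0.0, 0.1 * math.sin(2 * math.pi * t / 20)) for t in range(60)]
still_trace = [(0.0, 0.0) for _ in range(60)]
```

A real implementation would additionally verify that the point of regard stays on the target during the gesture, which is what distinguishes this gaze + head combination from a plain head-gesture recognizer.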