Monday, March 10, 2008

Application of Fitts Law to Eye Gaze Interaction Interfaces (Miniotas, 2000)

Fitts's law (often cited as Fitts' law) is a model of human movement which predicts the time required to rapidly move to a target area as a function of the distance to the target and the size of the target. Paul M. Fitts (1912–1965) was a psychologist at Ohio State University (later at the University of Michigan). He developed a model of human movement, Fitts's law, based on rapid, aimed movement, which went on to become one of the most successful and well-studied mathematical models of human motion. Fitts's law is used to model the act of pointing, both in the real world (e.g., with a hand or finger) and on computers (e.g., with a mouse). (Source: Wikipedia, 2008-03-11)
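
For reference, here is a minimal sketch of the Shannon formulation of Fitts's law that is commonly used in HCI work. The intercept a and slope b are device-dependent constants normally fitted by regression; the default values below are illustrative placeholders of my own, not values from the Miniotas paper.

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.1):
    """Predict movement time (seconds) using the Shannon formulation of Fitts's law.

    distance -- amplitude from the starting position to the target center
    width    -- target width along the axis of motion (same unit as distance)
    a, b     -- device-dependent constants fitted from data;
                the defaults here are illustrative placeholders only
    """
    index_of_difficulty = math.log2(distance / width + 1)  # ID in bits
    return a + b * index_of_difficulty

# Example: a target 260 mm away and 26 mm wide has ID = log2(11) ~ 3.46 bits
print(fitts_movement_time(260, 26))
```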


I became interested when I found the paper "Application of Fitts Law to Eye Gaze Interaction Interfaces" by Darius Miniotas (2000) at Siauliai University, Lithuania. The study contains only six participants. The task consists of keeping a fixation within a 26 mm × 26 mm box continuously for 250 ms. Given the noise and jitter in all eye trackers (the one I'm using is state of the art in 2008), the task might not be the best one for illustrating Fitts's law with eye trackers. Additionally, presenting a visual indicator of gaze position may be distracting because of the offsets that are often present in eye tracking algorithms (the cursor ends up somewhat off-target and moving around).
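
To make the dwell-time task concrete, here is a minimal sketch of how such a selection criterion might be implemented. The 26 mm box and 250 ms threshold follow the task description above, while the function and field names are my own illustrative assumptions, not code from the paper or from any eye tracker SDK.

```python
def dwell_select(gaze_samples, box_center, box_size_mm=26.0, dwell_ms=250.0):
    """Return True once gaze has stayed inside the target box for dwell_ms.

    gaze_samples -- iterable of (timestamp_ms, x_mm, y_mm) tuples in screen coordinates
    box_center   -- (x_mm, y_mm) center of the square target
    """
    half = box_size_mm / 2.0
    dwell_start = None
    for t, x, y in gaze_samples:
        inside = (abs(x - box_center[0]) <= half and
                  abs(y - box_center[1]) <= half)
        if inside:
            if dwell_start is None:
                dwell_start = t            # fixation entered the box
            elif t - dwell_start >= dwell_ms:
                return True                # held continuously long enough
        else:
            dwell_start = None             # any exit resets the dwell timer
    return False
```

In practice the raw gaze signal would first be filtered, since the jitter mentioned above can briefly push samples outside the box and reset the timer.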

Abstract
An experiment is described comparing the performance of an eye tracker and a mouse in a simple pointing task. Subjects had to make rapid and accurate horizontal movements to targets that were vertical ribbons located at various distances from the cursor's starting position. The dwell-time protocol was used for the eye tracker to make selections. Movement times were shorter for the mouse than for the eye tracker. Fitts' Law model was shown to predict movement times using both interaction techniques equally well. The model is thus seen to be a potential contributor to design of modern multimodal human computer interfaces. (ACM Paper)
