Friday, July 31, 2009

SMI RED 250!


Today SMI announced the new RED250 which, as the name suggests, has an impressive 250 Hz sampling rate. It has a typical accuracy of 0.5 degrees or better, less than 10 ms latency, and operates at a head distance of 60-80 cm. The track box is 40x40 cm at 70 cm distance, and tracking recovery is faster than in the previous model. No details on pricing yet, but top-of-the-line performance never comes cheap. Get the flyer as pdf.

Survey on gaze visualization in 3D virtual environments

Got an email today from Sophie Stellmach, a PhD student in the User Interface & Software Engineering group at the Otto-von-Guericke University in Germany. She has posted an online survey and would like some input from eye tracking specialists on 3D gaze visualization.

"In the course of my work I have developed several gaze visualizations for facilitating eye tracking studies in (static) three-dimensional virtual environments. In order to evaluate the potential utility of these techniques, I am conducting an online survey with eye tracking researchers and professionals. I would like to invite you to this survey as I think that your answers are highly valuable for this investigation. The survey should take less than 10 minutes of your time! Your answers will be stored anonymously. You can access the survey under the following link: http://gamescience.bth.se/survey/index.php?sid=27319=en "

Wednesday, July 22, 2009

Telegaze update

Remember the TeleGaze robot developed by Hemin Omer which I wrote about last September? Today there is a new video available showing an updated interface which appears to be somewhat improved; no further information is available.
Update: The new version includes an automatic "person-following" mode which can be turned on or off through the interface. See the video below.

Gaze Interaction in Immersive Virtual Reality - 3D Eye Tracking in Virtual Worlds

Thies Pfeiffer (blog), working in the A.I. group at the Faculty of Technology, Bielefeld University in Germany, has presented some interesting research on 3D gaze interaction in virtual environments. As the video demonstrates, they have achieved high accuracy for gaze-based pointing and selection. This opens up a wide range of interesting man-machine interaction where digital avatars may mimic natural human behavior. Impressive.
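The core geometric problem in this line of work, estimating a 3D fixation point from binocular gaze data, boils down to triangulating the two gaze rays. The Python sketch below is my own minimal illustration of that idea (the midpoint of the shortest segment between the left and right rays), not Pfeiffer et al.'s actual algorithm; the eye positions and directions in the example are made-up values.

```python
import numpy as np

def fixation_from_gaze_rays(o_left, d_left, o_right, d_right):
    """Estimate a 3D fixation point as the midpoint of the shortest
    segment between the left and right gaze rays.

    o_* : (3,) ray origins (eye positions), d_* : (3,) ray directions.
    """
    d_l = d_left / np.linalg.norm(d_left)
    d_r = d_right / np.linalg.norm(d_right)
    w0 = o_left - o_right
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:           # near-parallel rays: no reliable vergence depth
        return None
    s = (b * e - c * d) / denom     # parameter along the left ray
    t = (a * e - b * d) / denom     # parameter along the right ray
    p_left = o_left + s * d_l
    p_right = o_right + t * d_r
    return (p_left + p_right) / 2.0

# Example: eyes 6.5 cm apart, both looking at a point 1 m straight ahead.
print(fixation_from_gaze_rays(
    np.array([-0.0325, 0.0, 0.0]), np.array([0.0325, 0.0, 1.0]),
    np.array([ 0.0325, 0.0, 0.0]), np.array([-0.0325, 0.0, 1.0])))
```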



Publications
  • Pfeiffer, T. (2008). Towards Gaze Interaction in Immersive Virtual Reality: Evaluation of a Monocular Eye Tracking Set-Up. In Virtuelle und Erweiterte Realität - Fünfter Workshop der GI-Fachgruppe VR/AR, 81-92. Aachen: Shaker Verlag GmbH. [Abstract] [BibTeX] [PDF]
  • Pfeiffer, T., Latoschik, M.E. & Wachsmuth, I. (2008). Evaluation of Binocular Eye Trackers and Algorithms for 3D Gaze Interaction in Virtual Reality Environments. Journal of Virtual Reality and Broadcasting, 5 (16), dec. [Abstract] [BibTeX] [URL] [PDF]
  • Pfeiffer, T., Donner, M., Latoschik, M.E. & Wachsmuth, I. (2007). 3D fixations in real and virtual scenarios. Journal of Eye Movement Research, Special issue: Abstracts of the ECEM 2007, 13.
  • Pfeiffer, T., Donner, M., Latoschik, M.E. & Wachsmuth, I. (2007). Blickfixationstiefe in stereoskopischen VR-Umgebungen: Eine vergleichende Studie. In Vierter Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR, 113-124. Aachen: Shaker. [Abstract] [BibTeX] [PDF]
List of all publications available here.

Wednesday, July 15, 2009

Gaze & Voice recognition game development blog

Jonathan O'Donovan, a master's student in Interactive Entertainment Technology at Trinity College Dublin, has recently started a blog for his thesis, which combines gaze and voice recognition to develop a new video game. So far the few posts available have mainly concerned the underlying framework, but a proof-of-concept combining gaze and voice is demonstrated. The project will be developed on a Microsoft Windows based platform and uses the XNA game development framework for graphics and the Microsoft Speech SDK for voice input. The eye tracker of choice is a Tobii T60 provided by Acuity ETS (Reading, UK). The thesis will be supervised by Veronica Sundstedt at the Trinity College Department of Computer Science.
Keep us posted Jonathan, excited to see what you'll come up with!
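For readers curious what such a gaze-plus-voice combination might look like, here is a rough Python sketch of the common pattern where voice supplies the verb and gaze supplies the noun. It is purely illustrative and is not Jonathan's XNA/Speech SDK implementation; the object names, positions and the distance threshold are invented.

```python
import math

# Hypothetical world objects; names and positions are illustrative only.
objects = {"carrot": (120, 340), "gate": (600, 220), "rabbit": (410, 480)}

def nearest_object(gaze_xy, max_dist=60):
    """Return the object closest to the current gaze point, if any lies
    within max_dist pixels (a simple gaze-pointing heuristic)."""
    best, best_d = None, max_dist
    for name, (x, y) in objects.items():
        d = math.hypot(gaze_xy[0] - x, gaze_xy[1] - y)
        if d < best_d:
            best, best_d = name, d
    return best

def on_voice_command(command, gaze_xy):
    """Voice supplies the command, gaze supplies the target:
    e.g. saying 'pick up' while looking at the carrot."""
    target = nearest_object(gaze_xy)
    if target is None:
        return f"'{command}' heard, but no object under gaze"
    return f"{command} -> {target}"

print(on_voice_command("pick up", (130, 350)))   # -> "pick up -> carrot"
```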





Update: 
The project resulted in the Rabbit Run game which is documented in the following publication:

  • J. O’Donovan, J. Ward, S. Hodgins, V. Sundstedt (2009) Rabbit Run: Gaze and Voice Based Game Interaction (PDF). 

Monday, July 13, 2009

Oculis Labs' Chameleon prevents over-the-shoulder reading

"Two years ago computer security expert Bill Anderson read about scientific research on how the human eye moves as it reads and processes text and images. 'This obscure characteristic... suddenly struck me as (a solution to) a security problem,' says Anderson. With the help of a couple of software developers, Anderson developed a software program called Chameleon that tracks a viewer's gaze patterns and only allows an authorized user to read text on the screen, while everyone else sees gibberish. Chameleon uses gaze-tracking software and camera equipment to track an authorized reader's eyes to show only that one person the correct text. After a 15-second calibration period in which the software learns the viewer's gaze patterns, anyone looking over that user's shoulder sees dummy text that randomly and constantly changes. To tap the broader consumer market, Anderson built a more consumer-friendly version called PrivateEye, which can work with a simple Webcam to blur a user's monitor when he or she turns away. It also detects other faces in the background, and a small video screen pops up to alert the user that someone is looking at the screen. 'There have been inventions in the space of gaze-tracking. There have been inventions in the space of security,' says Anderson. 'But nobody has put the two ideas together, as far as we know.'" (source)

Patent application
Article by Baltimore Sun

Monday, June 29, 2009

Video from COGAIN2009

John Paulin Hansen has posted a video showing some highlights from the annual COGAIN conference. It demonstrates three available gaze interaction solutions: the COGAIN GazeTalk interface, Tobii Technology's MyTobii and Alea Technologies' IG-30. These interfaces rely on dwell-activated on-screen keyboards (i.e. same procedure as last year).
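All three systems build on the same dwell-activation principle: a key is "pressed" once the gaze has rested on it long enough. The Python sketch below is a minimal illustration of that mechanism; the 0.8-second dwell threshold is an arbitrary example value, not taken from GazeTalk, MyTobii or the IG-30.

```python
class DwellSelector:
    """Select a key once gaze has rested on it for `dwell_time` seconds."""

    def __init__(self, dwell_time=0.8):
        self.dwell_time = dwell_time
        self.current_key = None
        self.enter_time = None

    def update(self, key_under_gaze, timestamp):
        """Feed the key currently under the gaze point (or None) with a
        timestamp in seconds; returns the key when the dwell completes."""
        if key_under_gaze != self.current_key:
            self.current_key = key_under_gaze     # gaze moved: restart the dwell timer
            self.enter_time = timestamp
            return None
        if (key_under_gaze is not None
                and timestamp - self.enter_time >= self.dwell_time):
            self.enter_time = timestamp           # re-arm to avoid immediate re-triggering
            return key_under_gaze
        return None

selector = DwellSelector(dwell_time=0.8)
for t, key in [(0.0, "H"), (0.3, "H"), (0.9, "H"), (1.0, "E")]:
    selected = selector.update(key, t)
    if selected:
        print("typed:", selected)   # fires once 'H' has been fixated for 0.8 s
```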


Monday, June 1, 2009

COGAIN 2009 Proceedings now online

There is little reason to doubt the vitality of the COGAIN network. This year's proceedings present an impressive 18 papers spread over one hundred pages, covering a wide range of areas: low-cost eye tracking, text entry, gaze input for gaming, multimodal interaction, environment control, clinical assessments and case studies. Unfortunately I was unable to attend the event this year (having recently relocated), but with the hefty proceedings available online there is plenty of material to read through (program and links to authors here). Thanks to Arantxa Villanueva, John Paulin Hansen and Bjarne Kjaer Ersboll for the editorial effort.

Tuesday, May 26, 2009

Toshiba eye tracking for automotive applications

Seen this one coming for a while. Wonder how stable it would be in a real-life scenario...
Via Donald Melanson at Engadget:
"We've seen plenty of systems that rely on facial recognition for an interface, but they've so far been a decidedly rarer occurrence when it comes to in-car systems. Toshiba looks set to change that, however, with it now showing off a new system that'll not only let you control the A/C or radio with the glance of your eye, but alert you if you happen to take your eyes off the road for too long. That's done with the aid of a camera mounted above the steering wheel that's used to identify and map out the driver's face, letting the car (or desktop PC in this demonstration) detect everything from head movement and eye direction to eyelid blinks, which Toshiba says could eventually be used to alert drowsy drivers. Unfortunately, Toshiba doesn't have any immediate plans to commercialize the technology, although it apparently busily working to make it more suited for embedded CPUs." (source)

Tuesday, May 19, 2009

Hands-free Interactive Image Segmentation Using Eyegaze (Sadeghi, M. et al, 2009)

Maryam Sadeghi, a master's student at the Medical Image Analysis Lab at Simon Fraser University in Canada, presents an interesting paper on using eye tracking for gaze-driven image segmentation. The research has been performed in cooperation with Geoff Tien (Ph.D. student), Dr. Hamarneh and Stella Atkins (principal investigators). More information is to be published on this page. Geoff Tien completed his M.Sc. thesis on gaze interaction in March under the title "Building Interactive Eyegaze Menus for Surgery" (abstract); unfortunately I have not been able to locate an electronic copy of that document.

Abstract
"This paper explores a novel approach to interactive user-guided image segmentation, using eyegaze information as an input. The method includes three steps: 1) eyegaze tracking for providing user input, such as setting object and background seed pixel selection; 2) an optimization method for image labeling that is constrained or affected by user input; and 3) linking the two previous steps via a graphical user interface for displaying the images and other controls to the user and for providing real-time visual feedback of eyegaze and seed locations, thus enabling the interactive segmentation procedure. We developed a new graphical user interface supported by an eyegaze tracking monitor to capture the user's eyegaze movement and fixations (as opposed to traditional mouse moving and clicking). The user simply looks at different parts of the screen to select which image to segment, to perform foreground and background seed placement and to set optional segmentation parameters. There is an eyegaze-controlled "zoom" feature for difficult images containing objects with narrow parts, holes or weak boundaries. The image is then segmented using the random walker image segmentation method. We performed a pilot study with 7 subjects who segmented synthetic, natural and real medical images. Our results show that getting used the new interface takes about only 5 minutes. Compared with traditional mouse-based control, the new eyegaze approach provided a 18.6% speed improvement for more than 90% of images with high object-background contrast. However, for low contrast and more difficult images it took longer to place seeds using the eyegaze-based "zoom" to relax the required eyegaze accuracy of seed placement." Download paper as pdf.

The custom interface is used to place background (red) and object (green) seeds, which are used in the segmentation process. The custom fixation detection algorithm triggers a mouse click at the gaze position if 20 of the previous 30 gaze samples lie within a 50 pixel radius.
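That fixation rule is simple enough to restate in code. The Python sketch below is my reading of it (20 of the last 30 samples within a 50-pixel radius triggers a click); whether the radius is measured from the window's centroid or from some other reference point is not specified, so the centroid here is an assumption.

```python
from collections import deque
import math

class FixationClicker:
    """Trigger a 'click' at the gaze position when at least `required` of
    the last `window` gaze samples fall within `radius` pixels of their
    centroid (a re-implementation sketch of the rule described above)."""

    def __init__(self, window=30, required=20, radius=50.0):
        self.samples = deque(maxlen=window)
        self.required = required
        self.radius = radius

    def add_sample(self, x, y):
        """Add a gaze sample; return the fixation centroid when the rule
        is satisfied, otherwise None."""
        self.samples.append((x, y))
        if len(self.samples) < self.samples.maxlen:
            return None
        cx = sum(p[0] for p in self.samples) / len(self.samples)
        cy = sum(p[1] for p in self.samples) / len(self.samples)
        inside = sum(1 for (x_i, y_i) in self.samples
                     if math.hypot(x_i - cx, y_i - cy) <= self.radius)
        if inside >= self.required:
            self.samples.clear()    # avoid repeated clicks on the same fixation
            return (cx, cy)
        return None

clicker = FixationClicker()
for i in range(40):
    hit = clicker.add_sample(400 + (i % 3), 300 + (i % 2))  # stable gaze near (400, 300)
    if hit:
        print("click at", hit)
```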


The results indicate a certain degree of feasibility for gaze-assisted segmentation; however, real-life situations often contain more complex images where object borders are less defined. This is also indicated in the results, where the CT brain scan represents the difficult category. For an initial study the results are interesting, and it is likely that we will see more gaze interaction within domain-specific applications in the near future.


  • Maryam Sadeghi, Geoff Tien, Ghassan Hamarneh, and Stella Atkins. Hands-free Interactive Image Segmentation Using Eyegaze. In SPIE Medical Imaging 2009: Computer-Aided Diagnosis. Proceedings of the SPIE, Volume 7260 (pdf)