Wednesday, December 22, 2010
Tuesday, December 14, 2010
"A system for automatic mapping of eye-gaze data to hypermedia content utilizes high-level content-of-interest tags to identify regions of content-of-interest in hypermedia pages. User's computers are equipped with eye-gaze tracker equipment that is capable of determining the user's point-of-gaze on a displayed hypermedia page. A content tracker identifies the location of the content using the content-of-interest tags and a point-of-gaze to content-of-interest linker directly maps the user's point-of-gaze to the displayed content-of-interest. A visible-browser-identifier determines which browser window is being displayed and identifies which portions of the page are being displayed. Test data from plural users viewing test pages is collected, analyzed and reported."
To conclude: the idea is to have multiple clients equipped with eye trackers that communicate with a server. The central machine coordinates studies and stores the gaze data from each session (in the cloud?). Overall, a strategy that makes perfect sense if your differentiating factor is low cost.
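For illustration, a minimal sketch of the core mapping step, not the patent's implementation: hit-testing a point-of-gaze against the bounding boxes of tagged content regions, as a content tracker might after extracting the tag locations. The region names and coordinates are invented for the example.

```python
# Illustrative sketch (not the patent's implementation): hit-test a
# point-of-gaze against the bounding boxes of tagged content regions.
# Region names and coordinates are invented for the example.
def hit_test(gaze, regions):
    """gaze: (x, y) in page coordinates;
    regions: dict tag -> (left, top, width, height)."""
    x, y = gaze
    return [tag for tag, (l, t, w, h) in regions.items()
            if l <= x <= l + w and t <= y <= t + h]

regions = {"headline": (0, 0, 800, 60), "ad-banner": (0, 60, 200, 400)}
print(hit_test((100, 200), regions))  # -> ['ad-banner']
```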
Monday, November 15, 2010
Purpose: Conventional computer-assisted detection (CADe) systems in screening mammography provide the same decision support to all users. The aim of this study was to investigate the potential of a context-sensitive CADe system which provides decision support guided by each user’s focus of attention during visual search and reporting patterns for a specific case.
Methods: An observer study for the detection of malignant masses in screening mammograms was conducted in which six radiologists evaluated 20 mammograms while wearing an eye-tracking device. Eye-position data and diagnostic decisions were collected for each radiologist and case they reviewed. These cases were subsequently analyzed with an in-house knowledge-based CADe system using two different modes: conventional mode with a globally fixed decision threshold and context-sensitive mode with a location-variable decision threshold based on the radiologists’ eye dwelling data and reporting information.
Results: The CADe system operating in conventional mode had 85.7% per-image malignant mass sensitivity at 3.15 false positives per image (FPsI). The same system operating in context-sensitive mode provided personalized decision support at 85.7%–100% sensitivity and 0.35–0.40 FPsI for all six radiologists. Furthermore, the context-sensitive CADe system could improve the radiologists’ sensitivity and reduce their performance gap more effectively than conventional CADe.
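As a rough illustration of what a location-variable decision threshold could look like, here is one possible policy sketched in a few lines; the paper's actual model is knowledge-based and also uses reporting information, and every value below is invented.

```python
# One possible policy, sketched for illustration only (the paper's model
# is knowledge-based and also uses reporting information): raise the
# decision threshold where the radiologist already dwelled, so prompts
# concentrate on unattended regions. All values here are invented.
def decision_threshold(candidate_xy, dwells, base=0.5, attended=0.8,
                       radius_px=50.0, min_dwell_ms=1000):
    """dwells: iterable of (x, y, dwell_ms) fixation clusters."""
    x, y = candidate_xy
    looked = any((x - dx) ** 2 + (y - dy) ** 2 <= radius_px ** 2
                 and ms >= min_dwell_ms
                 for dx, dy, ms in dwells)
    return attended if looked else base

# A CADe candidate with score s is then shown only if it clears the
# locally adapted threshold:
# show = s > decision_threshold((cx, cy), eye_dwells)
```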
Monday, November 8, 2010
Wednesday, November 3, 2010
- NEW device: MEG 250
- RED5: improved tracking stability
- RED: improved pupil diameter calculation
- RED: improved distance measurement
- RED: improved 2 and 5 point calibration model
- a file transfer server is now installed together with iView X
- added configurable parallel port address
- RED5 camera drop outs in 60Hz mode on Clevo Laptop
- initializes LPT_IO and PIODIO on startup correctly
- RED standalone mode can be used with all calibration methods via remote commands
- lateral offset in RED5 head position visualization
- HED: Use TimeStamp in [ms] as Scene Video Overlay
- improved rejection parameters for NNL Devices
- crash when using ET_CAL in standalone mode
- fixed strange behaviour with ET_REM vs. eT_REM; lookup in the command list is now case-insensitive
- RED5: Default speed is 60Hz for RED and 250Hz for RED250
- and many more small fixes and improvements
Tuesday, November 2, 2010
Optimization and Dynamic Simulation of a Parallel Three Degree-of-Freedom Camera Orientation System (T. Villgrattner, 2010)
German researchers have developed a robotic camera that mimics the motion of real eyes and even moves at superhuman speeds. The camera system can point in any direction and is also capable of imitating the fastest human eye movements, which can reach speeds of 500 degrees per second. But the system can also move faster than that, achieving more than 2500 degrees per second. It would make for very fast robot eyes. Led by Professor Heinz Ulbrich at the Institute of Applied Mechanics at the Technische Universität München, a team of researchers has been working on superfast camera orientation systems that can reproduce the human gaze.
In many experiments in psychology, human-computer interaction, and other fields, researchers want to monitor precisely what subjects are looking at. Gaze can reveal not only what people are focusing their attention on but it also provides clues about their state of mind and intentions. Mobile systems to monitor gaze include eye-tracking software and head-mounted cameras. But they're not perfect; sometimes they just can't follow a person's fast eye movements, and sometimes they provide ambiguous gaze information.
In collaboration with their project partners from the Chair for Clinical Neuroscience, Ludwig-Maximilians Universität München, Dr. Erich Schneider, and Professor Thomas Brand, the Munich team, which is supported in part by the CoTeSys Cluster, is developing a system to overcome those limitations. The system, propped on a person's head, uses a custom-made eye-tracker to monitor the person's eye movements. It then precisely reproduces those movements using a superfast actuator-driven mechanism with yaw, pitch, and roll rotation, like a human eyeball. When the real eyes move, the robot eye follows suit.
The engineers at the Institute of Applied Mechanics have been working on the camera orientation system over the past few years. Their previous designs had 2 degrees of freedom (DOF). Now researcher Thomas Villgrattner is presenting a system that improves on the earlier versions and features not 2 but 3 DOF. He explains that existing camera-orientation systems with 3 DOF that are fast and lightweight rely on model aircraft servo actuators. The main drawback of such actuators is that they can introduce delays and require gear boxes.
So Villgrattner sought a different approach. Because this is a head-mounted device, it has to be lightweight and inconspicuous -- you don't want it rattling and shaking on the subject's scalp. Which actuators to use? The solution consists of an elegant parallel system that uses ultrasonic piezo actuators. The piezos transmit their movement to a prismatic joint, which in turn drives small push rods attached to the camera frame. The rods have spherical joints on either end, and this kind of mechanism is known as a PSS (prismatic-spherical-spherical) chain. It's a "quite nice mechanism," says Masaaki Kumagai, a mechanical engineering associate professor at Tohoku Gakuin University, in Miyagi, Japan, who was not involved in the project. "I can't believe they made such a high speed/acceleration mechanism using piezo actuators."
The advantage is that it can reach high speeds and accelerations with small actuators, which remain on a stationary base, so they don't add to the inertial mass of the moving parts. And the piezos also provide high forces at low speeds, so no gear box is needed. Villgrattner describes the device's mechanical design and kinematics and dynamics analysis in a paper titled "Optimization and Dynamic Simulation of a Parallel Three Degree-of-Freedom Camera Orientation System," presented at last month's IEEE/RSJ International Conference on Intelligent Robots and Systems.
The current prototype weighs in at just 100 grams. It was able to reproduce the fastest eye movements, known as saccades, and also perform movements much faster than what our eyes can do. The system, Villgrattner tells me, was mainly designed for a "head-mounted gaze-driven camera system," but he adds that it could also be used "for remote eye trackers, for eye related 'Wizard of Oz' tests, and as artificial eyes for humanoid robots." In particular, this last application -- eyes for humanoid robots -- appears quite promising, and the Munich team is already working on that. Current humanoid eyes are rather simple, typically just static cameras, and that's understandable given all the complexity in these machines. It would be cool to see robots with humanlike -- or superhuman -- gaze capabilities.
Below is a video of the camera-orientation system (the head-mount device is not shown). First, it moves the camera in all three single axes (vertical, horizontal, and longitudinal) with an amplitude of about 30 degrees. Next it moves simultaneously around all three axes with an amplitude of about 19 degrees. Then it performs fast movements around the vertical axis at 1000 degrees/second and also high dynamic movements around all axes. Finally, the system reproduces natural human eye movements based on data from an eye-tracking system. (source)
Monday, November 1, 2010
We present a novel approach to comparing saccadic eye movement sequences based on the Needleman–Wunsch algorithm used in bioinformatics to compare DNA sequences. In the proposed method, the saccade sequence is spatially and temporally binned and then recoded to create a sequence of letters that retains fixation location, time, and order information. The comparison of two letter sequences is made by maximizing the similarity score computed from a substitution matrix that provides the score for all letter pair substitutions and a gap penalty. The substitution matrix provides a meaningful link between each location coded by the individual letters. This link could be distance but could also encode any useful dimension, including perceptual or semantic space. We show, by using synthetic and behavioral data, the benefits of this method over existing methods. The ScanMatch toolbox for MATLAB is freely available online (www.scanmatch.co.uk).
- Filipe Cristino, Sebastiaan Mathôt, Jan Theeuwes, and Iain D. Gilchrist
ScanMatch: A novel method for comparing fixation sequences
Behav Res Methods 2010 42:692-700; doi:10.3758/BRM.42.3.692
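To make the method concrete, here is a compact re-implementation sketch in Python (the authors' toolbox is MATLAB): fixations are binned spatially and temporally into a code sequence, and two sequences are globally aligned with Needleman–Wunsch under a distance-based substitution matrix. The grid size, 100 ms bins, and gap penalty are illustrative choices, not the toolbox defaults.

```python
# Sketch of the ScanMatch idea; parameters are illustrative choices.
import numpy as np

def encode(fixations, screen=(1024, 768), grid=(8, 6)):
    """Bin (x, y, duration_ms) fixations into a sequence of region codes,
    repeating each code once per 100 ms to retain duration information."""
    seq = []
    for x, y, dur in fixations:
        col = min(int(x / screen[0] * grid[0]), grid[0] - 1)
        row = min(int(y / screen[1] * grid[1]), grid[1] - 1)
        seq += [row * grid[0] + col] * max(1, round(dur / 100))
    return seq

def substitution_matrix(grid=(8, 6)):
    """Score region pairs by inverse spatial distance (closer = higher);
    any other perceptual or semantic distance could be plugged in here."""
    n = grid[0] * grid[1]
    centers = np.array([(i % grid[0], i // grid[0]) for i in range(n)], float)
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    return 1.0 - 2.0 * d / d.max()          # in [-1, 1], diagonal = 1

def scanmatch(a, b, sub, gap=-0.5):
    """Needleman-Wunsch global alignment score, normalized to ~[0, 1]."""
    m, n = len(a), len(b)
    H = np.zeros((m + 1, n + 1))
    H[:, 0], H[0, :] = np.arange(m + 1) * gap, np.arange(n + 1) * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            H[i, j] = max(H[i - 1, j - 1] + sub[a[i - 1], b[j - 1]],
                          H[i - 1, j] + gap,
                          H[i, j - 1] + gap)
    return H[m, n] / (max(m, n) * sub.max())

s1 = encode([(100, 100, 250), (500, 400, 300)])
s2 = encode([(120, 110, 200), (520, 380, 350)])
print(scanmatch(s1, s2, substitution_matrix()))
```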
An improved algorithm for automatic detection of saccades in eye movement data and for calculating saccade parameters (Behrens et al, 2010)
"This analysis of time series of eye movements is a saccade-detection algorithm that is based on an earlier algorithm. It achieves substantial improvements by using an adaptive-threshold model instead of fixed thresholds and using the eye-movement acceleration signal. This has four advantages: (1) Adaptive thresholds are calculated automatically from the preceding acceleration data for detecting the beginning of a saccade, and thresholds are modified during the saccade. (2) The monotonicity of the position signal during the saccade, together with the acceleration with respect to the thresholds, is used to reliably determine the end of the saccade. (3) This allows differentiation between saccades following the main-sequence and non-main-sequence saccades. (4) Artifacts of various kinds can be detected and eliminated. The algorithm is demonstrated by applying it to human eye movement data (obtained by EOG) recorded during driving a car. A second demonstration of the algorithm detects microsleep episodes in eye movement data."
- F. Behrens, M. MacKeben, and W. Schröder-Preikschat
An improved algorithm for automatic detection of saccades in eye movement data and for calculating saccade parameters. Behav Res Methods 2010 42:701-708; doi:10.3758/BRM.42.3.701
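A hedged sketch of the general approach follows (not the authors' exact algorithm): the onset threshold adapts to the statistics of the preceding acceleration data, and the saccade end is found by following the velocity profile back toward baseline. The factor k, the window length, and the 10% offset criterion are assumptions.

```python
# Hedged sketch in the spirit of the algorithm, not the authors' exact
# implementation; k, window_s, and the offset criterion are assumptions.
import numpy as np

def detect_saccades(position, fs=1000.0, k=5.0, window_s=0.5):
    """position: 1-D gaze trace in degrees, sampled at fs Hz.
    Returns a list of (onset, offset) sample indices."""
    position = np.asarray(position, float)
    vel = np.gradient(position) * fs           # deg/s
    acc = np.gradient(vel) * fs                # deg/s^2
    win = int(window_s * fs)
    saccades, i = [], win
    while i < len(acc):
        # adaptive threshold computed from the *preceding* samples only
        thresh = k * acc[i - win:i].std()
        if thresh > 0 and abs(acc[i]) > thresh:
            onset = j = i
            peak = abs(vel[i])
            # follow the saccade until velocity falls below 10% of peak
            while j < len(vel) - 1 and abs(vel[j]) > 0.1 * peak:
                peak = max(peak, abs(vel[j]))
                j += 1
            saccades.append((onset, j))
            i = j + 1
        else:
            i += 1
    return saccades
```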
Thursday, October 28, 2010
HD video available (click 360p and select 720p)
Tuesday, August 17, 2010
Download instructions as PDF (8.1 MB)
Monday, August 16, 2010
Special Issue on Eye Gaze in Intelligent Human-Machine Interaction
Aims and Scope
Partly because of the increasing availability of nonintrusive and high-performance eye tracking devices, recent years have seen a growing interest in incorporating human eye gaze in intelligent user interfaces. Eye gaze has been used as a pointing mechanism in direct manipulation interfaces, for example, to assist users with “locked-in syndrome”. It has also been used as a reflection of information needs in web search and as a basis for tailoring information presentation. Detection of joint attention as indicated by eye gaze has been used to facilitate computer-supported human-human communication. In conversational interfaces, eye gaze has been used to improve language understanding and intention recognition. On the output side, eye gaze has been incorporated into the multimodal behavior of embodied conversational agents. Recent work on human-robot interaction has explored eye gaze in incremental language processing, visual scene processing, and conversation engagement and grounding.
This special issue will report on state-of-the-art computational models, systems, and studies that concern eye gaze in intelligent and natural human-machine communication. The nonexhaustive list of topics below indicates the range of appropriate topics; in case of doubt, please contact the guest editors. Papers that focus mainly on eye tracking hardware and software as such will be relevant (only) if they make it clear how the advances reported open up new possibilities for the use of eye gaze in at least one of the ways listed above.
- Empirical studies of eye gaze in human-human communication that provide new insight into the role of eye gaze and suggest implications for the use of eye gaze in intelligent systems. Examples include new empirical findings concerning eye gaze in human language processing, in human-vision processing, and in conversation management.
- Algorithms and systems that incorporate eye gaze for human-computer interaction and human-robot interaction. Examples include gaze-based feedback to information systems; gaze-based attention modeling; exploiting gaze in automated language processing; and controlling the gaze behavior of embodied conversational agents or robots to enable grounding, turn-taking, and engagement.
- Applications that demonstrate the value of incorporating eye gaze in practical systems to enable intelligent human-machine communication.
- Elisabeth André, University of Augsburg, Germany (contact: andre[at]informatik[dot]uni-augsburg.de)
- Joyce Chai, Michigan State University, USA
- By December 15th, 2010: Submission of manuscripts
- By March 23rd, 2011: Notification about decisions on initial submissions
- By June 23rd, 2011: Submission of revised manuscripts
- By August 25th, 2011: Notification about decisions on revised manuscripts
- By September 15th, 2011: Submission of manuscripts with final minor changes
- Starting October, 2011: Publication of the special issue on the TiiS website and subsequently in the ACM Digital Library and as a printed issue
Tuesday, August 10, 2010
Wednesday, August 4, 2010
Sunday, July 18, 2010
Monday, June 28, 2010
"A six-year-old boy who nearly went blind in one eye can now see again after he was told to play on a Nintendo games console. Ben Michaels suffered from amblyopia, or severe lazy eye syndrome in his right eye from the age of four. His vision had decreased gradually in one eye and without treatment his sight loss could have become permanent. His GP referred him to consultant Ken Nischal who prescribed the unusual daily therapy. Ben, from Billericay, Essex, spends two hours a day playing Mario Kart on a Nintendo DS with his twin Jake. Ben wears a patch over his good eye to make his lazy one work harder. The twins' mother, Maxine, 36, said that from being 'nearly blind' in the eye, Ben's vision had 'improved 250 per cent' in the first week. She said: 'When he started he could not identify our faces with his weak eye. Now he can read with it although he is still a way off where he ought to be. 'He was very cooperative with the patch, it had phenomenal effect and we’re very pleased.' Mr Nischal of Great Ormond Street Children's Hospital, said the therapy helped children with weak eyesight because computer games encourage repetitive eye movement, which trains the eye to focus correctly. 'A games console is something children can relate to. It allows us to deliver treatment quicker,' he said. 'What we don’t know is whether improvement is solely because of improved compliance, ie the child sticks with the patch more, or whether there is a physiological improvement from perceptual visual learning.' The consultant added that thousands of youngsters and adults could benefit from a similar treatment." (source)
Tuesday, June 15, 2010
Speech Dasher allows writing using a combination of speech and a zooming interface. Users first speak what they want to write and then they navigate through the space of recognition hypotheses to correct any errors. Speech Dasher’s model combines information from a speech recognizer, from the user, and from a letter-based language model. This allows fast writing of anything predicted by the recognizer while also providing seamless fallback to letter-by-letter spelling for words not in the recognizer’s predictions. In a formative user study, expert users wrote at 40 (corrected) words per minute. They did this despite a recognition word error rate of 22%. Furthermore, they did this using only speech and the direction of their gaze (obtained via an eye tracker).
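A minimal sketch of the kind of model described, assuming a simple linear interpolation between letter probabilities derived from the recognizer's hypotheses and a generic letter language model; the actual Speech Dasher model is more sophisticated, and all names and values below are invented for illustration.

```python
# Sketch only: interpolate hypothesis-derived letter probabilities with
# a fallback letter language model, so out-of-hypothesis words remain
# reachable letter by letter.
from collections import defaultdict

def next_letter_probs(prefix, hypotheses, letter_lm, lam=0.9):
    """prefix: letters written so far (within the current word)
    hypotheses: dict word -> recognizer probability
    letter_lm: fn(prefix) -> dict letter -> probability (the fallback)"""
    rec, total = defaultdict(float), 0.0
    for word, p in hypotheses.items():
        if word.startswith(prefix) and len(word) > len(prefix):
            rec[word[len(prefix)]] += p     # mass on each next letter
            total += p
    lm = letter_lm(prefix)
    probs = {}
    for ch in set(rec) | set(lm):
        r = rec[ch] / total if total else 0.0
        probs[ch] = lam * r + (1 - lam) * lm.get(ch, 0.0)
    return probs

hyps = {"recognise": 0.6, "recognize": 0.3, "wreck": 0.1}
uniform = lambda prefix: {c: 1 / 26 for c in "abcdefghijklmnopqrstuvwxyz"}
print(next_letter_probs("recogni", hyps, uniform))   # 's' and 'z' dominate
```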
Wednesday, May 26, 2010
The booklet containing the abstracts for the Scandinavian Workshop on Applied Eye Tracking (SWAET) is now available for download (55 pages, about 1 MB). The abstracts span a wide range from gaze interaction to behavior and perception. The short one-page format makes it easy to venture into a multitude of domains and serves as a nice little starting point for digging deeper. A shame I couldn't attend; maybe next year. Kudos for making this booklet available.
|Eye movements during mental imagery are not perceptual re-enactments||R. Johansson, J. Holsanova, K. Holmqvist|
|Practice eliminates "looking at nothing"||A. Scholz, K. Mehlhorn, J.F. Krems|
|Learning Perceptual Skills for Medical Diagnosis via Eye Movement Modeling Examples on Patient Video Cases||H. Jarodzka, T. Balslev, K. Holmqvist, K. Scheiter, M. Nyström, P. Gerjets, B. Eika|
|Objective, subjective, and commercial information: The impact of presentation format on the visual inspection and selection of Web search results||Y. Kammerer, P. Gerjets|
|Eye Movements and levels of attention: A stimulus driven approach||F.B. Mulvey, K. Holmqvist, J.P. Hansen|
|Player's gaze in a collaborative Tetris game||P. Jermann, M-A. Nüssli, W. Li|
|Naming associated objects: Evidence for parallel processing||L. Mortensen, A.S. Meyer|
|Reading Text Messages - An Eye-Tracking Study on the Influence of Shortening Strategies on Reading Comprehension||V. Heyer, H. Hopp|
|Eye movement measures to study the online comprehension of long (illustrated) texts||J. Hyönä, J.K. Kaakinen|
|Self-directed Learning Skills in Air-traffic Control: A Cued Retrospective Reporting Study||L.W. van Meeuwen, S. Brand-Gruwel, J.J.G. van Merriënboer, J.P.R. de Bock, P.A. Kirschner|
|Drivers' characteristic sequences of eye and head movements in intersections||A. Bjelkemyr, K. Smith|
|Comparing the value of different cues when using the retrospective think aloud method in web usability testing with eye tracking||A. Olsen|
|Gaze behavior and instruction sensitivity of Children with Autism Spectrum Disorders when viewing pictures of social scenes||B. Rudsengen, F. Volden|
|Impact of cognitive workload on gaze-including interaction||S. Trösterer, J. Dzaack|
|Interaction with mainstream interfaces using gaze alone||H. Skovsgaard, J. P. Hansen, J.C. Mateo|
|Stereoscopic Eye Movement Tracking: Challenges and Opportunities in 3D||G. Öqvist Seimyr, A. Appelholm, H. Johansson, R. Brautaset|
|Sampling frequency – what speed do I need?||R. Andersson, M. Nyström, K. Holmqvist|
|Effect of head-distance on raw gaze velocity||M-A Nüssli, P. Jermann|
|Quantifying and modelling factors that influence calibration and data quality||M. Nyström, R. Andersson, J. van de Weijer|
Monday, May 24, 2010
As smartphones evolve, researchers are studying new techniques to ease human-mobile interaction. We propose EyePhone, a novel "hands free" interfacing system capable of driving mobile applications/functions using only the user's eye movements and actions (e.g., a wink). EyePhone tracks the user's eye movement across the phone's display using the camera mounted on the front of the phone; more specifically, machine learning algorithms are used to: i) track the eye and infer its position on the mobile phone display as a user views a particular application; and ii) detect eye blinks that emulate mouse clicks to activate the target application under view. We present a prototype implementation of EyePhone on a Nokia 810, which is capable of tracking the position of the eye on the display and mapping this position to a function that is activated by a wink. At no time does the user have to physically touch the phone display.
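Purely as an illustration of the blink-as-click idea (EyePhone itself uses its own learning pipeline on the phone), here is a desktop OpenCV sketch that treats the disappearance of a detected eye for a few consecutive frames as a blink. The cascade choice and the 3-frame criterion are assumptions.

```python
# Illustrative desktop sketch of blink-as-click using an OpenCV Haar
# cascade; not EyePhone's actual pipeline. The cascade ships with OpenCV.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")
cap = cv2.VideoCapture(0)
missing, BLINK_FRAMES = 0, 3

while True:                                   # stop with Ctrl+C
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) == 0:
        missing += 1
        if missing == BLINK_FRAMES:           # eye vanished briefly: blink
            print("blink -> emulate a click on the application in view")
    else:
        missing = 0
        x, y, w, h = eyes[0]                  # coarse eye position estimate
cap.release()
```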
Thursday, May 20, 2010
If you're new to eye tracking, it should be noted that the reporter's claim that common video-based systems use infrared lasers is just silly. It's essentially light sources working in the IR spectrum (similar to the LED in your remote control).
Friday, April 30, 2010
"GazeLib is a programming library which making real-time low-cost gaze tracking becomes possible. The library provide functions performing remote gaze tracking under ambient lighting condition using a single, low cost, off-the-shelf webcam. Developers can easily build gaze tracking technologies implemented applications in only few lines of code. GazeLib project focuses on promoting gaze tracking technology to consumer-grade human computer interfaces by reducing the price, emphasizing ease-of-use, increasing the extendibility, and enhancing the flexibility and mobility."
Monday, April 26, 2010
More info in the press-release.
Sunday, April 25, 2010
Wednesday, April 14, 2010
"The Open-Source ITU Gaze Tracker"
Gaze tracking offers people with severe motor disabilities the possibility of interacting with a computer by just using eye movements, thereby making users more independent. However, some people (for example users with a severe disability) are excluded from access to gaze interaction due to the high prices of commercial systems (above €10,000). Gaze tracking systems built from low-cost and off-the-shelf components have the potential of facilitating access to the technology and bringing prices down.
The ITU Gaze Tracker is an off-the-shelf system that uses an inexpensive webcam or a video camera to track the user's eye. It is free and open-source, offering users the possibility of trying out gaze interaction technology for a cost as low as €20, and of adapting and extending the software to suit specific needs.
In this talk we will present the open-source ITU Gaze Tracker and show the different scenarios in which the system has been used and evaluated.
Monday, April 12, 2010
Monday, March 29, 2010
Zdf.de: Wenn das Auge die Seite umblättert? ("When the eye turns the page?")
Wired: Eye-Tracking Tablets and the Promise of Text 2.0
More demos at the group's website
"We built an eyetracking system using mass-marketed off-the shelf components at 1/1000 of that cost, i.e. for less then 30 GBP. Once we made such a system that cheap we started thinking of it as a user interface for everyday use for impaired people.. The project was enable by realising that certain mass-marketed web cameras for video game consoles offer impressive performance approaching that of much more expensive research grade cameras.
"From this starting point research in our group has focussed on two parts so far:
1. The TED software, which is composed of two components that can run on two different computers (connected by wireless internet) or on the same computer. The first component is the TED server (Linux-based), which interfaces directly with the cameras, processes the high-speed video feed, and makes the data available (over the internet) to the client software. The client forms the second component; it is written in Java (i.e., it runs on any computer: Windows, Mac, Unix, ...) and provides the mouse-control-via-eye-movements, the “Pong” video game, as well as configuration and calibration functions.
This two-part solution allows the cameras to be connected to a cost-effective netbook (e.g. on a wheelchair) while allowing control of other computers over the internet (e.g. in the living room, office, and kitchen). This software suite, as well as part of the low-level camera driver, was implemented by Ian Beer, Aaron Berk, Oliver Rogers and Timothy Treglown for their undergraduate project in the lab.
Note: the “Pong” video game has a two-player mode, allowing two people to play against each other using two eye-trackers, or eye-tracker vs. keyboard. It is very easy to use: just look where you want the pong paddle to move...
2. The camera-spectacles (visible in most press photos), as well as a two-camera software (Windows-based) able to track eye movements in 3D (i.e. direction and distance) for wheelchair control. These have been built and developed by William Abbott (Dept. of Bioengineering)."
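The TED wire protocol isn't documented here, so the following client sketch is hypothetical: it assumes a server that streams one "x,y" gaze pair per line over TCP, which a client (in the project's case, Java; Python here) could consume to drive the mouse pointer. Host, port, and the line format are placeholders.

```python
# Hypothetical client for the two-part architecture described above;
# the server address and "x,y\n" wire format are assumptions.
import socket

HOST, PORT = "192.168.1.10", 5555             # assumed server address

def gaze_stream(host=HOST, port=PORT):
    """Yield (x, y) gaze coordinates from the (assumed) server feed."""
    with socket.create_connection((host, port)) as s:
        buf = b""
        while True:
            data = s.recv(1024)
            if not data:                      # server closed the feed
                return
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                x, y = map(float, line.split(b","))
                yield x, y

# for x, y in gaze_stream():
#     move_pointer(x, y)   # hypothetical GUI-automation call
```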
The Engineer: Eye-movement game targets disabled
Engadget (German): Neurotechnologie: Pong mit Augenblinzeln gespielt in London ("Neurotechnology: Pong played with eye blinks in London")
Friday, March 26, 2010
Table of Contents
Front matter (cover, title page, table of content, preface)
Back matter (committees and reviewers, industrial supporters, cover image credits, author index)
SESSION: Keynote address
|An eye on input: research challenges in using the eye for computer input control |
I. Scott MacKenzie
Pdf (1.52 MB). View online.
SESSION: Long papers 1 -- Advances in eye tracking technology
|Homography normalization for robust gaze estimation in uncalibrated setups |
Dan Witzner Hansen, Javier San Agustin, Arantxa Villanueva
Pdf(942 KB). View online.
|Head-mounted eye-tracking of infants' natural interactions: a new method |
John M. Franchak, Kari S. Kretch, Kasey C. Soska, Jason S. Babcock, Karen E. Adolph (awarded best paper)
Pdf (3.68 MB). View online.
|User-calibration-free remote gaze estimation system |
Dmitri Model, Moshe Eizenman
Pdf (452 KB). View online.
SESSION: Long papers 2 -- Scanpath representation and comparison methods
|Visual scanpath representation |
Joseph H. Goldberg, Jonathan I. Helfman
Pdf (1.68 MB). View online.
|A vector-based, multidimensional scanpath similarity measure |
Halszka Jarodzka, Kenneth Holmqvist, Marcus Nyström
Pdf (425 KB). View online.
|Scanpath comparison revisited |
Andrew T. Duchowski, Jason Driver, Sheriff Jolaoso, William Tan, Beverly N. Ramey, Ami Robbins
Pdf (1.34 MB). View online.
SESSION: Long papers 3 -- Analysis and interpretation of eye movements
|Scanpath clustering and aggregation |
Joseph H. Goldberg, Jonathan I. Helfman
Pdf (636 KB). View online.
|Match-moving for area-based analysis of eye movements in natural tasks |
Wayne J. Ryan, Andrew T. Duchowski, Ellen A. Vincent, Dina Battisto
Pdf (10.41 MB). View online.
|Interpretation of geometric shapes: an eye movement study |
Miquel Prats, Steve Garner, Iestyn Jowers, Alison McKay, Nieves Pedreira
Pdf (1.73 MB). View online.
SESSION: Long papers 4 -- Analysis and understanding of visual tasks
|Fixation-aligned pupillary response averaging |
Pdf (935 KB). View online.
|Understanding the benefits of gaze enhanced visual search |
Pernilla Qvarfordt, Jacob T. Biehl, Gene Golovchinsky, Tony Dunningan
Pdf (694 KB). View online.
|Image ranking with implicit feedback from eye movements |
David R. Hardoon, Kitsuchart Pasupa
Pdf (409 KB). View online.
SESSION: Long papers 5 -- Gaze interfaces and interactions
|How the interface design influences users' spontaneous trustworthiness evaluations of web search results: comparing a list and a grid interface |
Yvonne Kammerer, Peter Gerjets
Pdf (349 KB). View online.
|Space-variant spatio-temporal filtering of video for gaze visualization and perceptual learning |
Michael Dorr, Halszka Jarodzka, Erhardt Barth
Pdf (188 KB). View online.
|Alternatives to single character entry and dwell time selection on eye typing |
Mario H. Urbina, Anke Huckauf
Pdf (802 KB). View online.
SESSION: Long papers 6 -- Eye tracking and accessibility
|Designing gaze gestures for gaming: an investigation of performance |
Howell Istance, Aulikki Hyrskykari, Lauri Immonen, Santtu Mansikkamaa, Stephen Vickers
Pdf (760 KB). View online.
|ceCursor, a contextual eye cursor for general pointing in windows environments |
Marco Porta, Alice Ravarelli, Giovanni Spagnoli
Pdf (884 KB). View online.
|BlinkWrite2: an improved text entry method using eye blinks |
Behrooz Ashtiani, I. Scott MacKenzie
Pdf (1.50 MB). View online.