The predictions of the model are based on the study of so-called saccadic eye movements: fast, jerky shifts of gaze from one object to another that can suggest the next fixation point. The relationship between the amplitude, duration, and maximum speed of saccades follows known empirical regularities. However, such empirical models are not accurate enough for eye trackers to predict eye movements directly. The researchers therefore turned to a mathematical model that yields the parameters of saccadic movements, and these data were then used to calculate the foveated region of an image.
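The idea of predicting saccade parameters and deriving a foveated region from them can be illustrated with a minimal sketch. The saturating velocity-amplitude relation and all constants below (`V_MAX`, `C`, the 5-degree foveal radius) are illustrative assumptions, not values from the study:

```python
import math

# Hypothetical constants for the velocity-amplitude relation (assumed,
# not taken from the RUDN study).
V_MAX = 500.0  # deg/s, asymptotic peak velocity
C = 14.0       # deg, saturation constant

def peak_velocity(amplitude_deg: float) -> float:
    """Predicted peak saccade velocity (deg/s): grows with amplitude
    and saturates toward V_MAX for large saccades."""
    return V_MAX * (1.0 - math.exp(-amplitude_deg / C))

def foveated_region(gaze_x: float, gaze_y: float, radius_deg: float = 5.0):
    """Circular high-detail region centred on the predicted fixation point;
    everything outside it can be rendered at lower resolution."""
    return {"cx": gaze_x, "cy": gaze_y, "radius": radius_deg}

# A 10-degree saccade yields a predicted peak velocity of ~255 deg/s
# under these assumed constants.
print(round(peak_velocity(10.0), 1))
print(foveated_region(12.0, -3.5))
```

The renderer would re-centre the region returned by `foveated_region` at each predicted fixation, so high-resolution rendering stays confined to where the eye is about to land.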
The new method was tested experimentally using a VR headset and AR glasses. The eye tracker based on the mathematical model was able to detect minor eye movements of 3.4 arc minutes (about 0.05 degrees), with an error of 6.7 arc minutes (0.11 degrees). Moreover, the team managed to eliminate the calculation error caused by blinking: a filter included in the model reduced this inaccuracy tenfold. The results of the work could be used in VR modelling, video games, and in medicine for surgery and the diagnosis of vision disorders.
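The article does not describe the blink filter's design, but the general idea of suppressing blink artifacts in a gaze signal can be sketched with a simple sliding median filter; the window size and sample values below are illustrative assumptions:

```python
def median_filter(samples, window=5):
    """Sliding median over a 1-D gaze signal (e.g. horizontal gaze angle
    in degrees). Short spikes such as blink artifacts are replaced by
    the local median, while slower genuine movements pass through."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        w = sorted(samples[lo:hi])
        out.append(w[len(w) // 2])
    return out

# Synthetic gaze trace with a single blink-like spike at index 3.
gaze = [1.0, 1.1, 1.0, 9.9, 1.2, 1.1, 1.0]
print(median_filter(gaze))  # the spike is suppressed to ~1.1
```

A one-sample outlier cannot dominate a five-sample median, so the blink spike is removed without the lag a moving average would introduce.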
"We have effectively solved the issue with the foveated rendering technology that existed in the mass production of VR systems. In the future, we plan to calibrate our eye tracker to reduce the impact of display or helmet movements against a user's head," added Prof. Viktor Belyaev from RUDN University.
The results of the study were published in the SID Symposium Digest of Technical Papers.