I recently tried an Apple Vision Pro. Amazing technology and an incredible experience, but it’s also a Star Trek sensor suite with 23 sensors and 12 cameras that looks all around you and also at you in order to track your gaze and observe your gestures. Now Cornell University researchers have invented a way to track your eyes and gestures without actually watching them.

How? Think sonar.

The navy uses sonar to find submarines or underwater obstacles, and bats use sonar to navigate the world and catch prey. Cornell researchers are using sound to observe tiny changes in your facial muscles to track eye movements. They’re also using sound to track facial expressions so that future VR systems could create believable real-time avatar expressions without putting everyone under visual surveillance. The two implementations of the same technology are called GazeTrak and EyeEcho, respectively, and EyeEcho is the first glasses-based system to continuously and accurately detect facial expressions with sound, Cornell says.

“In a VR environment, you want to recreate detailed facial expressions and gaze movements so that you can have better interactions with other users,” Ke Li, a doctoral student who led the GazeTrak and EyeEcho development, said in a statement.

The technology is tiny, cheap, and low-power, according to Cornell assistant professor Cheng Zhang, so it won’t put much of a dent in battery life.

GazeTrak uses one tiny speaker and four tiny microphones placed around each lens of a pair of glasses to bounce sound off the eyeball and capture the echoes. An AI system then processes the results to determine where a user’s gaze is focused. EyeEcho uses only one speaker and one microphone per side, aimed down at a user’s cheek. Movements there suggest expressions, which can then be recreated on an avatar in virtual reality.
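To make that data flow concrete, here is a minimal Python sketch of the kind of pipeline such a system might use: emit a short, near-inaudible pulse from the speaker, cross-correlate each microphone’s recording with the emitted pulse to get an echo profile, and stack those profiles into features for a trained model. Every detail here (frequencies, array sizes, function names) is an illustrative assumption, not anything published by the Cornell team.

```python
# A hypothetical sketch of an acoustic sensing loop, NOT the Cornell implementation.
import numpy as np
from scipy.signal import chirp, correlate

SAMPLE_RATE = 48_000   # Hz, typical for consumer audio hardware (assumption)
PULSE_MS = 10          # duration of each probe pulse (assumption)

def make_probe_pulse() -> np.ndarray:
    """Generate a short near-ultrasonic frequency sweep to emit from the speaker."""
    t = np.linspace(0, PULSE_MS / 1000, int(SAMPLE_RATE * PULSE_MS / 1000), endpoint=False)
    return chirp(t, f0=17_000, f1=21_000, t1=t[-1], method="linear")

def echo_profile(recorded: np.ndarray, pulse: np.ndarray) -> np.ndarray:
    """Cross-correlate a microphone recording with the emitted pulse.
    Peaks correspond to reflections; their shape shifts as the eye or skin
    surface moves, which is the signal a learned model would consume."""
    corr = correlate(recorded, pulse, mode="valid")
    return np.abs(corr) / (np.linalg.norm(pulse) + 1e-9)

def extract_features(profiles: list[np.ndarray]) -> np.ndarray:
    """Stack the per-microphone echo profiles into one feature vector."""
    return np.concatenate([p[:256] for p in profiles])

# In a real device, `rec` would come from the four microphones around each lens,
# and the features would feed a trained gaze-estimation network. Here synthetic
# noise with an embedded, attenuated echo stands in for the hardware.
pulse = make_probe_pulse()
rng = np.random.default_rng(0)
recordings = []
for mic in range(4):
    rec = rng.normal(0, 0.01, pulse.size + 512)
    rec[100:100 + pulse.size] += 0.1 * pulse   # a delayed, faint "echo"
    recordings.append(rec)

profiles = [echo_profile(rec, pulse) for rec in recordings]
features = extract_features(profiles)
print(features.shape)   # the vector a gaze model would take as input
```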

Both systems, of course, are prototypes at this point, but the technology could be incorporated into future eyewear such as Meta’s smart glasses, or into VR headsets that use eye tracking for system navigation and control, like Apple Vision Pro.

“With this technology, users can have hands-free video calls through an avatar, even in a noisy café or on the street,” Cornell says. “While some smartglasses have the ability to recognize faces or distinguish between a few specific expressions, currently, none track expressions continuously like EyeEcho.”

According to Cornell, these technologies can run for several hours on a typical smart glasses battery, or for a full day on a VR headset battery. Since both are still prototypes, those figures could improve as well.
