Apple Vision Pro’s Eye Tracking Exposed What People Type

The Vision Pro uses 3D avatars on calls and for streaming. These researchers used eye tracking to work out the passwords and PINs people typed with their avatars.

You can tell a lot about someone from their eyes. They can indicate how tired you are, the type of mood you’re in, and potentially provide clues about health problems. But your eyes could also leak more secretive information: your passwords, PINs, and messages you type.

Today, a group of six computer scientists is revealing a new attack against Apple’s Vision Pro mixed reality headset in which exposed eye-tracking data allowed them to decipher what people entered on the device’s virtual keyboard. The attack, dubbed GAZEploit and shared exclusively with WIRED, allowed the researchers to successfully reconstruct passwords, PINs, and messages people typed with their eyes.

“Based on the direction of the eye movement, the hacker can determine which key the victim is now typing,” says Hanqiu Wang, one of the leading researchers involved in the work. They identified the correct letters people typed in passwords 77 percent of the time within five guesses and 92 percent of the time in messages.

To be clear, the researchers did not gain access to Apple’s headset to see what they were viewing. Instead, they worked out what people were typing by remotely analyzing the eye movements of a virtual avatar created by the Vision Pro. This avatar can be used in Zoom calls, Teams, Slack, Reddit, Tinder, Twitter, Skype, and FaceTime.

The researchers alerted Apple to the vulnerability in April, and the company issued a patch to stop the potential for data to leak at the end of July. It is the first attack to exploit people’s “gaze” data in this way, the researchers say. The findings underline how people’s biometric data—information and measurements about your body—can expose sensitive information and be used as part of the burgeoning surveillance industry.

Eye Spy

Your eyes are your mouse when using the Vision Pro. When typing, you look at a virtual keyboard that hovers in front of you and can be moved and resized. When you’re looking at the right letter, tapping two fingers together registers as a click.

What you do stays within the headset, but if you want to jump on a quick Zoom, FaceTime some friends, or livestream, you’ll likely end up using a Persona—the sort of ghostly 3D avatar the Vision Pro creates by scanning your face.

“These technologies … can inadvertently expose critical facial biometrics, including eye-tracking data, through video calls where the user’s virtual avatar mirrors their eye movements,” the researchers write in a preprint paper detailing their findings. Wang says the work relies on two biometrics that can be extracted from recordings of a Persona: the eye aspect ratio (EAR) and eye gaze estimation. (As well as Wang, the research was completed by Siqi Dai, Max Panoff, and Shuo Wang from the University of Florida, Haoqi Shan from blockchain security company CertiK, and Zihao Zhan from Texas Tech University.)
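The eye aspect ratio the researchers mention is a standard measure from the blink-detection literature: the ratio of an eye's vertical opening to its horizontal width, computed from six landmark points around the eye. A minimal sketch of that standard formula (the landmark positions here are illustrative, and the paper's exact landmark set may differ):

```python
from math import dist

def eye_aspect_ratio(landmarks):
    """Standard EAR from six 2D eye landmarks p1..p6: p1 and p4 are
    the horizontal corners; (p2, p6) and (p3, p5) are top/bottom
    pairs. EAR falls toward 0 as the eye closes, which makes it a
    simple blink signal."""
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = 2 * dist(p1, p4)
    return vertical / horizontal

# Illustrative landmarks: an open eye has tall vertical gaps
# relative to its width; a nearly closed eye does not.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]

print(eye_aspect_ratio(open_eye))    # ~0.67
print(eye_aspect_ratio(closed_eye))  # ~0.07
```

Because a Persona mirrors the wearer's eyes, this ratio can be estimated from the avatar's video frames alone, with no access to the headset's internal sensors.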

The GAZEploit attack consists of two parts, says Zhan, one of the lead researchers. First, the researchers created a way to identify when someone wearing the Vision Pro is typing by analyzing the 3D avatar they are sharing. For this, they trained a recurrent neural network, a type of deep learning model, with recordings of 30 people’s avatars while they completed a variety of typing tasks.

When someone is typing using the Vision Pro, their gaze fixates on the key they are likely to press, the researchers say, before quickly moving to the next key. “When we are typing our gaze will show some regular patterns,” Zhan says.

Wang says these patterns are more common during typing than if someone is browsing a website or watching a video while wearing the headset. “During tasks like gaze typing, the frequency of your eye blinking decreases because you are more focused,” Wang says. In short: Looking at a QWERTY keyboard and moving between the letters is a pretty distinct behavior.
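The paper's classifier is a recurrent neural network trained on avatar recordings, but the signals it keys on can be illustrated with a much cruder stand-in: a threshold check on how often the gaze is sitting still (fixating) and how often the wearer blinks. All thresholds below are hypothetical, chosen only to show the shape of the heuristic:

```python
def looks_like_typing(gaze_points, blinks_per_minute,
                      max_blink_rate=10.0, fixation_jitter=0.02,
                      min_fixation_frac=0.6):
    """Crude, hypothetical stand-in for the paper's RNN classifier:
    gaze typing shows long stable fixations punctuated by quick
    jumps between keys, plus a reduced blink rate."""
    if blinks_per_minute > max_blink_rate:
        return False
    # Count frame-to-frame movements small enough to call a fixation.
    steps = list(zip(gaze_points, gaze_points[1:]))
    still = sum(
        1 for (x0, y0), (x1, y1) in steps
        if abs(x1 - x0) < fixation_jitter and abs(y1 - y0) < fixation_jitter
    )
    return still / len(steps) >= min_fixation_frac

# Typing-like trace: dwell on one point, jump, dwell on another.
typing = [(0.2, 0.5)] * 8 + [(0.6, 0.5)] * 8
# Browsing-like trace: continuous drift, as when scrolling a page.
browsing = [(0.05 * i, 0.5) for i in range(16)]
```

A trained model would replace the hand-set thresholds, but the underlying intuition (dwell, jump, dwell, with fewer blinks) is the same one the researchers describe.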

The second part of the attack, Zhan explains, uses geometric calculations to work out where someone has positioned the keyboard and how large they’ve made it. “The only requirement is that as long as we get enough gaze information that can accurately recover the keyboard, then all following keystrokes can be detected.”
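Once the keyboard's position and scale have been recovered, decoding becomes a snapping problem: each fixation is matched to the nearest key center. A minimal sketch of that final step, assuming a recovered origin and key size (the row stagger here is illustrative, not the Vision Pro's exact layout):

```python
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_centers(origin=(0.0, 0.0), key_w=1.0, key_h=1.0):
    """Key centers for a QWERTY layout, given the keyboard origin
    and key size an attacker has estimated from gaze geometry."""
    ox, oy = origin
    centers = {}
    for r, row in enumerate(ROWS):
        for c, ch in enumerate(row):
            x = ox + (c + 0.5 * r) * key_w  # staggered rows
            y = oy + r * key_h
            centers[ch] = (x, y)
    return centers

def snap_to_key(gaze, centers):
    """Decode a fixation point as the nearest key center."""
    gx, gy = gaze
    return min(centers, key=lambda ch: (centers[ch][0] - gx) ** 2
                                       + (centers[ch][1] - gy) ** 2)

centers = key_centers()
print(snap_to_key((0.1, 0.05), centers))  # 'q'
```

This is why recovering the keyboard's geometry matters so much: with the wrong origin or scale, every fixation snaps to the wrong key.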

Combining these two elements, they were able to predict the keys someone was likely to be typing. In a series of lab tests, the researchers had no knowledge of the victim’s typing habits or speed, and did not know where the keyboard was placed. Even so, they could predict the correct letters typed, within a maximum of five guesses, 92.1 percent of the time for messages, 77 percent for passwords, 73 percent for PINs, and 86.1 percent for emails, URLs, and webpages. (On the first guess, the letters would be right between 35 and 59 percent of the time, depending on what kind of information they were trying to work out.) Duplicate letters and typos add extra challenges.

“It’s very powerful to know where someone is looking,” says Alexandra Papoutsaki, an associate professor of computer science at Pomona College who has studied eye tracking for years and reviewed the GAZEploit research for WIRED.

Papoutsaki says the work stands out as it only relies on the video feed of someone’s Persona, making it a more “realistic” space for an attack to happen when compared to a hacker getting hands-on with someone’s headset and trying to access eye tracking data. “The fact that now someone, just by streaming their Persona, could expose potentially what they’re doing is where the vulnerability becomes a lot more critical,” Papoutsaki says.

While the attack was created in lab settings and hasn’t been used against anyone using Personas in the real world, the researchers say there are ways hackers could have abused the data leakage. They say, theoretically at least, a criminal could share a file with a victim during a Zoom call that prompts them to log into, say, a Google or Microsoft account. The attacker could then record the Persona while their target logs in and use the attack method to recover their password and access their account.

Quick Fixes

The GAZEploit researchers reported their findings to Apple in April and subsequently sent the company their proof-of-concept code so the attack could be replicated. Apple fixed the flaw in a Vision Pro software update at the end of July, which stops the sharing of a Persona if someone is using the virtual keyboard.

An Apple spokesperson confirmed the company fixed the vulnerability, saying it was addressed in visionOS 1.3. The company’s software update notes do not mention the fix, but it is detailed in the company’s security-specific note. The researchers say Apple assigned CVE-2024-40865 to the vulnerability, and they recommend people download the latest software updates.

The research highlights how people’s personal data can be inadvertently leaked or exposed. In recent years, police have extracted fingerprints from photographs posted online and identified people by the way they walk in CCTV footage. Law enforcement has also started testing Vision Pros as part of its surveillance efforts.

These privacy and surveillance concerns are likely to become more pressing as wearable technology becomes smaller, cheaper, and able to capture more information about people. “As wearables like glasses, XR, and smartwatches become more integrated into everyday life, users often overlook how much information these devices can collect about their activities and intentions, and the associated privacy risks,” says Cheng Zhang, an assistant professor at Cornell University who also reviewed the Vision Pro research at WIRED’s request. (Zhang’s work has involved creating wearables to help interpret human behaviors.)

“This paper clearly demonstrates one specific risk with gaze typing, but it’s just the tip of the iceberg,” Zhang says. “While these technologies are developed for positive purposes and applications, we also need to be aware of the privacy implications and start taking measures to mitigate potential risks for the future generation of everyday wearables.”

Update 2:30 pm ET, September 12, 2024: Following publication, Apple directed WIRED to a security note where the Vision Pro fix is mentioned. We've updated the story to include this note.