Raguram et al., Reconstruction of Reflected Input (2011)

Extreme philology: typed text can be recovered from a reflection of the device’s screen as its owner types, even when the device is moving and the text itself is never directly visible to the camera. For example, if you sit at the front of a vehicle and type on a mobile phone, someone at the back can film the reflection in your glasses or a window and extract the typed text from the footage. The process is innovative, but its constituent elements are not; it chains digital magnification, image stabilization, difference matting, and optical character recognition into a single hair-raising violation of privacy (a toy sketch of such a chain follows the abstract excerpt below):

Rahul Raguram, Andrew White, Dibyendusekhar Goswami, Fabian Monrose and Jan-Michael Frahm. ‘iSpy: Automatic Reconstruction of Typed Input from Compromising Reflections’. ACM Conference on Computer and Communications Security (CCS), 2011. [author’s site / PDF]

From the Abstract

Using footage captured in realistic environments (e.g., on a bus), we show that we are able to reconstruct fluent translations of recorded data in almost all of the test cases, correcting users’ typing mistakes at the same time. We believe these results highlight the importance of adjusting privacy expectations in response to emerging technologies.
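
To make that chain of steps concrete, here is a minimal, heavily simplified sketch in Python using OpenCV and Tesseract (via pytesseract). It is not the authors’ implementation and it glosses over everything that makes iSpy work in practice; the function name reconstruct_from_reflection, the fixed thresholds, and the choice of libraries are all illustrative assumptions. It merely chains the four stages named above: stabilize each frame against the previous one, difference the stabilized frames to isolate what changed (a key pop-up, in the ideal case), digitally magnify the changed patch, and hand it to an off-the-shelf OCR engine.

```python
# Illustrative sketch only, not the iSpy system itself. Assumes a video file
# showing a reflected on-screen keyboard, and chains the steps named in the
# post: stabilization, frame differencing, digital magnification, and OCR.
import cv2
import pytesseract  # requires the Tesseract OCR engine to be installed


def reconstruct_from_reflection(video_path: str, scale: int = 4) -> str:
    cap = cv2.VideoCapture(video_path)
    ok, prev_bgr = cap.read()
    if not ok:
        return ""
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    recovered = []

    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

        # Stabilization: track corner features from the previous frame into
        # the current one and estimate a rigid transform between them.
        pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                           qualityLevel=0.01, minDistance=10)
        if pts_prev is not None:
            pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray,
                                                          pts_prev, None)
            good = status.ravel() == 1
            if good.sum() >= 3:
                M, _ = cv2.estimateAffinePartial2D(pts_cur[good],
                                                   pts_prev[good])
                if M is not None:
                    # Warp the current frame into the previous frame's
                    # coordinate system to cancel camera/device motion.
                    gray = cv2.warpAffine(gray, M, gray.shape[::-1])

        # Crude difference matting: whatever changed between stabilized
        # frames (ideally a key pop-up) survives the threshold.
        diff = cv2.absdiff(gray, prev)
        _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

        # Digital magnification plus OCR on the changed region, if any.
        if cv2.countNonZero(mask) > 50:
            x, y, w, h = cv2.boundingRect(mask)
            patch = cv2.resize(gray[y:y + h, x:x + w], None,
                               fx=scale, fy=scale,
                               interpolation=cv2.INTER_CUBIC)
            # Tesseract in single-character mode (--psm 10) as a crude
            # stand-in for a proper key-pop-up classifier.
            ch = pytesseract.image_to_string(patch, config="--psm 10").strip()
            if len(ch) == 1 and ch.isprintable():
                recovered.append(ch)

        prev = gray

    cap.release()
    return "".join(recovered)
```

A real attack would of course need far more than this; the point, as the post notes, is that every stage is an off-the-shelf computer-vision or OCR building block.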
