Extracting audio from visual information: Algorithm recovers speech from the vibrations of a potato-chip bag filmed through soundproof glass, by Larry Hardesty.
From the post:
Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass.
In other experiments, they extracted useful audio signals from videos of aluminum foil, the surface of a glass of water, and even the leaves of a potted plant. The researchers will present their findings in a paper at this year’s Siggraph, the premier computer graphics conference.
“When sound hits an object, it causes the object to vibrate,” says Abe Davis, a graduate student in electrical engineering and computer science at MIT and first author on the new paper. “The motion of this vibration creates a very subtle visual signal that’s usually invisible to the naked eye. People didn’t realize that this information was there.”
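To make "turning video into audio" a little more concrete, here is a minimal sketch of the general idea, not the researchers' algorithm (their paper recovers sub-pixel motion far more carefully). It simply tracks the mean brightness of a region of interest frame by frame and treats that one-number-per-frame signal as audio. The file name `chip_bag.avi`, the `roi` parameter, and the use of OpenCV, NumPy, and SciPy are all assumptions for illustration.

```python
# Crude illustration only -- NOT the published algorithm.
# Idea: a vibrating object produces tiny per-frame brightness changes;
# averaging a region of each frame gives one "audio sample" per frame.

import numpy as np
import cv2
from scipy.io import wavfile

def recover_audio_sketch(video_path, roi=None):
    """Return (signal, frame_rate). roi = (y0, y1, x0, x1), hypothetical crop."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        if roi is not None:
            y0, y1, x0, x1 = roi
            gray = gray[y0:y1, x0:x1]
        samples.append(gray.mean())      # one sample per video frame
    cap.release()

    signal = np.asarray(samples)
    signal -= signal.mean()                    # drop the DC (average brightness)
    signal /= (np.abs(signal).max() + 1e-12)   # normalize to [-1, 1]
    return signal, int(fps)

if __name__ == "__main__":
    audio, rate = recover_audio_sketch("chip_bag.avi")   # hypothetical input file
    wavfile.write("recovered.wav", rate, (audio * 32767).astype(np.int16))
```

Note the obvious limitation of such a scheme: by the Nyquist limit, video captured at F frames per second can only represent vibration frequencies up to F/2, so ordinary 30 fps footage tops out far below speech frequencies; recovering intelligible speech requires very high frame rates or cleverer tricks, which is part of what makes the paper's result remarkable.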
A big shout-out to MIT, Microsoft, and Adobe for taking privacy to a new low!
The article cites “…obvious applications in law enforcement and forensics….”
Illegitimate governments (I could name a few) think of their activities as “law enforcement.”
You may want to read up on laser microphones. Being entirely passive, this latest technique avoids some of the detection difficulties laser microphones face: there is no beam for the target to detect.
I don’t know if wavy glass, one defense against laser microphones, will be effective against this new privacy threat.
On the other hand, there’s always the light switch. 😉