A Digital Camera That Can Help You See Beyond Walls

Light travels in straight lines. This is why, on a cloudless night, we can look at stars that are thousands of light years away from Earth. But when objects are around a bend in the road, or behind a tree or a wall, too little of their light reaches us directly for us to make out their appearance.

A group of scientists from Boston University were audacious enough to believe we still ought to be able to see them.

In a study published earlier this month, Vivek Goyal and his colleagues reported how a simple digital camera working with a computer algorithm could help people see objects hidden behind barriers.

With the camera pointed at a wall in a dark room, Goyal’s team could recreate images displayed on a screen blocked from the camera’s view.

Also read: The Next Generation of Cameras Might See Behind Walls

This is called non-line-of-sight (NLOS) imaging, and it might seem exotic at first. It’s really not; in fact, it’s based on an idea perfected during World War I.

At the time, soldiers hiding in deep trenches needed a way to track enemy movement on the ground. That’s how the periscope came to be. It uses two mirrors oriented such that light entering at one height is simply reflected until it emerges at a lower height.

Goyal’s team did something similar: they used reflected light to recreate images displayed on an LED screen placed out of sight. But instead of using a mirror, they banked on rays of light reflected off of a wall.

“When light bounces off of a matte wall, it diffuses in all directions. This doesn’t help recreate images,” Goyal told The Wire. “But when you place a small opaque object in front of the screen, the relation changes completely.”

The experimental setup for NLOS imaging. Credit: Charles Saunders

The opaque object blocks some of the light, not all of it. This forces light from different parts of the LED screen to take different paths to the wall. As a result, the light becomes unmixed – like a complicated braid being forced to unravel into distinct strands.

This isn’t useful for the unaided human eye, but a camera sensor can pick up on the change. Digital cameras capture light in the primary colour format (red, green and blue), so they record vital colour information as well. Some of this comes from the shadow and penumbra – “the light grey areas adjoining the darker regions of the shadow” – of the opaque object.
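
To get a feel for why the occluder matters, here is a minimal sketch in Python – a toy one-dimensional model with made-up dimensions and positions, not the team’s code. Each point on the wall adds up light from every screen pixel except along paths the occluder blocks, and it is this shadow structure that makes the wall measurements carry information about the hidden scene.

```python
import numpy as np

# Toy 1-D model (illustrative only): a hidden screen of n_scene pixels
# casts light onto n_wall observed points on a wall, and an opaque
# occluder between the two planes blocks some screen-to-wall paths.
n_scene, n_wall = 32, 64
scene_pos = np.linspace(0.0, 1.0, n_scene)   # hidden-screen pixel positions
wall_pos = np.linspace(0.0, 1.0, n_wall)     # observed wall positions
occ_lo, occ_hi = 0.45, 0.55                  # occluder extent (assumed known here)

def visible(s: float, w: float) -> float:
    """1.0 if the ray from screen point s to wall point w clears the
    occluder (modelled as sitting halfway between the two planes)."""
    crossing = 0.5 * (s + w)                 # where the ray meets the occluder plane
    return 0.0 if occ_lo <= crossing <= occ_hi else 1.0

# Light-transport matrix A. Without the occluder its rows would be
# nearly identical, so the measurement could not be inverted; the
# shadow and penumbra are what make the rows differ.
A = np.array([[visible(s, w) for s in scene_pos] for w in wall_pos])

scene = np.zeros(n_scene)
scene[8:14] = 1.0                            # a bright patch on the hidden screen
wall_photo = A @ scene                       # what the camera photographs
```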

“After recording the images,” Goyal said, “it’s only a matter of solving a linear inverse problem.” That is what the algorithm did. In this way, the team successfully recreated four out of four images displayed on the LED screen. With knowledge of the opaque object’s position and dimensions, the algorithm required less than a minute to figure out what was on display.
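
The inversion itself can be sketched in a few lines. Below is a minimal, assumed approach – a Tikhonov-regularised least-squares solve over a stand-in transport matrix – rather than the team’s actual solver, which also has to cope with noise models and colour channels.

```python
import numpy as np

# Recovering the hidden scene x from the wall photo y = A x + noise is a
# linear inverse problem. A simple, stable estimate minimises
#     ||A x - y||^2 + lam * ||x||^2
# where lam trades off data fit against noise amplification.
rng = np.random.default_rng(0)
n_wall, n_scene = 64, 32
A = (rng.random((n_wall, n_scene)) > 0.5).astype(float)  # stand-in transport matrix
x_true = np.zeros(n_scene)
x_true[8:14] = 1.0                                       # hidden bright patch
y = A @ x_true + 0.01 * rng.standard_normal(n_wall)      # noisy camera measurement

lam = 1e-3                                               # regularisation weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)
# x_hat now approximates x_true, up to the measurement noise.
```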

But even when the position was not known, the algorithm could estimate it from the object’s shadow and then go on to recreate the image. The only downside was that this took about 19 minutes. The software could also approximate the position of the LED screen with respect to the wall.
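
One generic way such an adjustment can work – an assumption about the general flavour of the approach, not the paper’s exact method – is to search over candidate occluder positions, rebuilding the transport matrix for each and keeping whichever best explains the photograph:

```python
import numpy as np

def reconstruct(A: np.ndarray, y: np.ndarray, lam: float = 1e-3) -> np.ndarray:
    """Tikhonov-regularised least-squares estimate of the hidden scene."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

def fit_error(A: np.ndarray, y: np.ndarray) -> float:
    """How poorly a hypothesised transport matrix explains the photo."""
    return float(np.linalg.norm(A @ reconstruct(A, y) - y))

# `build_transport(pos)` is hypothetical: it would rebuild the matrix for
# an occluder at candidate position `pos`, as in the earlier sketch.
# best_pos = min(candidates, key=lambda p: fit_error(build_transport(p), y))
```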

The first column shows what the screen displayed. Row 1 after that shows what was seen; row 2, what was measured; row 3, the reconstructed image; and row 4, the final construction with reduced noise. Source: https://doi.org/10.1038/s41586-018-0868-6

“It is a very nice demonstration of how computational processing can tackle different kinds of problems,” Daniele Faccio, a professor of quantum technologies at the University of Glasgow, UK, told The Wire.

This technique – called computational periscopy – deviates from work already done in the area of NLOS imaging. So far, most, if not all, scientists have used pulsed lasers to look around bends, timing how long the pulses take to return to the source after hitting a hidden object, and leaving a computer to do the math.
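
The arithmetic underlying those laser systems is straightforward: light covers roughly 30 cm per nanosecond, so a pulse’s round-trip time pins down the total length of the path it travelled. A toy illustration (names and numbers are for illustration only):

```python
C = 299_792_458.0  # speed of light, m/s

def pulse_path_length(round_trip_time_s: float) -> float:
    """Total distance a laser pulse covered before returning to the
    detector, e.g. laser -> wall -> hidden object -> wall -> detector."""
    return C * round_trip_time_s

# A pulse that comes back 10 nanoseconds after firing travelled about 3 m;
# combining many such timings lets a computer triangulate the hidden object.
print(pulse_path_length(10e-9))  # ~2.998 m
```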

But in Goyal and co.’s case, “we see that incoherent light emitted from the object itself – in this case a screen – can also be used, with no timing or synchronisation required,” Faccio said.

Laser-based detection and ranging systems have gained traction in surveying technologies. However, many scientists believe they may not be the ultimate solution for NLOS imaging, especially when used on the road.

“There are concerns about laser illumination both in terms of safety and practicality,” Faccio explained. “For instance, what will happen when all cars on the roads are shining their lasers at the same time? How will all these systems interfere with each other?”

And then there are the use-cases when the observer wants to remain hidden. “Shining lasers is a giveaway – it’s like shouting ‘I am over here!’. Under these circumstances, passive approaches like looking with a camera using ambient light” could be more desirable.

And without lasers, the technology is also cheaper. “The unique thing about this strategy is that it’s so simple,” Goyal said.

Also read: Cosmic-Ray Imaging Finds Hidden Structure in Egypt’s Great Pyramid

Then again, it’s not time to rule out lasers – especially with the military looking on: the study was funded by the US Defense Advanced Research Projects Agency. Goyal et al.’s technique has only been tested in a dark room, and whenever the team increased the amount of ambient light, the algorithm would flop.

This is a big roadblock. But because the ‘dark-room’ tests have been so successful, Goyal is hopeful that they can improve the technology.

It’s only at this point that they will be able to think about applications, whether civilian or military.

Their experiment was actually inspired by another one at the Massachusetts Institute of Technology.

“They used a moving obstruction during a video recording of a wall facing a window, and used the video to recreate the image of what lay outside the window. It was a richer data set,” Goyal said. “We wanted to see if the experiment would work in its most basic, simple, elemental form.”

It did. They had stripped the setup down to a single camera – the sensor – and an algorithm – the analyser. Setting aside the darkness caveat, it’s amazing that this is all it takes to be able to see things that we’re not supposed to be seeing.

Sarah Iqbal is a freelance science writer.
