Despite being the size of a grain of salt, a new microscopic camera can capture crisp, full-colour images on par with normal lenses that are 500,000 times larger.
The ultra-compact optical device was developed by a team of researchers from Princeton University and the University of Washington.
It overcomes problems with previous micro-sized camera designs, which have tended to take distorted and fuzzy images with very limited fields of view.
The new camera could allow super-small robots to sense their surroundings, or even help doctors see problems within the human body.
Within a traditional, full-sized camera, a series of curved glass or plastic lenses is used to bend incoming light rays into focus on a piece of film or a digital sensor.
In contrast, the tiny camera developed by computer scientist Ethan Tseng and his colleagues relies on a special ‘metasurface’ studded with 1.6 million cylindrical posts, each the size of a single HIV particle, which can modulate the behaviour of light.
Each of the posts on the 0.5-millimetre-wide surface has a unique shape that allows it to operate like an antenna.
Machine-learning algorithms then interpret each post’s interaction with light, transforming it into an image.
The photographs that the tiny device takes offer the highest-quality images with the widest field of view for any full-colour metasurface camera developed to date.
Previous designs have tended to suffer from major image distortions, restricted fields of view and problems capturing the full spectrum of visible light – known as ‘RGB’ imaging, because it relies on mixing the primary colours of light (red, green and blue) to produce every other colour, much as children mix red, yellow and blue paints at school, although light mixes additively rather than like pigment.
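To make the additive mixing concrete, here is a minimal illustrative sketch (not from the study) in which each RGB channel runs from 0 (off) to 255 (full intensity), so combining red and green light yields yellow, and all three together yield white:

```python
# Illustrative sketch of additive RGB colour mixing.
# Each channel ranges from 0 (off) to 255 (full intensity).
def mix_light(*colours):
    """Additively combine RGB colours, clamping each channel at 255."""
    return tuple(min(255, sum(c[i] for c in colours)) for i in range(3))

RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)

print(mix_light(RED, GREEN))        # (255, 255, 0) -> yellow
print(mix_light(RED, GREEN, BLUE))  # (255, 255, 255) -> white
```

This is the opposite of paint, where pigments absorb light and mixing them subtracts wavelengths rather than adding them.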
Aside from a little blurring near the edges of the frame, the images the tiny camera can capture are comparable to those taken with a regular, full-sized camera setup featuring a series of six refractive lenses.
The camera can also function well in natural light, rather than the pure laser light or other highly idealised conditions that previous metasurface cameras required to produce good-quality images.
‘It’s been a challenge to design and configure these little microstructures to do what you want,’ said Mr Tseng, who is based at Princeton University in New Jersey.
‘For this specific task of capturing large-field-of-view RGB images, it’s challenging because there are millions of these little microstructures [on the metasurface], and it’s not clear how to design them in an optimal way.’
To overcome this, University of Washington optics expert Shane Colburn created a digital model that could simulate metasurface designs and their photographic output, allowing the researchers to assess and refine different configurations.
According to Professor Colburn, the sheer number of antennae on each surface and the complexity of their interactions with light meant that each simulation used ‘massive amounts of memory and time.’
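The general idea behind such simulation-guided design can be sketched very roughly: simulate the optic's output from its design parameters, score the result against a target, and nudge the parameters to reduce the error. The toy model below is entirely hypothetical and vastly simpler than the team's simulator; a single parameter controls how much a one-dimensional "optic" blurs a scene, and finite-difference gradient descent recovers the design that matches a target image:

```python
# Toy sketch of simulation-guided optical design (hypothetical, greatly
# simplified): one design parameter, tuned by gradient descent on the
# simulated image error.
def simulate(scene, blur):
    """Forward model: blend each sample with its neighbours by 'blur' (0..1)."""
    n = len(scene)
    return [(1 - blur) * scene[i]
            + blur * 0.5 * (scene[i - 1] + scene[(i + 1) % n])
            for i in range(n)]

def loss(scene, target, blur):
    """Sum of squared differences between simulated and target images."""
    return sum((s - t) ** 2 for s, t in zip(simulate(scene, blur), target))

scene = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
target = simulate(scene, 0.3)        # pretend the ideal design has blur = 0.3

blur, step, eps = 0.9, 0.05, 1e-4    # start far from the optimum
for _ in range(200):
    # Central finite-difference estimate of d(loss)/d(blur).
    grad = (loss(scene, target, blur + eps)
            - loss(scene, target, blur - eps)) / (2 * eps)
    blur -= step * grad

print(round(blur, 2))  # converges towards 0.3
```

The real system has millions of parameters and a physically accurate light-propagation model, which is why, as Colburn notes, each simulation is so expensive; the optimisation loop itself, however, follows this same assess-and-refine pattern.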
‘Although the approach to optical design is not new, this is the first system that uses a surface optical technology in the front end and neural-based processing in the back,’ said optical engineer Joseph Mait, who was not involved in the study.
‘To jointly design the size, shape and location of the metasurface’s million features and the parameters of the post-detection processing to achieve the desired imaging performance,’ Mr Mait added, was a ‘Herculean task’.
The team is now working to add computational abilities to the camera, both to further enhance image quality and to incorporate capabilities like object detection, which would be useful for practical applications.

The full findings of the study were published in the journal Nature Communications.