Pages

Monday, July 15, 2013

Imaging Congress - final summary


Hey everybody.. here's the last installment of my Imaging Congress talk summaries. Beyond the talks covered in the last few posts, there were plenty of other interesting ones, several in the areas of light fields/plenoptic systems, 3D capture and display, and structured light. I can't write about everything, but here's a shot!

Quick ride in the Metro, the Washington Monument and the US Capitol!

Chien-Hung Lu from Princeton spoke about light field imaging. They use two images, a traditional high-resolution image and a Shack-Hartmann wavefront sensor measurement, to obtain higher-resolution light field images. They use an iterative Gerchberg-Saxton-like algorithm that upsamples the low-resolution light field, Fourier transforms it, applies a Fourier-space constraint derived from the high-resolution traditional image, inverse Fourier transforms the result, and reinforces the real and non-negative constraints on the resulting light field. They showed results on a microscope with improved resolution.
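
Here's my own rough sketch of how I understood that loop, just to make the steps concrete. This is not the authors' code: the array shapes, the magnitude-replacement form of the Fourier constraint, and the way the correction is spread over angles are all my assumptions.

```python
import numpy as np

def gs_like_lightfield(lf_lowres, img_highres, upsample, n_iter=50):
    """Illustrative Gerchberg-Saxton-style loop (my reading of the talk, not the authors' code).

    lf_lowres:   low-resolution light field, shape (Nu, Nv, ny, nx)
    img_highres: conventional high-resolution image, shape (ny*upsample, nx*upsample)
    """
    Nu, Nv, ny, nx = lf_lowres.shape
    # Start from a naive (nearest-neighbour) upsampling of each angular view -- an assumption.
    lf = np.repeat(np.repeat(lf_lowres, upsample, axis=2), upsample, axis=3).astype(float)
    F_ref = np.fft.fft2(img_highres)                       # spectrum of the high-res image

    for _ in range(n_iter):
        proj = lf.sum(axis=(0, 1))                         # spatial projection of the light field
        F_proj = np.fft.fft2(proj)
        # Fourier-space constraint: keep the high-res image's magnitude, current phase
        # (one plausible choice; the actual constraint in the talk may differ).
        F_new = np.abs(F_ref) * np.exp(1j * np.angle(F_proj))
        correction = np.real(np.fft.ifft2(F_new)) - proj
        lf += correction[None, None, :, :] / (Nu * Nv)     # spread the correction over angles
        lf = np.clip(lf, 0.0, None)                        # reinforce real, non-negative light field
    return lf
```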

Antony Orth from Harvard discussed taking two focal-plane images and using their moments to obtain perspective images and depth maps.

Basel Salahieh discussed compressive light field imaging. Instead of using two masks (one at the pupil plane and one at the image plane) to compress the light field, they used a single random 2D mask in an intermediate plane and exploited this joint spatio-angular modulation to obtain compression of 30x.
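
For intuition, here's a toy 1D version of the kind of joint spatio-angular modulation a single mask between the lens and the sensor produces: each angular slice of the light field gets multiplied by a shifted copy of the mask before everything sums on the sensor. The shift-per-angle model, the 1D simplification and all sizes here are my assumptions, not the authors' forward model.

```python
import numpy as np

rng = np.random.default_rng(0)
Nx, Nu = 256, 8                                   # spatial and angular samples (toy sizes)
L = rng.random((Nx, Nu))                          # stand-in for the true light field
mask = rng.integers(0, 2, Nx).astype(float)       # random binary mask (1D here for brevity)
shift_per_angle = 2                               # mask shift per angular sample (depends on mask plane)

# Sensor measurement: angular slices modulated by shifted mask copies, then summed.
I = np.zeros(Nx)
for u in range(Nu):
    I += L[:, u] * np.roll(mask, shift_per_angle * (u - Nu // 2))

# Recovering L from I (an Nu-fold compression here) would need a sparse/dictionary-based
# solver, which is beyond this sketch.
```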

Bahram Javidi from University of Connecticut and Byoungho Lee from Seoul National University both discussed Integral Imaging and 3D capture/display in their respective talks. Both were exceptional and exhaustive reviews, worth separate posts on their own.

Dan Gray from GE Research spoke about focal-shifting 3D capture using an electro-optic module. They use depth from defocus to increase the focal volume and 3D resolution and obtain 3D images. Their system has no moving parts: they take sequential images using an electro-optic module that changes focus at a modest speed (under 5 seconds). The optical path length (defocus) is changed using a combination of polarization-rotating liquid crystals and birefringent elements like quartz, calcite, etc. Birefringent materials have two refractive indices (ordinary and extraordinary) depending on the polarization of the incident light. One such birefringent plate with a liquid crystal panel can create two depth planes, and N such pairs enable the generation of 2^N focal planes. Dan showed images captured with such a module and generated a 3D point cloud.
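
If each liquid-crystal/birefringent pair acts as a binary switch between two optical path lengths, then N pairs with suitably different plate thicknesses give 2^N distinct path lengths, i.e. 2^N focal planes. A tiny counting sketch with made-up numbers:

```python
from itertools import product

# Hypothetical extra optical path (mm) each plate adds when the liquid crystal routes
# light onto its extraordinary axis; binary-weighted thicknesses are my assumption.
plate_opd = [0.5, 1.0, 2.0]                       # N = 3 plates -> expect 2**3 = 8 focal planes

focal_offsets = sorted({sum(p * on for p, on in zip(plate_opd, state))
                        for state in product([0, 1], repeat=len(plate_opd))})
print(len(focal_offsets), focal_offsets)          # 8 distinct defocus states
```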

Achuta Kadambi from MIT spoke about using structured light to image objects through a scattering medium. His application was vein biometrics, where a transmissive or reflective image of your finger or hand may be used to identify a person. The structure and organization of veins in a person's body is fairly constant, so the ability to view and identify the vein pattern is more secure than fingerprints, which can be faked by topical means. Products like AccuVein and Hitachi's IR camera with LED trans-illumination can be used to view the veins in one's hand or finger. The image produced under structured illumination contains direct and global/scattered components. The direct component typically contains high spatial frequencies, while the scattered global component contains lower spatial frequencies that don't change much for slight changes in the illumination pattern. The authors used four wavelengths, including an NIR wavelength, with dictionary learning to view and separate veins from non-veins in a hand.
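
For reference, the classic way of splitting direct and global components with shifted high-frequency illumination (Nayar et al.'s max/min trick) looks roughly like the sketch below. This is a generic illustration, not Kadambi's dictionary-learning method.

```python
import numpy as np

def direct_global_split(frames):
    """frames: images of the same scene under shifted high-frequency illumination
    patterns, shape (K, H, W). Returns (direct, global) component estimates."""
    frames = np.asarray(frames, dtype=float)
    i_max = frames.max(axis=0)        # pixel lit by the pattern plus global light
    i_min = frames.min(axis=0)        # pixel in the pattern's dark phase: mostly global light
    direct = i_max - i_min            # high-spatial-frequency direct component
    global_ = 2.0 * i_min             # assumes a ~50% duty-cycle pattern
    return direct, global_
```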

David Stork from Rambus also spoke on this topic with the same application, vein biometrics. He showed results imaging a wire mesh through milky water. He used checkerboard-patterned illumination and found ring-like/band-like artifacts in the reconstruction, so they switched to Hadamard-coded illumination patterns, which eliminated the artifact.
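
Hadamard patterns are easy to generate; here's a minimal sketch (assuming scipy is available; the pattern count and the reshaping to the projector geometry are up to you):

```python
import numpy as np
from scipy.linalg import hadamard

n = 64                       # number of patterns; must be a power of two for scipy.linalg.hadamard
H = hadamard(n)              # n x n matrix of +1/-1 entries
patterns = (H + 1) // 2      # map to 0/1 on/off illumination codes
# Each row of `patterns`, reshaped to the projector geometry, is one illumination frame.
```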

Boyd Fowler gave a great summary of highlights from the IEEE Image Sensors Workshop, a nice talk covering both the design and application aspects of the solid-state image sensors presented at that recent meeting. Pixels are getting smaller. Back-illuminated sensors are going mainstream for cell phones and related applications. Aptina is doing a lot of work here, including work on the optical characteristics of upcoming 0.9 micron pixels. Aptina is also active in multi-aperture imaging sensors, with a 12-megapixel sensor that has 16 focal planes on a single die. There was work on 3D, such as background light suppression for 3D sensing. Organic sensors are being used in cutting-edge applications like indirect X-ray imaging to provide more flexible and comfortable imagers for things like dental X-rays. He discussed work by Gruev at Washington University in St. Louis on polarization- and energy-sensitive imaging, using nanowires for the polarization aspect and vertically stacked pixels to get RGB. Quanta image sensors, which offer more film-like advantages, also seem promising.

I also snuck in a quick tour of the National Mall area and saw the amazing buildings in the vicinity. Just about two hours including a metro ride, so no real museum visits. Thanks all for the recommendations! I'll see more next time! :) All I can say is.. visit Washington DC!

Thursday, July 11, 2013

Seeing is believing, unless it's AR


The vision and desire for wearable displays has been around since 1965, when Ivan Sutherland suggested the idea. Many groups across the world have been working on various ways of executing this vision with large and small devices, which started out looking bulky and geeky but are beginning to reach more acceptable weight and size. Products like Epson's Moverio BT-100, the Vuzix 1200 XLD and new M100, offerings from Sony and Lumus, DigiLens from SBG Labs, Optinvent, and finally Google's Glass have all attracted a lot of attention in recent times. As wearable displays get more and more practical, applications of all sorts appear more feasible.

Freeform lens from Hong's Lab
Hong Hua from the University of Arizona gave an exceptional invited talk on light-weight, low-cost wearable displays for augmented reality applications at the Applied Industrial Optics meeting. Hong's talk was from the perspective of practical implementation of the optical design for such light-weight wearables. A fundamental limitation in wearables is fixed etendue, which forces a tradeoff between field of view and the size of the pupil aperture of the human eye. Since the aperture is fixed, it is difficult to get a large field of view. This problem can be overcome by using off-axis optics that conserve the etendue, as used by groups like Spitzer et al. at MicroOptical Corp and Rolland et al. Glasses by Epson and Google use reflective waveguides. Nokia uses holographic diffractive waveguides. Lumus, Optinvent and the Q-Sight by BAE use pupil-expansion techniques. Hua's team uses a wedge prism with freeform waveguides. Sensics provides a wide field of view using optical tiling. A common problem with wearables for AR is a ghosting effect from computer-generated objects overlying real-world objects. Hua suggested handling this by using a spatial light modulator to modulate the direct-view path and combining the light from the display via a beamsplitter, as done by Kiyokawa (2003), Rolland (2004) and Hua (2012). She proposed managing the vergence-accommodation conflict by using a deformable mirror to change the plane of virtual focus (presumably with eye tracking).
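
A back-of-envelope way to see the etendue tradeoff (my numbers are purely illustrative, not from Hua's talk): the display engine's etendue is fixed, so the product of eyebox area and field-of-view solid angle at the eye can't exceed it.

```python
import numpy as np

display_area_mm2 = 8.0 * 6.0                  # hypothetical microdisplay active area
display_half_angle_deg = 20.0                 # emission cone the optics collect (assumed)

# Etendue of a circular cone over a flat emitter: G = A * pi * sin^2(theta)
G = display_area_mm2 * np.pi * np.sin(np.radians(display_half_angle_deg)) ** 2

for eyebox_mm in (4.0, 8.0, 12.0):
    eyebox_area = eyebox_mm ** 2              # square eyebox, mm^2
    sin2 = min(G / (eyebox_area * np.pi), 1.0)
    fov_half_deg = np.degrees(np.arcsin(np.sqrt(sin2)))
    print(f"{eyebox_mm:>4.0f} mm eyebox -> at most ~{2 * fov_half_deg:.0f} deg full FOV")
```

Bigger eyebox, smaller field of view, and vice versa: exactly the tradeoff Hua was describing.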

Freeform optics design and manufacturing has been a recent and welcome revolution in optics, enabling many unusual designs that depart from purely symmetric, spherical or aspheric surfaces. So Hua’s faith in them does seem very reasonable. There still is some time before we see glasses that cover our entire field of view, give us a perfect Augmented Reality experience and exhibit a stylish design, but her results and optimism portend a bright outlook. 

I haven't written here about Kevin Thompson's and Jannick Rolland's talks. Parallel sessions! But do check out their websites too.. some of the best work is going on in these areas! Jannick just started a Center for Freeform Optics at Rochester!

Along similar lines, Ram Narayanswamy from the Intel User Experience group spoke about some applications of AR and related technology. He stressed the importance of combining optics, image processing and user experience to deliver unique value to mass markets. He highlighted the role of multi-aperture cameras, multiple cameras, and the additional sensors and increased computational power of today's cell phones in the concept of a social camera. He showed how 3D visualization and location-assisted image capture can be used, say, in a tourist application on a mobile device to take pictures in a park and augment your image to appear in the company of a dragon or something fantastic that says "Happy Chinese New Year!" He outlined some of the challenges and desirables in the spaces of sensors, optics and modules.

Teams like Narayanswamy's that combine user-need with computer vision, imaging, software and hardware are crucial to delivering technologies like Hua's to the consumers of the future!

Wednesday, July 10, 2013

Multi-aperture imaging from Brussels


Following up on some more multi-aperture systems at the Applied Industrial Optics meeting, Heidi Ottevaere and her team from Brussels discussed several such systems that replace wide-angle fisheye lenses. Their multi-channel, multi-resolution system gives less distortion and allows flexible choices of image resolution and field of view. They combine two lens arrays in sequence, with baffles acting like channels in between.


One of the designs from Heidi's team
Ref: Gebirie Y. Belay, Youri Meuret, Heidi Ottevaere, Peter Veelaert, and Hugo Thienpont, "Design of a multichannel, multiresolution smart imaging system," Appl. Opt. 51, 4810-4817 (2012)

She showed two designs. The first has three channels: one with a large field of view and low resolution, one with a small field of view and high resolution, and one with intermediate FOV and resolution. They used PMMA lenses with four aspheric surfaces in each channel; the alignment of this system was sensitive. The second design uses only two channels, eliminating the intermediate one. They added a voltage-tunable liquid lens to the high-resolution subsystem to obtain refocusing capability and extend the depth of field.
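
To see why the tunable lens buys refocusing range, here's a thin-lens sketch with made-up values (not from the paper): with a variable lens in contact with the fixed channel optics, the in-focus object distance moves as the added power changes while the sensor stays put.

```python
# Thin-lens refocusing sketch; all values are illustrative.
f_fixed_mm = 10.0                               # fixed channel focal length (assumed)
sensor_dist_mm = 10.5                           # fixed lens-to-sensor distance (assumed)

for d in (-2.0, 0.0, 2.0, 5.0):                 # tunable-lens power in diopters
    power_mm = 1.0 / f_fixed_mm + d / 1000.0    # combined power in 1/mm (thin lenses in contact)
    inv_s_o = power_mm - 1.0 / sensor_dist_mm   # Gaussian imaging: 1/s_o + 1/s_i = P
    s_o_mm = 1.0 / inv_s_o if inv_s_o > 0 else float("inf")
    print(f"{d:+.1f} D -> in-focus object distance ~ {s_o_mm:.0f} mm")
```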

I really liked the fact that most of this work was done at a university, but many of their collaborators were from industry. It's tough to do open collaborations like these.. both for students/professors and for industry. Examples like this team demonstrate learning, innovation and value all at once!

Tuesday, June 25, 2013

Many should be better than one..


Multi-aperture imaging. What is that? Many cameras (lenses/apertures) that work in some synchronized fashion to give something more than a single camera.. typically wider field of view or more resolution. Multi-aperture imaging has been noticeably prominent at many places in the Imaging Congress and I’ll probably write a bit about it here.

The James Webb Space Telescope has many foldable
segments working together to form a single pupil aperture

The volume of an imaging system is proportional to the cube of its aperture diameter, so using multiple smaller cameras instead results in a shorter system. Typically in computational photography we consider multi-view images. Each image in this case is an incoherent image, and the images can be digitally aligned and combined to get a wider field of view or a higher-resolution image.
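
A quick scaling sketch of that argument (my own back-of-envelope, assuming a fixed f-number so track length scales with aperture diameter):

```python
import math

D = 50.0                                        # mm, single large aperture (illustrative)
for n_sub in (1, 4, 16):
    d_sub = D / math.sqrt(n_sub)                # keep total collecting area constant
    length_ratio = d_sub / D                    # track length shrinks linearly with diameter
    volume_ratio = n_sub * (d_sub / D) ** 3     # total volume of all sub-cameras vs. the single one
    print(f"{n_sub:>2} sub-apertures: length x{length_ratio:.2f}, total volume x{volume_ratio:.2f}")
# Caveat: each small aperture alone has lower diffraction-limited resolution;
# recovering it by combining the apertures is the hard part (see below).
```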


The Very Large Array in New Mexico
is a radio (not light) telescope array which uses aperture synthesis

Sam Thurman from Lockheed Martin spoke yesterday on this topic at the Computational Optics, Sensing and Imaging (COSI) meeting. He first spoke about combining the optical fields from each imager physically, using delay lines and optical phasing, resulting in a single incoherent image capture. This gives better image quality and resolution than separate incoherent images. The SNR of multi-aperture systems depends on the aperture fill factor, so choosing the arrangement of your apertures is important: if the fill factor drops, the exposure may need to increase, and gaps in the passband of the system can result in loss of resolution.

Then Sam discussed coherent multi-aperture imaging with active laser illumination and digital holography. In this case each aperture records a digital hologram. A Fourier transform, shift, crop and inverse Fourier transform of this hologram gives the reconstructed object field. Obtaining the object fields from each aperture, then digitally phase-aligning and combining them, eliminates the physical delay-line hardware and results in an even lighter system. Sam showed great images taken with such coherent multi-aperture systems. (Note: I'll add images of systems like Sam's if I find free images.. so far the JWST and VLA radio array were the most easily accessible ones online that somewhat illustrate the idea.)
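
The per-aperture processing is essentially standard off-axis digital holography, so here's a minimal sketch of the FT / shift / crop / inverse-FT step with a simulated hologram. The carrier frequency, object and crop size are all made up for illustration.

```python
import numpy as np

def reconstruct_offaxis(hologram, carrier, crop):
    """FT, shift the chosen sideband to DC, crop, inverse FT -> complex object-field estimate.
    carrier = (fy, fx): sideband location in FFT pixels relative to DC; crop = half-width kept."""
    H = np.fft.fftshift(np.fft.fft2(hologram))
    cy, cx = np.array(hologram.shape) // 2
    fy, fx = carrier
    sub = H[cy + fy - crop: cy + fy + crop, cx + fx - crop: cx + fx + crop]
    return np.fft.ifft2(np.fft.ifftshift(sub))

# --- tiny simulated example ---
N = 256
y, x = np.mgrid[0:N, 0:N]
obj = np.exp(1j * 2 * np.pi * ((x - N / 2) ** 2 + (y - N / 2) ** 2) / 5e4)   # smooth phase object
ref = np.exp(1j * 2 * np.pi * 40 * x / N)                                    # tilted reference beam
holo = np.abs(obj + ref) ** 2                                                # intensity hologram
# The sideband carrying obj * conj(ref) sits 40 pixels to the left of DC for this tilt.
field = reconstruct_offaxis(holo, carrier=(0, -40), crop=30)
print(field.shape)   # (60, 60) complex field estimate, low-pass filtered by the crop
```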

The Applied Industrial Optics meeting has an excellent session planned for Wednesday afternoon that has several folks speaking about the industrial design, manufacture and application of multi-aperture imaging systems. Interested? I’ll be there!