Hey everybody.. here's the last installment of my Imaging Congress talk summaries. Beyond the talks covered in my last few posts, there were a lot of interesting ones, several in the areas of light fields/plenoptic systems, 3D capture and display, and structured light. I can't write about everything, but here's a shot!
[Photo: Quick ride in the Metro, the Washington Monument and the US Capitol!]
Chien-Hung Lu from Princeton spoke about light field imaging. They combine two captures, a traditional high-resolution image and a Shack-Hartmann wavefront sensor measurement, to obtain higher-resolution light field images. They use an iterative Gerchberg-Saxton-like algorithm: upsample the low-resolution light field, Fourier transform it, apply a Fourier-space constraint derived from the high-resolution traditional image, inverse Fourier transform, and then reinforce the real and non-negative constraints on the resulting light field. They showed results on a microscope with improved resolution.
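As a rough sketch of that loop (this is my own toy Python, not the Princeton code; the nearest-neighbor initialization and the use of the sharp image's Fourier magnitude as the constraint are assumptions on my part):

```python
import numpy as np

def gs_like_upsample(lowres_lf, highres_img, n_iter=50):
    """Gerchberg-Saxton-like loop: alternate a Fourier-domain constraint
    from the high-resolution image with real/non-negative constraints
    in the spatial domain. A sketch of the idea only."""
    # Start from a naively upsampled light-field slice (nearest-neighbor)
    scale = highres_img.shape[0] // lowres_lf.shape[0]
    estimate = np.kron(lowres_lf, np.ones((scale, scale)))
    target_mag = np.abs(np.fft.fft2(highres_img))
    for _ in range(n_iter):
        spectrum = np.fft.fft2(estimate)
        # Fourier-space constraint: impose the sharp image's magnitude
        spectrum = target_mag * np.exp(1j * np.angle(spectrum))
        estimate = np.fft.ifft2(spectrum)
        # Spatial constraints: the light field is real and non-negative
        estimate = np.clip(estimate.real, 0, None)
    return estimate

# Toy example: upsample an 8x8 "light-field slice" to 16x16
rng = np.random.default_rng(0)
truth = np.clip(rng.normal(0.5, 0.2, (16, 16)), 0, 1)
out = gs_like_upsample(truth[::2, ::2], truth)
print(out.shape)  # (16, 16)
```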
Antony Orth from Harvard discussed using two focal-plane images and their moments to obtain perspective images and depth maps.
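My loose understanding of the moment idea: two images at nearby focal planes give dI/dz, and if the angular-moment flux is assumed to be a gradient field, a Poisson solve recovers per-pixel moments that let you re-render perspectives. A toy FFT-based sketch (my own, definitely not the Harvard code):

```python
import numpy as np

def light_field_moments(i1, i2, dz):
    """From two defocused images, estimate first angular moments by
    solving laplacian(phi) = -dI/dz spectrally, then s = grad(phi) / I."""
    didz = (i2 - i1) / dz
    h, w = didz.shape
    fy = np.fft.fftfreq(h).reshape(-1, 1)
    fx = np.fft.fftfreq(w).reshape(1, -1)
    k2 = (2 * np.pi) ** 2 * (fx ** 2 + fy ** 2)
    k2[0, 0] = 1.0                      # sidestep the undefined DC term
    phi_hat = np.fft.fft2(didz) / k2    # solve the Poisson equation
    sy = np.real(np.fft.ifft2(2j * np.pi * fy * phi_hat))
    sx = np.real(np.fft.ifft2(2j * np.pi * fx * phi_hat))
    ibar = 0.5 * (i1 + i2) + 1e-9       # mean image, guarded against zeros
    return sx / ibar, sy / ibar

rng = np.random.default_rng(3)
i1 = rng.random((32, 32)) + 1.0
i2 = i1 + 0.01 * rng.standard_normal((32, 32))
sx, sy = light_field_moments(i1, i2, dz=0.1)
print(sx.shape, sy.shape)  # (32, 32) (32, 32)
```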
Basel Salahieh discussed compressive light field imaging. Instead of using two masks (one at the pupil plane and one at the image plane) to enable compression of the light field, they used a single random 2D mask in an intermediate plane, and this joint spatio-angular modulation yielded a compression ratio of 30x.
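To make the compression ratio concrete, here's a toy forward model (my own stand-in, not Salahieh's system: I fake the view-dependent mask shift with `np.roll`, and the sizes give 25x rather than 30x):

```python
import numpy as np

# A 4D light field L(x, y, u, v) is modulated by one random 2D mask and
# summed over angle at the sensor: one 2D frame encodes all the views.
rng = np.random.default_rng(1)
X = Y = 16            # spatial samples
U = V = 5             # angular samples (5x5 views)
lf = rng.random((X, Y, U, V))   # unknown light field
mask = rng.random((X, Y))       # single random 2D mask

sensor = np.zeros((X, Y))
for u in range(U):
    for v in range(V):
        # Each view sees a differently shifted copy of the mask
        shifted = np.roll(mask, (u - U // 2, v - V // 2), axis=(0, 1))
        sensor += shifted * lf[:, :, u, v]

# One 2D measurement encodes X*Y*U*V unknowns: 25x compression here
print(lf.size // sensor.size)  # 25
```

Recovering `lf` from `sensor` is then a sparse-reconstruction problem, which is where the "compressive" part comes in.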
Bahram Javidi from University of Connecticut and Byoungho Lee from Seoul National University both discussed Integral Imaging and 3D capture/display in their respective talks. Both were exceptional and exhaustive reviews, worth separate posts on their own.
Dan Gray from GE Research spoke about focal-shifting 3D capture using an electro-optic module. They use depth from defocus to increase the focal volume and 3D resolution and obtain 3D images, and their system has no moving parts. They take sequential images through an electro-optic module that changes focus at a modest speed (< 5 seconds). The optical path length (defocus) is changed using a combination of polarization-rotating liquid crystals and birefringent elements like quartz, calcite, etc. Birefringent materials have two refractive indices (ordinary and extraordinary) depending on the polarization of the incident light. One such birefringent plate with a liquid crystal panel can create two depth planes; N such pairs enable the generation of 2^N focal planes. Dan showed images captured with such a module and a 3D point cloud generated from them.
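Each LC/plate pair is effectively a binary switch between two optical path lengths, so (assuming the plate thicknesses follow a binary progression, which is my assumption, not something stated in the talk) N pairs reach 2^N distinct focus states:

```python
# Enumerate the optical-path-length offsets reachable with n_pairs
# binary-weighted liquid-crystal + birefringent-plate stages.
# The base_step unit is made up for illustration.
def focal_offsets(n_pairs, base_step=1.0):
    offsets = [0.0]
    for k in range(n_pairs):
        step = base_step * 2 ** k          # binary-weighted plate thickness
        # Each stage either adds its step or doesn't: the set doubles
        offsets = offsets + [d + step for d in offsets]
    return sorted(offsets)

print(len(focal_offsets(1)))  # 2 depth planes from one LC/plate pair
print(len(focal_offsets(3)))  # 8 focal planes from three pairs
```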
Achuta Kadambi from MIT spoke about using structured light to image objects through a scattering medium. His application was vein biometrics, where a transmissive or reflective image of your finger or hand may be used to identify a person. The structure and organization of veins in a person's body is pretty constant, so the ability to view and identify the pattern of veins is more secure than fingerprints, which can be faked by topical application. Products like AccuVein and Hitachi's IR camera with LED trans-illumination can be used to view veins in one's hand or finger. An image produced under structured illumination contains direct and global/scattered components. The direct component typically contains high spatial frequencies, while the scattered global component contains lower spatial frequencies that don't change much for slight shifts of the illumination pattern. The authors used four wavelengths, including one in the NIR, with dictionary learning to view veins and separate them from non-veins in a hand.
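That direct/global intuition is the classic shifted high-frequency-pattern separation: under a 50%-duty pattern, a lit pixel sees direct plus half the global light, an unlit pixel sees half the global light. A synthetic sketch of just that idea (Kadambi's actual method adds multi-wavelength capture and dictionary learning, which I don't attempt here):

```python
import numpy as np

rng = np.random.default_rng(2)
H = W = 32
direct_true = rng.random((H, W))     # high-frequency direct component
global_true = np.full((H, W), 0.3)   # smooth scattered component

# Capture under several shifted binary stripe patterns (50% duty cycle)
frames = []
for shift in range(4):
    pattern = ((np.arange(W) + shift) // 2 % 2).astype(float)
    # Lit pixels: direct + global/2; unlit pixels: global/2
    frames.append(pattern[None, :] * direct_true + 0.5 * global_true)
frames = np.stack(frames)

# Max over shifts = direct + global/2, min over shifts = global/2
direct_est = frames.max(axis=0) - frames.min(axis=0)
global_est = 2.0 * frames.min(axis=0)
print(np.allclose(direct_est, direct_true),
      np.allclose(global_est, global_true))  # True True
```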
David Stork from Rambus also spoke on this topic with the same application, vein biometrics. He showed results imaging a wire mesh through milky water. With checkerboard-patterned illumination they found ring-like/band-like artifacts in the reconstruction, so they switched to Hadamard-coded illumination patterns, which eliminated the artifact.
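For reference, Hadamard codes are easy to generate: the Sylvester construction gives a matrix with mutually orthogonal rows, which can be remapped to binary on/off illumination patterns (this is just the textbook construction, not necessarily Stork's exact codes):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)
# Rows are mutually orthogonal: H @ H.T = n * I
print((H @ H.T == 8 * np.eye(8, dtype=int)).all())  # True
# Map {-1, +1} to {0, 1} to get binary on/off illumination patterns
patterns = (H + 1) // 2
```

The orthogonality is what spreads the coding error evenly instead of concentrating it into the banded structure a plain checkerboard produces.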
Boyd Fowler gave a great summary of highlights from IEEE's Image Sensors Workshop, a nice talk covering both design and application aspects of solid-state image sensors at the recent IEEE conference. Pixels are getting smaller, and back-illuminated sensors are becoming mainstream for cell phones and related applications. Aptina is doing a lot of work here, including on the optical characteristics of upcoming 0.9-micron pixels. Aptina is also active in multi-aperture imaging, with a 12-megapixel sensor that has 16 focal planes on a single die. There was work on 3D as well, such as background light suppression for 3D capture. Organic sensors are being used in cutting-edge applications like indirect X-ray imaging to provide more flexible and comfortable imagers for uses like dental X-rays. He discussed some work by Gruev at Washington University in St. Louis on polarization- and energy-sensitive imaging, using nanowires for the polarization measurement and vertically stacked pixels to get RGB. Quanta image sensors, which offer more film-like advantages, also seem promising.
I also snuck in a quick tour of the National Mall area and saw the amazing buildings in the vicinity, just about 2 hours including the metro ride, so no real museum visits. Thanks all for the recommendations! I'll see more next time! :) All I can say is.. visit Washington DC!