
Camera adjusts focus after taking pictures

14 Comments

The requested article has expired and is no longer available. Any related articles and user comments are shown below.


Betcha they are made in China.

-2 ( +0 / -2 )

The tech is very cool, but the resulting photos are 2007-cellphone-camera-resolution. Hoping for version 2 or 3...

0 ( +1 / -1 )

but the resulting photos are 2007-cellphone-camera-resolution

That's the wrong comparison. It's like saying an A3 sheet of paper is bigger than a bowling ball.

0 ( +1 / -1 )

m5c32 Sep. 26, 2012 - 02:16PM JST

That's the wrong comparison.

Hardly; the image resolution stands regardless of imagined greatness. When they did their first trials with a 5D sensor (12MP), it resulted in just 300x200 resolution (i.e., a 2001 cellphone camera). They improved the algorithms to the point that they got almost the same image quality from a much smaller sensor, and now they claim 720p resolution from 8MP (not happening, but let's assume it was the same quality). 720p resolution is most certainly 2007 phone specs.

This camera does not allow focus adjustment after the photo is taken; rather, it records a small amount of distance information for an image captured on multiple focus points. That information is then used to compute blurring so you can select an approximate focus distance, at a great loss of resolution. You can achieve very similar results using a distance map on a large-DOF image in programs like Photoshop. There is nothing really all that innovative in this camera over the selective blur in currently available point-and-shoots, and it is a far cry from the capabilities of an SLR or large-sensor camera.
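
For anyone curious, here is a rough sketch of that distance-map trick (my own toy example in Python, assuming you already have a sharp large-DOF image and a per-pixel distance map; none of this is Lytro's code):

import numpy as np
from scipy.ndimage import gaussian_filter

def fake_refocus(image, depth, focus_dist, max_sigma=8.0):
    # Blur strength grows with distance from the chosen focus plane.
    blur = np.abs(depth - focus_dist)
    blur = blur / (blur.max() + 1e-6)                 # normalise to 0..1

    # Blur the whole frame once, then blend sharp vs. blurred per pixel.
    sigma = (max_sigma, max_sigma, 0) if image.ndim == 3 else max_sigma
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    alpha = blur[..., None] if image.ndim == 3 else blur
    return (1.0 - alpha) * image + alpha * blurred

Pick a different focus_dist and you get the same "refocus after the fact" effect, without any light-field capture.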

-5 ( +1 / -6 )

hindsight is 20/20

0 ( +0 / -0 )

More gimmicky junk.

1 ( +1 / -0 )

Betcha they are made in China.

Cool!

-2 ( +0 / -2 )

I can see the value of the Lytro in certain very specialized applications, but it has nothing that makes it look like it would sell to the mass market. An answer in search of a question.

0 ( +0 / -0 )

basroll, you got the explanation of how it functions all wrong. This camera does not take a single 2D image and distance data, but a light field: thousands of small images, each from a slightly different angle to the subject. This is done by placing a micro-lens array over the sensor, each of the lenses covering the same number of pixels as the final image will have. The thousands of micro images (the so-called light field) are then later used to compute the final image, including the depth-of-field information.
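
If it helps, here is a toy shift-and-add sketch (my own simplification in Python/numpy, not Lytro's actual processing) of how those micro images get recombined into one focal plane:

import numpy as np

def refocus(light_field, alpha):
    # light_field: 4D array [u, v, y, x] of the small sub-images, each taken
    # from a slightly different angle; alpha chooses the synthetic focal plane.
    U, V, H, W = light_field.shape
    out = np.zeros((H, W), dtype=float)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the central view...
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            # ...and accumulate; the average of all shifted views is the refocused image.
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

Change alpha and you get a different focal plane from the same single exposure, which is exactly what "adjusting focus after taking the picture" means here.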

This is actually the biggest invention in the camera industry since the invention of cameras.

1 ( +2 / -1 )

ebisen Sep. 27, 2012 - 06:38AM JST

you got the explanation of how it functions all wrong. This camera does not take a single 2D image and distance data, but a light field: thousands of small images, each from a slightly different angle to the subject. This is done by placing a micro-lens array over the sensor, each of the lenses covering the same number of pixels as the final image will have. The thousands of micro images (the so-called light field) are then later used to compute the final image, including the depth-of-field information.

You can check the research articles if you want.

It's a decomposition of three-dimensional data onto two two-dimensional planes, and the effective result is a two-dimensional image with built-in distance data. The final image is always two-dimensional, and always of low resolution due to the errors in calculation and sensor data (you can only have so many microlenses). Each pixel is more or less calculated from a distance-dimensioned window (the second two-dimensional plane), and you can estimate the distance to exaggerate it, but only to a certain extent. Most of the images you see are macro shots though, as the camera sensor and lens size dictate that there should be no noticeable blur to decode in normal images.

The micro-lens array (which all cameras have right now) is actually used to change the focus distance at each pixel to infinity. When you combine it with sharpening and extraction software, you can guess at the distance for each light point, but only when you have very distinct subjects. The sensor, however, is entirely two-dimensional, and the internal processing relies on the flat-focus image estimate and the desired distance to produce estimated images.

Interestingly, the sharp window is directly related to the sensor resolution over the microlens array resolution, and the image resolution is related to the microlens resolution. But as sensor resolution increases, the sharpness goes down. So it's a horrible spiral of poorer and poorer quality. If you want selective focus, you might as well select the focus ahead of time and get the full resolution of your sensor. It's cheaper.
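
To put rough numbers on that trade-off (my own back-of-the-envelope arithmetic, not Lytro's spec sheet):

# Back-of-the-envelope: output resolution vs. angular samples on a plenoptic sensor.
sensor_px   = 8_000_000            # hypothetical 8MP sensor, as discussed above
px_per_lens = 10 * 10              # assume a 10x10 pixel patch under each microlens

print(sensor_px // px_per_lens)    # 80,000 output pixels (~327x245), nowhere near 720p

# Shrink the microlenses to get more output pixels...
print(sensor_px // (5 * 5))        # 320,000 output pixels, but only 25 angular samples left

Every pixel spent on output resolution is a pixel that cannot be spent on angular (refocusing) information; that is the spiral.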

-1 ( +0 / -1 )

The micro-lens array (which all cameras have right now) is actually used to change the focus distance at each pixel to infinity

All other cameras have lenses over each individual pixel. This camera has the lenses over an area of pixels. Therefore it takes the same image, over and over again, from a slightly different angle, as each lens is positioned in a different place. THEREFORE it captures the necessary information to reconstruct all focusing planes, for a low-resolution holographic image.

Exactly the same process (in reverse order, btw) is used to get real 3D (holographic) television. NHK Research used an 8K camera and an 8K LCD panel with a corresponding 700x480 microlens array on top of it in order to obtain a low-resolution holographic image out of a 2D display. The results looked very promising; this is the future of real (non-stereoscopic) 3D television, but 8 million pixels are not enough. Adobe also published experiments (albeit only for photography) with a 100-megapixel camera, able to produce a 3-megapixel holograph.
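
The same pixel budget shows up in those experiments (my own rough arithmetic from the figures above; the 8K panel resolution is assumed):

# Rough pixel budget for the display/photography experiments mentioned above.
nhk_panel_px = 7680 * 4320            # an "8K" panel, roughly 33MP
nhk_lenses   = 700 * 480              # microlens array sitting on top of it
print(nhk_panel_px // nhk_lenses)     # ~98 panel pixels (about 10x10) behind each lens

adobe_sensor = 100_000_000            # the 100-megapixel experimental camera
adobe_output = 3_000_000              # the ~3-megapixel holograph it produced
print(adobe_sensor // adobe_output)   # ~33 sensor pixels per holographic output pixel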

Read this (the white paper that started it):

http://graphics.stanford.edu/papers/lfcamera/lfcamera-150dpi.pdf

0 ( +1 / -1 )

ebisen Sep. 27, 2012 - 09:13AM JST

All other cameras have lenses over each individual pixel. This camera has the lenses over an area of pixels. Therefore it takes the same image, over and over again, from a slightly different angle, as each lens is positioned in a different place. THEREFORE it captures the necessary information to reconstruct all focusing planes, for a low-resolution holographic image.

I would suggest you re-read the articles, as you have clearly misunderstood them. Only one image is taken with the handheld plenoptic camera. There is a second method for light-field photography that involves moving the camera plane, but it is entirely different and NOT used in this camera. It cannot reconstruct anything, only estimate. And given that the sensor is 8MP and they claim up to 1080p output, we can assume they have a 4x4-pixel sub-array, which gives you double the microlens aperture (likely still f4) but results in at best an f8-equivalent DOF size (based on their research, which is pretty much 50cm to infinity on most point-and-shoots) rather than the f22 in their tests.

And no, that is NOT the white paper that started it; it is simply a technology review based on work that was a decade older and an experiment that was five years older. The math needed for the plenoptic camera was written down before, but the current method uses a pinhole assumption, which makes things faster but less accurate.

0 ( +1 / -1 )

The sensor is 3280x3280 pixels (more than 10 megapixels), each lens has a diameter of 10 pixels, and the light field has about 328x328 individual images. Capturing it does not require moving the sensor plane (obviously).
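
The figures are self-consistent (trivial arithmetic on the numbers above):

sensor_side   = 3280          # sensor is 3280x3280 pixels
lens_diameter = 10            # each microlens spans about 10 pixels
print(sensor_side * sensor_side)       # 10,758,400 pixels, i.e. more than 10MP
print(sensor_side // lens_diameter)    # 328 lenses per side -> ~328x328 light-field images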

0 ( +0 / -0 )

...but anyway, this is already getting ridiculous :)

0 ( +0 / -0 )
