
Will a phone ever be anywhere near as good as a DSLR?

You can also change the depth of field with the Focos app on the iPhone X.

EDLE, Netherlands

So the phone uses software to work out the “desired foreground object” boundaries, and blurs the background to emulate a wide aperture?

There is surely no way to software-emulate the depth of field effects of a variable aperture optical system. The only way to do it is by object identification within the image and making assumptions about the object distances.
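
In code terms the trick would presumably look something like this. A minimal sketch, assuming the hard part (a foreground segmentation mask from some detector) is already solved; the function name is mine, purely illustrative:

```python
import cv2
import numpy as np

def fake_bokeh(image, mask, blur_radius=21):
    """Blur everything outside the foreground mask to fake a wide aperture.

    image: HxWx3 uint8 frame
    mask:  HxW float32 in [0, 1], where 1 marks the "desired foreground object"
    """
    blurred = cv2.GaussianBlur(image, (blur_radius, blur_radius), 0)
    mask3 = np.dstack([mask] * 3)  # replicate the mask over the colour channels
    # Composite: keep the foreground sharp, take the blurred frame elsewhere
    return (image * mask3 + blurred * (1.0 - mask3)).astype(np.uint8)
```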

That Apple video is sooo much hype, aimed at people who are easily impressed by technology. It does sound like they are doing exactly this; it is obvious around 1:14.

Administrator
Shoreham EGKA, United Kingdom

It might be a hyped-up phone and the blurring of the background might not be “real”, but the results are stunning and good enough for me to drop my DSLR for most of my picture taking. :-)

EDLE, Netherlands

I can see they have written some software which identifies the outline of a person (after all, software which detects p0rn images has been around for years, for automated monitoring on sites like Facebook) and blurs the rest, but what if the foreground object is, say, a TB20? There cannot be a completely general algorithm without the application of a vast amount of AI.

Perhaps this was the obvious next step, after the smile detection software we’ve had for a while in cameras, and the more recent stuff like using a monochrome sensor to reduce image noise in a colour image (because most noise is in the luminance channel).

I just find the “power of the bionic processor” evangelism severely cringe-worthy.

Administrator
Shoreham EGKA, United Kingdom

Peter wrote:

but what if the foreground object is say a TB20? There cannot be a completely general algorithm

I don’t see why not, if I had enough brains to write such software. I would only blur areas that were not already sharp. Sharp areas could be determined by contrast, which is how focusing is done in some cameras. The blur function would be the opposite of the un-blur (deconvolution) that can be used when an optical defect is known, like the defect the Hubble telescope had.
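
Roughly like this, as a sketch of the contrast idea (my own illustration, nothing to do with whatever Apple actually runs): measure local contrast with a Laplacian, treat high-contrast areas as sharp, and blur the rest.

```python
import cv2
import numpy as np

def blur_unsharp_areas(image, contrast_threshold=100.0, blur_radius=15):
    """Blur only the regions that are not already sharp.

    Sharpness is judged by local contrast, the same measure that
    contrast-detect autofocus uses in some cameras.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    contrast = cv2.blur(lap ** 2, (15, 15))       # local mean of squared response
    sharp = (contrast > contrast_threshold).astype(np.float32)
    sharp = cv2.GaussianBlur(sharp, (31, 31), 0)  # feather the mask edges
    blurred = cv2.GaussianBlur(image, (blur_radius, blur_radius), 0)
    sharp3 = np.dstack([sharp] * 3)
    return (image * sharp3 + blurred * (1.0 - sharp3)).astype(np.uint8)
```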

But like most simulations (including flight) they are never the same as the real thing.

Last Edited by Ted at 20 Sep 18:27
Ted
United Kingdom

The Hubble fix was IIRC just about one order of magnitude improvement and was possible because every object was at infinity so certain assumptions could be made for the deconvolution. In this “computational photography” (the latest fashion term) business, you cannot tell the distance so have to identify objects principally by shape.
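
For reference, that Hubble-style fix is a deconvolution with a known point spread function; the textbook Wiener filter version looks roughly like this (illustrative only, certainly not what any phone runs):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=0.01):
    """Undo a known blur: classic Wiener deconvolution in the frequency domain.

    blurred: 2-D image degraded by convolution with psf
    psf:     the known point spread function (the optical defect),
             assumed registered at the array origin
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # transfer function of the defect
    G = np.fft.fft2(blurred)
    # Inverse filter, regularised by the noise-to-signal ratio so that
    # frequencies the optics destroyed are not amplified into noise
    F_hat = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal) * G
    return np.real(np.fft.ifft2(F_hat))
```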

Yes, Apple (or whoever wrote the code – I am sure Samsung and Huawei are doing the same already) probably help the identification of the “foreground object” outline by looking at where slightly out of focus line features (those being of background objects, presumably) disappear behind it. I can see that will break in lots of scenarios. But in others it will produce better results than a “proper camera” because e.g. you can have the whole face in sharp focus even though parts of it are nearer the camera, while blurring everything behind, whereas with a normal camera it can be a challenge (or impossible) to get e.g. both eyes in focus, on a slightly side-on shot, if you want the background substantially blurred.

Most likely the biggest investment will have been made in recognising the human form – assisted by the tons of CGI software already produced to simulate that.

I was never a “pro”, but most pros will say that reality is irrelevant. Witness the photo libraries, packed with heavily photoshopped images. So this is the way of the future. It will sell the phones, for sure.

Administrator
Shoreham EGKA, United Kingdom

Peter wrote:

The Hubble fix was IIRC just about one order of magnitude improvement and was possible because every object was at infinity so certain assumptions could be made for the deconvolution. In this “computational photography” (the latest fashion term) business, you cannot tell the distance so have to identify objects principally by shape.

I would suggest there are similar very real assumptions that could be applied. I don’t see why you need to identify any shapes; you just need to determine if an area is in focus and, if it is not, then by how much. You might also know the focus distance of the lens (that’s probably essential for a good simulation). Also, the distance (parallel to the sensor) between focus areas of the image might be useful.

Image processing techniques that use frequency analysis, for example Fourier transforms and wavelets, might be useful.
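
For example (a sketch of the idea only, not any known implementation), the fraction of high-frequency energy in a local Fourier transform gives a crude per-tile measure of how far out of focus an area is:

```python
import numpy as np

def defocus_score(tile, cutoff=0.25):
    """Score how defocused a grayscale tile is from its spectrum.

    Softer (more defocused) tiles score higher, because defocus blur
    suppresses high spatial frequencies.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(tile)))
    h, w = tile.shape
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - h // 2, x - w // 2)   # distance from the DC term
    high = spectrum[radius > cutoff * min(h, w)].sum()
    return 1.0 - high / spectrum.sum()
```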

Last Edited by Ted at 20 Sep 19:55
Ted
United Kingdom

Pre-SLR, many small-format cameras (e.g. 35mm) used rangefinders to determine distances for focussing. You then just looked at the lens, which was marked with either depth-of-field or depth-of-focus markings, and set the iris/f-stop/T-stop to select the areas of the image you wanted in focus. I would have thought that software could easily replicate this and provide a selection tool for the human operator to decide which areas needed to be in focus and which blurred.
The way focussing on the eyes and blurring the b/g is achieved in movies is very high tech. The focus puller attaches a tape measure to a part of the camera made for this very purpose and measures the distance between it and a point between the eyes, then, as above, looks at the depth scale on the lens and sets the iris accordingly, usually using between f/2.5 and f/4. You don’t get much choice in shutter speed, so you have to light for exposure. Peter, I agree with you that it’s not easy; sometimes the actor’s head has to be locked in place with a clamp and marks have to be made on the floor.
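
The arithmetic behind those barrel markings is easy enough to replicate; these are the standard thin-lens depth-of-field formulas, with the usual 0.03 mm circle of confusion for 35mm format:

```python
def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.03):
    """Near/far limits of acceptable focus, as engraved on old lens barrels."""
    s = subject_m * 1000.0                                   # work in mm
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = s * (hyperfocal - focal_mm) / (hyperfocal + s - 2 * focal_mm)
    far = (s * (hyperfocal - focal_mm) / (hyperfocal - s)
           if s < hyperfocal else float("inf"))
    return near / 1000.0, far / 1000.0                       # back to metres

# e.g. an 85mm lens at f/2.8 focused on the eyes 2 m away:
# depth_of_field(85, 2.8, 2.0) -> about (1.96, 2.05), i.e. ~9 cm in focus
```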

France

You can measure distance optically and this is very old (WW2 submarines had it in their periscopes, for example). Is the iPhone measuring distance? It has something for face recognition, but that is just short-range.
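
The rangefinder principle is plain triangulation, which a dual-lens phone could do as well; a sketch with made-up numbers:

```python
def distance_from_disparity(baseline_m, focal_px, disparity_px):
    """Triangulate distance from the shift of a feature between two views.

    baseline_m:   separation of the two lenses in metres
    focal_px:     focal length expressed in pixels
    disparity_px: how far the feature moves between the two images
    """
    return baseline_m * focal_px / disparity_px

# e.g. lenses 10 mm apart, focal length ~2800 px, feature shifted 14 px:
# distance_from_disparity(0.010, 2800, 14) -> 2.0 metres
```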

Administrator
Shoreham EGKA, United Kingdom

They have investigated light field photography in the past and my initial thought was that maybe they had cracked it. From the video it sounds as if there’s more to it than this, but I too wonder whether their sensor is cleverer than they’re letting on.
