
Will a phone ever be anywhere as good as a DSLR?

I don’t think 8 bit colour produces visible banding – unless perhaps one picks an unusual image specially designed to show it. What certainly produces visible banding is excessive compression, and phones are notorious for that.

BTW I looked for the A-D converter spec of the K1 and it sounds like 14-15 bits. Given that 15 or 16 bits (we use them at work) is actually incredibly hard to achieve even on a really carefully constructed data acquisition product if you want less than 1 LSB total error (i.e. noise, linearity, monotonicity, never mind absolute accuracy over any temperature range), I can’t see much pressure to push this further. The noise floor alone inside a DSLR body, with all the electronics in there, will be a lot more than 1 LSB. I can hear the radiation from the K1 in the aircraft if I pull out the squelch. But it’s a bit like 24 bit sound – it has to be better

Administrator
Shoreham EGKA, United Kingdom

Archie wrote:

The iPhone takes 10-bit color images within a wide color gamut (Display P3)

There is no ‘traditional’ colourspace in raw data that has yet to be demosaiced; most sensors will have a very wide gamut by definition of how they work (for example they will be able to record more than just the visible spectrum), though accuracy is important. The finer points of how the science behind this is applied are proprietary.

The 10bit is a limitation of the A/D converter that could be relatively easily improved. It’s not enough and in the context of this thread you would need 14bit currently.

Archie wrote:

Interesting point that is made is that more than 40% of pro-football jerseys are outside of the sRGB color space

Then the promoters aren’t very clever, since so much of it IS for consumption on the TV.

Ted
United Kingdom

It takes some doing to be outside the sRGB gamut… CMYK is a lot easier to break but nowadays not many people worry about that since printed product literature is not used as much.

Can you tell the sensor resolution from an image?

Administrator
Shoreham EGKA, United Kingdom

Peter wrote:

I don’t think 8 bit colour produces visible banding

It’s not enough for linear encoding, so you will need to gamma encode the analogue signal, and take it out again when applying transformations to the data – which should not be a problem if you do it perfectly.
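As a rough sketch of why linear 8 bit isn’t enough (my own illustration, not from the thread): count how many distinct 8-bit code values are available to represent the darkest four stops of a normalised [0, 1] signal, first with plain linear quantisation and then with an assumed gamma of 2.2 applied before quantising.

```python
# Sketch: distinct 8-bit codes covering the darkest ~4 stops
# (scene values 0 .. 1/16), linear vs gamma-2.2 encoding.
def distinct_codes(encode, lo=0.0, hi=1 / 16, samples=100_000):
    step = (hi - lo) / samples
    return len({encode(lo + i * step) for i in range(samples + 1)})

def linear(v):
    return round(v * 255)                 # plain linear quantisation

def gamma22(v):
    return round(v ** (1 / 2.2) * 255)    # gamma-encode, then quantise

print(distinct_codes(linear))   # only a handful of codes -> visible banding
print(distinct_codes(gamma22))  # several times more codes in the same range
```

The gamma-encoded version gives the shadows several times as many code values, which is exactly why you can get away with 8 bits for display.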

Peter wrote:

I can’t see much pressure to push this further

There are other ways to solve these problems, but from a marketing perspective 14bit has to be better than 10bit?

However, in the context of raw processing, what you need to work with is very different from what is required at the end.

Ted
United Kingdom

Ted wrote:

It’s not enough and in the context of this thread you would need 14bit currently.

Patience my friend, patience. We are considerably closer than where we were when this thread started. Specifically:
- 2X optical zoom (in the same form factor, unlike some early phones that were more like a compact camera)
- Optical Image Stabilization
- From 8 to 10-bit color + wider color gamut
- Improved low-light performance
- Depth of field result
- Better apps with full manual control
- RAW photography

Ted wrote:

since so much of it IS for consumption on the TV.

It’s a good selling point for new TVs even though you can argue the real benefit. Traditional TV is a bit irrelevant for a DSLR thread anyway, no?

Last Edited by Archie at 08 Nov 12:50

Archie wrote:

Patience my friend, patience

Yes, you’re right, they will continue to improve and fix the relatively easy things, as they battle with the difficulty of working at such a small scale. This summer I got my 50 year old camera out of storage and dusted its 4 × 5 inch sensor off to shoot a few portraits; it easily outperformed anything I own or could possibly afford, without even trying too hard, largely because of its scale.

Archie wrote:

It’s a good selling point for new TV’s even though you can argue the real benefit.

99.999% would never understand it correctly anyway. However I think the new HDTVs are Rec709, which funnily enough has an almost identical gamut to sRGB, which I am sure wasn’t just an accident…

Archie wrote:

Traditional TV is a bit irrelevant for a DSLR thread anyway, no?

Yes, you’re right, but how many still have a traditional analogue PAL/NTSC TV? I perhaps should have said they would view the images on a device designed and reasonably calibrated to display the sRGB (or something very close) gamut correctly.

Peter wrote:

It takes some doing to be outside the sRGB gamut…

I think it’s very easy; it’s just that hardly anyone cares. I suspect sRGB was agreed on because of ease of manufacture, cost/benefit etc.

Last Edited by Ted at 08 Nov 14:10
Ted
United Kingdom

Ted wrote:

your link about LOG gamma is one good example, which shows that even storing 16bit integers per channel is not enough.

I’ve been reading up on this, but I still don’t understand exactly what benefit LOG gamma (or any gamma) brings for a smartphone or a DSLR. Maybe you can help.

Simplified the process goes like this:
1. The sensor outputs an analog voltage per pixel
2. This is A/D converted into a digital value
→ This produces the RAW image, with a linear correlation between pixel voltage and digital value.
3. Then it goes through a number of processing steps: gamma, white balance, HEIC or JPG compression etc. (or H.264 for video)
→ That results in the HEIC or JPG or MOV file with the processing baked in

The reason why LOG gamma is advocated is that it gives better shadow detail. I just don’t understand at what point in the process it is applied. If it is applied after step 2. (which I think it is), the shadow information is already lost through A/D conversion. See following image.

In the A/D conversion process, half of the available values are ‘wasted’ on the single brightest stop, and very few values are left to store the shadows. If you now start multiplying the shadows, which is what LOG gamma tries to achieve, aren’t you just exaggerating the lack of detail?
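The “half the values go to the brightest stop” point can be shown in a few lines (my own sketch, not from the post): with a linear encoding, each stop down halves the light, so it also halves the number of code values available.

```python
# Sketch: number of linear 8-bit code values falling into each
# photographic stop (full scale = 1.0, each stop halves the light).
def codes_per_stop(bits=8, stops=8):
    full = 2 ** bits
    counts = []
    upper = 1.0
    for _ in range(stops):
        lower = upper / 2                       # one stop down = half the light
        counts.append(int(upper * full) - int(lower * full))
        upper = lower
    return counts

print(codes_per_stop())  # brightest stop first: [128, 64, 32, 16, 8, 4, 2, 1]
```

So of 256 linear values, the brightest stop gets 128 and the eighth stop down gets just one – which is the lack of shadow detail being described.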

Archie wrote:

If it is applied after step 2. (which I think it is), the shadow information is already lost through A/D conversion. See following image.

I don’t know how it’s done in the Apple case, but I understand some of the high end cinema cameras use alternating exposures, one at low gain and the other at high gain. I am sure there are other tricks.

Last Edited by Ted at 10 Nov 03:43
Ted
United Kingdom

Archie wrote:

If you now start multiplying the shadows, which is what LOG gamma tries to achieve, you’re just exaggerating the lack of detail?

LOG gamma, or using Floating Point, is probably complicating the discussion, but it does give an insight into some of the capabilities ‘built into’ older technologies.

It is probably easiest to understand one of the reasons why gamma encoding is needed in the first place. There are also some historical reasons to do with the way the electron gun in an old-fashioned TV tube works. (Those reasons possibly tie in with the way our eyes work, as they use electricity too, but I am not sure exactly on the latter.)

However, with an 8bit A/D you get 256 shades. If it works in a linear fashion, then perceptual middle grey, which is roughly 18% (think 18% grey card), comes out to 18% of 256 = 46, and 46 shades is not enough for half the range.

If you have a 10bit A/D you get 1024 shades, and you can encode that easily, using a bit of maths, to fit into 8 bit while maintaining essentially the same ‘tonal resolution’.

18% of 1024 ≈ 184; gamma encoded (exponent 1/2.2) that comes out around 118 out of 256, which is close enough to half, and at display time you just reverse the maths, or an old CRT will do it for you.

So a 10bit A/D (sans any other magic) is the minimum just to properly create an 8 bit JPG… That’s even before you throw colour into the mix…
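The arithmetic above can be checked directly (my own sketch, assuming a plain power-law gamma of 2.2): encode a 10-bit linear value down to 8 bits and decode it back.

```python
# Sketch: gamma-2.2 encode a 10-bit linear value into 8 bits, and decode back.
def encode_10_to_8(v10, gamma=2.2):
    return round((v10 / 1023) ** (1 / gamma) * 255)

def decode_8_to_10(v8, gamma=2.2):
    return round((v8 / 255) ** gamma * 1023)

mid_grey = round(0.18 * 1023)        # ~18% grey on the 10-bit linear scale
encoded = encode_10_to_8(mid_grey)   # lands near the middle of the 8-bit range
print(mid_grey, encoded, decode_8_to_10(encoded))
```

Middle grey ends up near code 117–118 of 255 after encoding, and the decode recovers the original 10-bit value, i.e. no tonal resolution is lost around mid grey despite the drop to 8 bits.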

Last Edited by Ted at 10 Nov 11:54
Ted
United Kingdom

I reckon the A-D in any digital camera is linear. In the old days you had stuff like the μ-law algorithm but I have not heard of any A-D chips (in the 14 bit plus space) which use that today. One could achieve it to some extent by dropping the A-D reference voltage, for small inputs, depending on the upstream noise floor, but I reckon the effective noise floor is that of the sensor, especially in phones whose low light perf is dreadful compared to DSLRs.
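For reference, the μ-law companding mentioned above looks like this (a generic sketch of the standard telephony formula, not a claim about any camera hardware):

```python
import math

MU = 255  # μ = 255 is the value used in North American telephony (G.711)

def mu_law_encode(x, mu=MU):
    # Compress a signal in [-1, 1]: small (quiet/shadow) values get a
    # disproportionately large share of the output range.
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_decode(y, mu=MU):
    # Exact algebraic inverse of the encoder.
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

print(mu_law_encode(0.18))  # a small input maps well above 0.18
```

The point is the same as with gamma: the non-linearity spends the available codes where the signal needs them, rather than the A-D itself being non-linear.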

Administrator
Shoreham EGKA, United Kingdom