For the last fifteen years, sRGB has been the primary standard for computer monitors (and for consumer-level printing). That's changing now, as wider-gamut LED-backlit monitors become common. Usually, photographers use these with a wider color space like Adobe RGB (aRGB), which is semi-standard; my camera can save JPEGs in that space natively, for example.
But there's a new standard being widely pushed in the AV industry to replace sRGB: IEC 61966-2-4, better known as xvYCC (or "x.v.Color", for marketing purposes). This color space has a gamut 1.8× larger than sRGB, covering 90% of the color range of human vision (instead of the uninspiring 50% covered by our current common denominator). Read much more at Sony's web site on xvYCC.
The important point, though, is that this isn't theoretical. It's part of the HDMI 1.3 standard, along with a specification for color depths of 10 to 16 bits per component (that's called "Deep Color"). Unlike aRGB, which is basically a professional niche thing, there's broad support in consumer-level gear.
That's the background. The question is: given that this is widely catching on, and that we're all likely to have computer (and TV!) hardware capable of supporting it in the next few years, why is this being sold as basically only a video thing? It seems like the camera industry would be happy to get on board.
Sony is big into the idea, and launched video cameras supporting it four years ago now. The PlayStation 3 supports it, for goodness' sake! Why not put it in the Sony Alpha dSLRs as well? And Sony's not alone; Canon has video cameras supporting it too.
Of course, if you're shooting RAW, in-camera support is unimportant. It's the people writing converter software who would have to get on board; why isn't there a push for this? As I understand it, xvYCC is an extension of YCbCr, which is already used in JPEG files. But as I read the literature, I find lots of mentions of updated MPEG standards, but nothing about still photographic images.
Why can't we have nice things?
Answer
xvYCC is a particularly clever way of encoding color data: it abuses the YCC representation by using previously-forbidden combinations of values to represent colors outside the gamut of the RGB space underlying the YCC scheme. That is, some YCC tuples decode to colors with negative R, G, or B values. Previously these were simply illegal; in xvYCC they are permitted, and displays with bigger gamuts than the RGB system are welcome to render them as best they can. So really it's a clever, mostly-compatible hack to get some extra gamut without changing the format much.
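To make that concrete, here's a minimal sketch using the standard BT.709 decoding coefficients; the particular input values are just an illustration I picked, not anything from the spec. A perfectly well-formed YCbCr triple decodes to a negative blue component, which classic video would treat as illegal but which xvYCC keeps as a more-saturated-than-the-primaries color:

```python
# Minimal sketch: BT.709 YCbCr -> RGB decode, using normalized
# "analog" ranges rather than 8-bit code values.

def ycbcr_to_rgb_709(y, cb, cr):
    """BT.709 YCbCr -> R'G'B'. y in [0, 1], cb/cr in [-0.5, 0.5]."""
    r = y + 1.5748 * cr
    g = y - 0.1873 * cb - 0.4681 * cr
    b = y + 1.8556 * cb
    return r, g, b

# A tuple that is a perfectly valid point in YCbCr space...
y, cb, cr = 0.5, -0.4, 0.3
r, g, b = ycbcr_to_rgb_709(y, cb, cr)
print(f"R={r:+.3f}  G={g:+.3f}  B={b:+.3f}")
# -> R=+0.972  G=+0.434  B=-0.242
# B < 0: illegal in plain BT.709 video, but in xvYCC it simply
# means a color more saturated than the RGB primaries can reach.
```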
Does it make sense to use it in still photography? I don't really think so. There's no real need to be compatible with YCC, so why not use a wide-gamut space like ProPhoto RGB? Or better yet, since extra bit depth is cheap for stills, why not go with something like CIELAB, which covers the whole gamut humans can perceive? You have enough bits that the ability to encode all those imaginary colors doesn't cost you any appreciable color resolution.
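To put rough numbers on that bit-depth claim, here's a back-of-envelope sketch. It assumes the common Lab encoding ranges (L* in 0 to 100, a*/b* in roughly -128 to +128) and the usual rule of thumb that a just-noticeable difference is around ΔE ≈ 1:

```python
# Quantization step size of a Lab encoding at 8 vs 16 bits.
# Assumed ranges: L* spans 100 units, a*/b* span 256 units.

for bits in (8, 16):
    steps = 2 ** bits - 1
    dl = 100.0 / steps    # smallest representable L* difference
    dab = 256.0 / steps   # smallest representable a*/b* difference
    print(f"{bits:2d}-bit Lab: dL*={dl:.5f}, da*/db*={dab:.5f}")

# ->  8-bit Lab: dL*=0.39216, da*/db*=1.00392
# -> 16-bit Lab: dL*=0.00153, da*/db*=0.00391
# At 16 bits the steps are hundreds of times finer than a
# just-noticeable difference, so "wasting" code values on
# imaginary colors costs nothing visible.
```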
Of course, the question of camera support is a little bit irrelevant: if you really care about color, you should pull raw detector values from the camera and start from those. Even then you're still stuck in the camera's gamut, and the accuracy of your color representation will also depend on how well your camera's filters approximate the spectral response of human cones. Get that wrong, and colors that look identical to the eye will look different to your camera; no encoding will fix it. In fact, this happened with one cheap digital camera I had: its IR sensitivity made embers look purple. Even if you screen out IR, things with spiky spectra, like rainbows, fluorescent lights, and some minerals (and maybe some dyes), will show this effect even when continuum spectra look okay.
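Here's a toy sketch of that failure mode; every curve in it is invented for illustration, not measured data. It constructs a "metameric black" (a spectrum in the null space of the eye's sensitivity matrix) and adds it to a smooth spectrum, so the eye's responses don't change at all while a camera with slightly different filters sees a different color:

```python
# Toy metamerism demo: two spectra the "eye" cannot tell apart,
# but a "camera" with mismatched filters can. All sensitivity
# curves here are made-up Gaussians, purely illustrative.
import numpy as np

wl = np.linspace(400, 700, 31)              # wavelengths, nm

def band(center, width):                    # toy sensitivity curve
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

eye = np.stack([band(565, 50), band(540, 45), band(445, 30)])  # "LMS"
cam = np.stack([band(590, 55), band(535, 40), band(460, 35)])  # filters off

smooth = np.exp(-0.5 * ((wl - 550) / 120) ** 2)  # smooth spectrum

# A "metameric black": any direction in the null space of the eye
# matrix. Adding it changes the spectrum but not what the eye sees.
# (Rows 3+ of vt span the null space, since eye has rank 3.)
_, _, vt = np.linalg.svd(eye)
spiky = smooth + 0.5 * vt[5]
# (This toy spectrum may dip negative; a real physical pair would
# need a large enough smooth base to stay nonnegative.)

print("eye sees:", eye @ smooth, "vs", eye @ spiky)  # identical
print("camera  :", cam @ smooth, "vs", cam @ spiky)  # different!
```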