The Hitchhiker's Guide to Digital Colour

Welcome pixel hustler! If you are new to this site and seeking digital colour comprehension, you might as well start at Question #1…

21 replies on “The Hitchhiker's Guide to Digital Colour”

Hello there!
This content is truly wonderful.
I had only a muddy understanding of how color is handled digitally, and finding a proper, non-bullsh*t explanation is quite a quest.
Thanks for the hard work; I hope to read more of it soon.

PS: Reading the last couple of posts, I had cold sweat running down my spine, thinking about how a P3 display would “stretch” sRGB code values instead of using an intermediary transfer function when decoding my favorite cat pictures… Is that what is happening??? Are we doomed?!?!?

Welcome Charlie, and thanks for the kind words.

Regarding your horror, you have just about nailed it. The good news is that one operating system is properly colour managed. The bad news is that the others are not.

On one operating system, almost all items are managed. On the others, it varies from software to software. That means you can indeed expect the sRGB values to be blasted out “as is”, or otherwise handled wrongly, rather frequently.

And remember, transfer functions only control the intensity of light. They can’t change the chromaticities of the lights; that requires a different form of transform entirely. In fact, Apple’s Display P3 colour space uses the exact same transfer function as sRGB! So transfer functions alone can’t fix things!
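
To make that concrete, here is a rough Python sketch of moving a stimulus from sRGB to Display P3. The transfer function is identical on both ends; only the 3×3 matrix, derived through the usual CIE XYZ bridge and rounded here, does the chromaticity work:

```python
def srgb_decode(v):
    # Encoded sRGB value -> linear light. Display P3 uses this same curve.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_encode(v):
    # Linear light -> encoded value, again shared by sRGB and Display P3.
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1.0 / 2.4) - 0.055

# Linear BT.709 -> linear Display P3 (D65), via CIE XYZ; values rounded.
M = [
    [0.8225, 0.1774, 0.0000],
    [0.0332, 0.9669, 0.0000],
    [0.0171, 0.0724, 0.9105],
]

def srgb_to_display_p3(rgb):
    lin = [srgb_decode(c) for c in rgb]
    lin_p3 = [sum(M[r][c] * lin[c] for c in range(3)) for r in range(3)]
    return [srgb_encode(c) for c in lin_p3]

# A maximally "red" sRGB stimulus lands *inside* the P3 gamut; blasting
# the code values out "as is" would instead stretch it to the P3 red.
print(srgb_to_display_p3([1.0, 0.0, 0.0]))  # ~[0.917, 0.200, 0.139]
```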

It’s great to see folks with enough foundational concepts to arrive at their own inferences. That’s amazing!

Hello, I just wanted to say that I appreciate your hard work creating Filmic, answering questions on StackExchange and other sites, and summing up the knowledge here. You are a hero! 🙂 I have learned a lot using those resources.

I mean there are a lot of expressions I’m not used to, and I have trouble distinguishing what is really informative. Would it be possible to summarize the main points?

It’s barely a three minute read, so a summary of a summary seems likely to yield little. There are no assurances of communication across language barriers, sadly.

Hello! Every time I come back to check info and reread chapters I am instantly reminded of how lucky we are for having this. Thank you so much Troy.

For the past few months I have been reading other books and websites on this topic, and I believe (I truly hope so) that some of the core concepts are starting to latch on. I have tried my best to follow along with the different topics with Houdini and Nuke open, to feel it in my own pixels and practically understand how the theory is applied. But it is here that I have been stuck for some time: applying all the theory in a real Nuke (for example) workflow. I have been looking into Foundry’s own tutorials on Color Management, but they don’t go very deep.
For this reason I was wondering if you knew of any books, sites, or videos you could recommend covering how to apply all the knowledge here in a software workflow. I have found it quite hard and labyrinthine to take the step from comprehending the core concepts, understanding and housekeeping OCIO configs, and knowing that I am possibly missing something, to actually using them to my advantage in real scenes. Without examples of real cases it is quite complicated to know when I am messing up or when I am hitting the note.

Thanks again. Can’t wait for the next chapter of hg2dc!

Absolutely flattered that you have found any of this useful.

As you have probably discovered, the amount of reliable information of sufficient utility out in the wild is horrifically buried under layers of nonsense, numbers, and oftentimes blind appeals to authority. Couple that with the fact that every single curious mind comes at the subject from a different vantage, and can stumble or get hung up in unique ways, and it becomes incredibly challenging to suggest a single book or resource.

Worse, software is largely a mess. Plenty of things are brain wormed with some of the protocols peddled by the massive studios and corporations, or with algorithms that are totally busted-up rubbish. None of it works. Most of it is nonsense.

The main thing one can do to insulate oneself from the confusion miasma is to try to build up a nuts and bolts understanding. I’d be so bold as to say that drawing a firm distinction between a “stimulus” and a “colour” is an absolutely gargantuan step toward sniffing out garbage. If we firmly locate the creation, perception, and understanding of colour as something generated in the human perceptual system, it allows us to immediately second-guess nonsense that juggles numbers around, or that treats electromagnetic radiation as colour.

To that end, I can think of perhaps no better singular tome to at least get one thinking in the right direction than Ralph Evans’ “The Perception of Color”. While it can be incredibly expensive to purchase a hard copy, there are rumours of a PDF lurking out in the interwebs.

I would be absolutely remiss not to also link you to Dr. David Briggs’ site http://www.huevaluechroma.com. Not only does Briggs cover things from a very observer-image-author vantage, his work covers very contemporary explorations of many often overlooked concepts. It also helps to expose readers to the various models, as well as practical examples that showcase how incredible all of this is.

Those are probably two very good entry points to round out understanding, and from there it can be easier to lift oneself out of the quagmire of number fscking.

Remember… literally no one on the planet understands how our vision works! Also remember that the majority of the research up to around 1985, oftentimes funded by Kodak, was done using tools that are eclipsed several thousand times over by the device you are reading this on. Their wisdom came from experience and insightful thought. Keep that in mind when you feel you are getting lost in the catacombs of vision, image formation, and numbers!

Hello, love the posts. As a spectral rendering enthusiast, I’m curious if you ever plan to talk about spectral renderers at all, even as an aside, since you mentioned RGB renderers and some of their downsides. Anyway, I look forward to the next post.

Thanks for the kind words.

Spectral rendering is a fascinating topic, but the more important one has nothing to do with RGB nor spectral rendering. These constructs try to emulate “light transport” to varying degrees, including the somewhat ridiculous idea that a system such as RGB can emulate radiometric transport at all, given that it is firmly entrenched in photometry.

The much more pressing discussion I hope to stoke the imagination on is the *output* of these models. No, not the completely underwhelming radiometric or photometric datasets they generate via rendering, but rather how we create pictures using that data as but one ingredient.

The last couple of posts have been slowly inching in that direction, as hopefully folks can see. In traversing this landscape, we are also forced to reconcile some incredibly woeful misinformation and seductive underlying belief structures.

I believe very strongly that a focus on *what a picture is* can help to guide us in our understanding of human cognition, of which visual cognition is but one artificial demarcation facet.

Keep asking the questions. We need more questions.

Hi! I absolutely love this resource, I’ve found it incredibly useful to get a non-BS understanding of colour.

One question I have is that some Rec. 709 images (such as the ones in your Colourimetric Test Imagery repo on GitHub) have negative values. What’s up with that? How should negative values be handled by tonemappers/display transforms?

Thanks so much for the comment.

Let’s see…

Think of colourimetry as an encoding system. We can encode a “coordinate” using negative values.

Relative to the encoding system, such as the purity available in the BT.709 medium, the negative values are bogus nonsense and mean nothing within that system; it is impossible to express a purity “more pure” than the maximally pure value, or “more emissive” than the maximal emission. Relative to a Standard Observer encoding model, however, the coordinate can be “meaningful”.
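
To make the distinction concrete, here is a small Python sketch using the standard (rounded) XYZ to linear BT.709 matrix and an approximate chromaticity for a monochromatic ~520 nm stimulus:

```python
# A Standard Observer coordinate that lies outside the BT.709 gamut
# encodes to *negative* BT.709 values: a valid coordinate, but not a
# light mixture the BT.709 medium can physically produce.

# Approximate CIE 1931 chromaticity of a monochromatic ~520 nm stimulus.
x, y, Y = 0.0743, 0.8338, 0.2
X, Z = x * Y / y, (1.0 - x - y) * Y / y

# CIE XYZ (D65) -> linear BT.709, values rounded.
M = [
    [ 3.2405, -1.5371, -0.4985],
    [-0.9693,  1.8760,  0.0416],
    [ 0.0556, -0.2040,  1.0572],
]

rgb = [sum(m * c for m, c in zip(row, (X, Y, Z))) for row in M]
print(rgb)  # ~[-0.261, 0.359, -0.017]
```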

> How should negative values be handled by tonemappers/display transforms?

It’s a great question and the answer is *no one has a shred of a clue*. The reasons are many. To name a few:
* No one understands how pictures work.
* Negative values indicate stimulus that is “purer” than other values, and how to handle that without breaking neurophysiological mechanics is incredibly complex. We really are in an infancy of understanding. Maybe pre-infancy.

A “clip” of a negative lobe, because it represents *negative stimulus*, ends up *artificially increasing the stimulus* and “deforming” the “intention”. But none of that really matters too much until we get minds thinking about what pictures are, how they work, and what mechanics are being leaned into.
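
A tiny continuation of the sketch above shows that effect numerically. The values are the ~520 nm coordinate from before, and the luminance weights are the standard BT.709 ones:

```python
# Zeroing the channels that encoded *negative* stimulus leaves the
# positive channel untouched, so the result is brighter and less pure
# than the encoded "intention".

rgb = [-0.261, 0.359, -0.017]          # the ~520 nm coordinate from above
clipped = [max(c, 0.0) for c in rgb]   # naive clip -> [0.0, 0.359, 0.0]

w = (0.2126, 0.7152, 0.0722)           # BT.709 luminance weights
print(sum(a * b for a, b in zip(w, rgb)))      # ~0.200, the encoded Y
print(sum(a * b for a, b in zip(w, clipped)))  # ~0.257, artificially higher
```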

The vast majority of the brain wormed discussion around colourimetry is so utterly ill framed that it amounts to noodleberries humping footballs to collect a bag. This includes books written on the subject of “gamut mapping”, because the basis is so utterly disconnected from neurophysiological mechanics as to be nothing more than incantations of magic trying to control the ether.

For anyone reading this, skip all the rubbish around colourimetry, and focus on the minds like Eric Schwartz and Stephen Grossberg. That’s where the real work rests before us, and we haven’t even begun to scratch the surface with respect to pictures and their formation.

Thank you!

Your informative Blog/Rant is incredibly entertaining as well as a great learning tool. After plowing through all 30-something questions, I realize that there probably isn’t going to be an answer for us pixel pushers.

But I think I’ve learned:

  • Color is in our heads, and it can be very finicky.
  • There is a color space standard called Rec.709, and I should maybe just stick with that.
  • And I still have no idea how color management could even be possible.

I can’t wait for more posts! On a side note, do you have any recommendations for setting up your AgX to work with BRAW clips?

Thanks again

> After plowing through all 30-something questions, I realize that there probably isn’t going to be an answer for us pixel pushers.

Given no one on earth has “solved” colour even remotely, that seems like a fair claim.

That said, I reckon there is invaluable understanding in embracing that the very idea of “colour management” is fraught with nonsense around stimuli.

Beyond that, there *are* some patterns that image authors can embrace and lean into once the general idea that “colour is stimuli” is dismissed and rejected. Those patterns are incredibly powerful.

> On a side note, do you have any recommendations for setting up your AgX to work with BRAW clips?

The AgX experiment was a proof of principle that works with any physically realizable, colourimetrically defined space. Pick one that creates the pictures that work for you, and use a CST node in Resolve to conform to that preference.
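
For folks working outside of Resolve, the same conform-then-form idea can be sketched with OCIO’s Python bindings. The config path and the colour space / display / view names below are hypothetical placeholders; substitute whatever your particular AgX config actually uses:

```python
import PyOpenColorIO as ocio

# Hypothetical config path and names; check your config for the real ones.
config = ocio.Config.CreateFromFile("agx_config.ocio")

# Conform source colourimetry to the config's working space, then form
# the picture through the AgX view for the target display.
proc = config.getProcessor(
    "Linear BT.709",              # assumed input colour space name
    "sRGB",                       # assumed display name
    "AgX",                        # assumed view name
    ocio.TRANSFORM_DIR_FORWARD,
)
cpu = proc.getDefaultCPUProcessor()

pixel = cpu.applyRGB([0.18, 0.18, 0.18])  # mid-grey, open-domain tristimulus
print(pixel)                              # display-referred AgX output
```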

After posting this comment I happened to discover the compendium you contributed to the Blender Artists Forums. It clarified some of the answers that I learned from here, so I was going to edit/update my original comment. I’m happy you answered back so quickly!

Here are some more tidbits I’ve gathered/understand (at least for this weekend, till I have to come back and read everything again):

  • Hey, why not just make images that look great to me!
  • An actual color space has three rules, and it’s usually in reference to something (like a camera or a display)
  • I was having a hard time getting Resolve Color Management to “work” like I thought it should because no one understands how our eyeballs and brain work to “manage color” in the first place.
  • I don’t know why ACES or DaVinci Wide Gamut have their triangles outside of the CIE 1931 horseshoe. There is math involved, but I’m not sure if it’s necessary, unless ARRI cameras can pick up ghosts now (which would be pretty cool, though).
  • I am excited to try some of those patterns we can lean in to. It feels like we can use those like a practical visual effect (like forced perspective). I have no idea where to start, except for Michael Bach’s awesome website!
  • Your Kodak 5219 example of how we fission the fields into “Boy howdy that is a red red!” is fantastic!! We really stumbled upon a happy accident when we created photography.
  • And with AgX, get your raw data into an actual colorspace with a CST node then fiddle away!

Thanks again, Troy, for putting up the good fight. Lots of people don’t like being proven wrong, but for me it’s the only way I learn. I’m glad I studied your thoughts first before latching on to the ACES and RCM crowd, haha!

Thanks for helping out us simple-hobbyist-pixel-pushers!

This is also my 7th time trying to post this reply, so forgive me if you see 8 similar replies 8-|

> Hey, why not just make images that look great to me!

I’d say that identifying what “looks great” is tied into some of these concepts of cognitive fissioning. Putting the burden on an author to completely form a picture from the ground up is *not great*. Hence these ideas slowly ascending toward broader ideas around Pictorial Formation.

> I am excited to try some of those patterns we can lean in to. It feels like we can use those like a practical visual effect (like forced perspective). I have no idea where to start, except for Michael Bach’s awesome website!

Indeed. Being able to get a (albeit fleeting) glimpse at the mechanisms *can* help authors produce artwork. It’s tricky to get to the nuts and bolts, though, before having a decent foundation in the slightly more abstract ideas.

> Your Kodak 5219 example of how we fission the fields into “Boy howdy that is a red red!” is fantastic!! We really stumbled upon a happy accident when we created photography.

The chemical creative film case is fascinating because it traces a line between mean energy and density. The way the “floor” is offset upwards, creating an infinite series of smaller and smaller “sub gamuts”, is a remarkable way of thinking about the density relationship to energy. Conversely, thinking about a “ceiling” gradually lowering as a form passes into shadow is also quite useful. These relationships only become apparent after a critical examination of “how creative chemical film works” from the *cognitive* vantage.
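
For anyone who wants to poke at the floor idea numerically, here is a toy Python sketch. It models nothing about any actual stock such as 5219; the floor constant is invented purely for illustration:

```python
# A toy reading of the "offset floor": as overall energy rises, every
# channel's floor lifts, so each exposure level has a smaller "sub gamut"
# nested inside the last, and pure stimuli drift toward achromatic.

def toy_film(rgb, floor_gain=0.4):      # floor_gain is an invented constant
    mean = sum(rgb) / 3.0
    floor = floor_gain * mean            # the floor rises with mean energy
    return [floor + (1.0 - floor) * min(c, 1.0) for c in rgb]

for exposure in (0.1, 0.5, 1.0):
    print(exposure, toy_film([exposure, 0.0, 0.0]))
# The "red" stays red-leaning, but the G/B floors creep upward with
# energy, shrinking the attainable purity at each step.
```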

If you walk away with anything, walk away with the belief that *everyone* should feel empowered to think about how we “see”. Pretending there is a consensus or that these problems are “solved” is the gravest lie out there.

Keep hammering!

Hey there! First off, wanted to thank you for all your work and the information on this site, super useful stuff.

I’ve been trying to figure out how AgX actually works for the last couple days, and have only ended up with questions about its design.

> The AgX experiment was a proof of principle that works with any colourimetrically defined spaces.

Can you elaborate a little on this? Looking at the OCIO config generation code, it seems intended only for input using BT.709 primaries and (the part I find *really* confusing) output in encoded sRGB.

Basically, will AgX work if the input uses Rec. 2020 or AP1 primaries, and the internal adjusted color space is built from those primaries instead? What should the output be interpreted as in that case?

I also noticed the OCIO config does have options for a Display P3 display, which simply does a color space transform on the output. Does this mean AgX produces colors which are out of gamut in sRGB (and thus clipped afterwards) but within the P3 primaries? Or is the output simply transformed to P3 while staying within the sRGB/BT.709 gamut, not using the “full” Display P3 space?
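
For reference, the pattern I’m describing looks roughly like this in OCIO’s Python API (the colour space names here are my own hypothetical stand-ins, not the config’s actual names):

```python
import PyOpenColorIO as ocio

# A display colourspace that simply converts the sRGB-encoded AgX
# output over to Display P3; names are hypothetical placeholders.
p3 = ocio.ColorSpace(name="AgX Display P3")
p3.setTransform(
    ocio.ColorSpaceTransform(src="AgX Base sRGB",   # assumed name
                             dst="Display P3"),     # assumed name
    ocio.COLORSPACE_DIR_FROM_REFERENCE,
)
```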

Thanks again!
