A post by Dan Margulis about color correcting "by the numbers" (BTN)

-default
Posts: 1916
Joined: Thu Mar 26, 2015 1:53 am

Postby -default » Tue May 29, 2007 12:47 am

This was posted today by Dan Margulis on his color theory group
http://tech.groups.yahoo.com/group/colortheory/ .

As much as any single article can, I think it sums up the motivation behind correcting "by the numbers", and it eloquently describes the major points this class is designed to cover: using the shadow and highlight points to extend the range of the image, removing color casts, and adding color saturation and variation in Lab mode.

Dan's colortheory group, BTW, is an excellent resource that you may find interesting to join.





In view of recent comments about the function of color by the numbers and about viewer color preferences, a discussion of what we're trying to achieve in color correction is in order.

"Correction" means to make better. Sometimes the correction only makes things better in our own eyes and those of people who think like we do, whereas others don't like it. This type of correction is impossible to teach, as it depends on personal preference.

Many moves, however, are accepted *universally*--100% of viewers approve. Isolating these moves is what color by the numbers does. It is not always possible to spot these potential moves on a screen, which is why experienced people make such use of the numbers in the Info Palette. In terms of contrast, setting a full range as opposed to something shorter appeals to 100% of viewers. Color-wise, by the numbers is a negative, rather than affirmative, approach. It says that any number that is logically incorrect has to be changed to something that could conceivably be correct, even if that something is not the optimal value. Doing this also appeals to 100% of viewers, as opposed to leaving the impossible color there.
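
[Editor's note: to make "reading the numbers" concrete, here is a minimal Python sketch (using Pillow and NumPy) of the kind of check the Info Palette supports: sample an area that ought to be neutral and see whether its numbers are logically possible. The filename, coordinates, and tolerance below are illustrative assumptions, not values from the article.]

Code:

import numpy as np
from PIL import Image

def sample_region(path, box):
    """Mean R, G, B of a rectangular region (left, top, right, bottom)."""
    img = Image.open(path).convert("RGB")
    region = np.asarray(img.crop(box), dtype=float)
    return region.reshape(-1, 3).mean(axis=0)

r, g, b = sample_region("photo.jpg", (120, 340, 160, 380))  # hypothetical file and coordinates
print(f"R={r:.0f}  G={g:.0f}  B={b:.0f}")
# In a known-neutral area, R, G, and B should be roughly equal; a larger
# spread is a "logically incorrect" number that has to be changed.
if max(r, g, b) - min(r, g, b) > 6:  # tolerance is an assumption
    print("Impossible color: the neutral region carries a cast.")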

Prepress houses in the 1980s and early 1990s understood this, or at least their scanner operators did. Some of it (setting highlights and shadows) falls in the 100%-of-viewers-approve category. Other parts (like the mantra "clean and bright is always right") appealed to the majority of clients but not to everyone.

I first became interested in the question of 100% acceptance in, as I remember, 1993. A man who was peddling a certain acquisition method for RGB images (I can't recall whether it was called a "profile" back then) wrote an article in one of the magazines I was working for, arguing that his method was much better than any other. As evidence, he showed how it had acquired a certain image of a young boy and girl of different races, and compared it to two other common acquisition methods. The boy was wearing a white t-shirt. The author said that his own version was so clearly better than either of the other two that it proved that anyone who bought into the other methods was an idiot.

Today, we know that any profile will do well on certain images and less well on others. So, as I started to read the article, I said to myself that it was ridiculous: of course the author could show one image where his method beat the other two, but there would be other images where it would be the worst.

Then, however, I looked closely at the three versions. One of the two competitors was clearly unacceptable, because it blew out the boy's shirt. But I preferred the other to the one that the vendor was claiming as the best. My preference was so strong that I had the magazine send me the original RGBs, so that I could see what the author was looking at, in case there had been a printing problem. There hadn't been. And yet, the author was obviously so convinced that a version I didn't like was best that he had wrapped an entire article around it.

I pulled proofs of the three images and showed them to my colleagues, asking them to state, without knowing the story, which they considered to be the best and worst of the three. Then I showed them to clients, and others, and then to classes, and to lecture groups, eventually over a thousand people. The results didn't vary with viewing condition, age, gender, or level of graphic arts experience. A bit over 50% of the audience agreed with me. A bit over 40% agreed with the author. Of these, most but not all rated the version with the blown-out white shirt as the worst of the three. However, more than five percent of the viewers, but less than ten percent, rated the version with the blown-out shirt as the *best* of the three. Upon inquiry, they always said that they understood that the shirt was a defect but that they were willing to forgive it because it made the rest of the image seem contrastier.

The author, therefore, was not necessarily wrong in saying that his version was best. The final judge of that is always the client, and based on this vote, the client might agree with him. But he was certainly wrong in his apparent assumption that the entire world would agree with him that it was the best of the three.

You would think that, having seen this error, I would not make a similar one myself, but in 1995 I did exactly that, showing two classroom versions of a picture of a baby harp seal, a white animal. The student who did the second image had been so concerned to maintain a nearly pure white that he had blasted away most of the detail. I stated that this was an example of an expert-only error and that such stupidity would be beyond the ability of mere intermediates.

Unfortunately, as a huge online response showed, many people did not agree. As with the previous author's image, I did a survey of more than a thousand people, adding a third variant that I thought no one would vote for. I was right that nobody voted for the third one, but on the other two the vote was close to 50-50. Furthermore, it was clear from the comments that, unlike the previous image, this time people simply could not see the version they voted against as even acceptable. I understood, then, why many professionals think their clients are crazy for some of the decisions they make, not realizing that half the world might make the same choice.

I then took the concept further, choosing 25 sets of three images. In each case I asked, which is the best of the three, and which the worst? And I added another significant question: granted that hundreds of people are voting, what percentage of them do you think agrees with your choice of which is the best of these three?

After getting votes from a general audience, I designated seven of the 25 sets of three for the opening day of my ACT classes. For several years thereafter (I dropped the experiment roughly in 2002) every student added their vote. One of the exercises still appears in PP5E, the Chapter 1 picture of a hog. As the text indicates, Figure 1.2C is preferred by around a 2-1 margin to Figure 1.2B.

From these three cases, you might conclude that such a large audience would never agree 100% on any image question. It's not true. Every one of the hundreds of people voting chose Figure 1.2A (which isn't all that different from 1.2B) as the worst, just as 100% of viewers chose my third version of the seal as worst. Even when people split on images there is often 100% agreement on the basics. In the seal picture there *was* 100% agreement as to which version had the better contrast, and as to which had the better color. In the hog image, the vote was very close to 100%.

In my class exercises, there aren't hundreds of voters, but usually nine, occasionally more, sometimes fewer. We do 20 images in the basic class, 25 in the advanced. In each case the format is single-elimination until we are down to two or, occasionally, three images that are clearly better than the others, whereupon the serious comparisons begin.

This means that in the typical class there are around 300 head-to-head comparisons that are discussed by a group. Over the life of the class, I've hosted maybe 10,000 such individual comparisons of images. Most of these are 100% votes, as often people make by-the-numbers mistakes that cause the voters to reject their work immediately. Even when we have cut down to two or three images out of the original eight or nine, the vote as to which is best is still 100% more often than not. Even when it's not 100% as to which one is better overall, it's usually 100% as to the points that are considered better for one as opposed to the other. Occasionally, the vote is purely on a matter of taste. If somebody takes John's ballet picture, adds color variation, and moves slightly away from blue, that would get a 100% vote in favor. If there were a second version, identical in luminosity but going further away from blue, that would probably get a split vote purely as a matter of taste. But almost all the time a slight advantage in technical skill outweighs artistic preference.

Based on these votes, most of which went 100% one way or another, the following generalizations are possible.

*Failing to establish full range, that is, to set a light point and a dark point at the extremes of what can be printed, is heavily punished. It is rare to see such an image get any votes even if it is artistically superior to some other image that does set full range.

*Leaving impossible colors in the image by failing to check known areas almost always results in immediate rejection by the audience. Any version that leaves a dark-haired person's hair just slightly on the green side, for example, will lose by a 100% vote to almost any version that has no similar flaw and that uses full range.

The above two are the foundation of color by the numbers. They are not "artistic" questions. Comply, and your images compete. Don't, and they won't: the vote will be 100% against you.
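
[Editor's note: as one concrete illustration of the first rule, here is a minimal Python sketch of setting a light point and a dark point by stretching the image to chosen printable extremes. It is only a sketch of the idea, not Margulis's procedure; the targets of 245 and 10, the percentile clipping, and the filenames are assumptions.]

Code:

import numpy as np
from PIL import Image

def set_endpoints(path, out_path, lo_target=10, hi_target=245, clip=0.5):
    """Stretch the image so its darkest and lightest significant pixels land
    on chosen printable extremes, establishing full range."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    # Find the current dark and light points, ignoring the most extreme
    # `clip` percent of pixels so stray speculars don't define the range.
    lo = np.percentile(img, clip)
    hi = np.percentile(img, 100 - clip)
    out = (img - lo) / max(hi - lo, 1e-6) * (hi_target - lo_target) + lo_target
    Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save(out_path)

set_endpoints("original.jpg", "full_range.jpg")  # hypothetical filenames

[Applying one stretch to all three channels extends the range without shifting the color balance; the impossible-color check of the second rule still has to be made separately.]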

Then, there are the areas in which people differ. Some of these are simple matters of taste that are difficult to generalize about: some people prefer darker, richer colors and others cleaner, brighter ones, for example. Some prefer a rather austere look and others a relatively saturated one. People's tastes notoriously vary about sharpening. And there are several areas in which preferences to move away from the art clearly exist, but the question is how far to go, and there isn't any "right" answer. These fall into two categories: the "emotional" ones that relate to how we *think* that we remember seeing things, and the "visual" ones that pertain to how humans actually see things in comparison to cameras.

The "emotional" ones:

*There is a near 100% preference for "healthy" skintones even when the original skin was sallow. This was first pointed out by research in the 1950s, but it was inaccurately described as a preference that the skin should be yellower. That's true with respect to light-skinned Caucasians; it makes the skintone seem more golden. But for Caucasians of medium complexion we simply prefer a more saturated color than the camera records; for dark-skinned Caucasians and ethnic groups with skin at least as dark, we tend to favor adding more magenta.

*We almost always prefer greener greens than the camera records.

*Any time the sky is light, there is near 100% endorsement of the idea of making it darker (excluding the lightest parts of clouds, obviously). It is not always true that we want the sky bluer, but we almost always want it darker than in the original.

The "visual" ones:
*Human vision is self-calibrating; it neutralizes the ambient lighting. Cameras pick up all kinds of casts and mini-casts that human observers wouldn't see. As a rule, we wipe them out. However, warmish casts are tolerated to a much greater degree than coldish ones, particularly if the scene suggests warmth. Example: a swimming pool in the tropics, with palm trees. The cement lip of the pool is approximately gray in real life. Many but not all people prefer making this color redder, incorporating a cast. Some like the warming effect so much that the trees start to lose their greenness. Others prefer to stick closer to a gray. But *nobody* accepts a cool cast in this image. If the cement measures even slightly negative in the A or B channels, there is an immediate 100% vote against the image.

*Any image that is mostly of the same color, such as a portrait or a forest scene, provokes the human simultaneous-contrast mechanism. We see more color variation than the camera does. Regardless of the quality of the original photograph, a small move in this direction (often by steepening the A and B channels) gets 100% approval.
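
[Editor's note: both "visual" points can be expressed numerically. The sketch below, which assumes scikit-image for the Lab conversions (the filenames, coordinates, and the 1.15 factor are illustrative assumptions), measures whether a known-neutral area has drifted negative in A or B, and steepens the A and B channels by scaling them around zero, leaving L untouched.]

Code:

import numpy as np
from skimage import color, io

def mean_ab(path, box):
    """Mean a and b of a region that should be neutral (a = b = 0 in Lab)."""
    rgb = io.imread(path)[box[1]:box[3], box[0]:box[2]] / 255.0
    lab = color.rgb2lab(rgb)
    return lab[..., 1].mean(), lab[..., 2].mean()

def steepen_ab(path, out_path, factor=1.15):
    """Add color variation by steepening a and b: scale both around zero so
    colors move apart while luminosity (L) is untouched."""
    lab = color.rgb2lab(io.imread(path) / 255.0)
    lab[..., 1:] *= factor
    io.imsave(out_path, (np.clip(color.lab2rgb(lab), 0, 1) * 255).astype(np.uint8))

a, b = mean_ab("pool.jpg", (120, 340, 160, 380))  # hypothetical coordinates
# Negative a (toward green) or negative b (toward blue) on the cement lip is
# the cool cast that draws an immediate 100% vote against the image.
print(f"a={a:+.1f}  b={b:+.1f}")
steepen_ab("portrait.jpg", "portrait_steepened.jpg")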

By listing all these things, I'm trying to clarify some of the confusion about "right" and "wrong" in color correction. I take the view that any move that gets near-100% approval is Right, and any that gets near-100% disapproval is Wrong. Because, as noted above, I've been through approximately 10,000 image-comparison exercises in which a group of neutral observers expressed and explained their preferences, I believe I have a much better ability to predict what will get the 100% vote than the typical person does.

When I submit my own version of an image to a class, I do not like it when 100% of the voters find that it is not as good as somebody else's. Of course, once I see that other picture, I know that they are right, but certainly I didn't know it when I finished my own. I believe most other professionals also dislike learning that 100% of observers disapprove of one of their images. The way to avoid this experience is to recognize that a few things in imaging are truly matters of taste, but many others are absolutes. I've listed seven factors above. The first two are absolutes. The last five are matters of taste to the extent that people will disagree as to how far to go, but they are absolutes in the sense that it is Wrong to leave the picture the way it was found.

Dan Margulis


ggroess
Posts: 5342
Joined: Wed May 24, 2006 2:15 am
Contact:

Postby ggroess » Tue May 29, 2007 12:46 pm

Well at least now I do not have to excuse myself from not liking an image...I can just blame Dan...

What a great article about the subjective side of the color correction process. 

Greg

-default
Posts: 1916
Joined: Thu Mar 26, 2015 1:53 am

Postby -default » Wed May 30, 2007 7:36 am

Yes, definitely food for thought. It would be interesting to arrange some sort of voting for this class, but I have not yet figured out how to do it.

