DPR Forum

Welcome to the Friendly Aisles!
DPRF is a spin-off of dpreview. We are a photography forum with people from all over the world freely sharing their knowledge and love of photography. Everybody is welcome, from beginners to experienced professionals, and from smartphone to medium format.

DPRF is a community for everybody, every brand and every sensor format. Digital and film.
Enjoy this modern, easy-to-use software, and take a look at our Reviews & Gallery!

SIGMA's Garish Red Problem ...

Has anyone ever tried using cyan color compensating filters to reduce the intensity of the red while preserving the blues and greens?

I have been looking to see what's available, and I'm leaning towards getting a Cokin P Series filter holder, a 77mm adapter ring to fit my Sigma 28mm EX DG Macro, and a set of Cyan P700-P709 filters.
 
Not me. I prefer to work color and tone in software. It is easy to adjust color saturation, intensity, and contrast independently, per channel, in Photoshop's RGB color mode, and even more so given the intrinsic channel independence of the CIE Lab color space.

Cokin color filters were great, and the only alternative with film, but with digital imaging we have non-destructive, endlessly re-interpretable possibilities.

Luis
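As a concrete illustration of the per-channel adjustment described above, here is a minimal sketch using numpy (the library choice and the toy image values are assumptions; Photoshop and Lab mode are of course GUI tools):

```python
import numpy as np

# Toy 2x2 RGB image with a hot red channel (8-bit values).
img = np.array([[[250, 120, 100], [255, 90, 80]],
                [[240, 110, 95], [200, 100, 90]]], dtype=np.float64)

# Scale only the red channel, leaving green and blue untouched;
# this is the software analogue of a per-channel intensity adjustment.
img[..., 0] *= 0.8
img = np.clip(img, 0, 255).astype(np.uint8)

print(img[..., 0].max())  # red peak drops from 255 to 204
```

The catch, as discussed later in the thread, is that a red value already clipped at 255 holds no detail to recover; scaling it down just produces a darker, equally flat patch.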
 
Hi Steaphany,

that sounds interesting, at least. :) The crucial question might then be whether the camera's white balance would accept this measure. The camera's reaction should be to amplify the remaining red intensity hitting the sensor, to balance towards neutral white.
If this does not work within the camera, it has to be done on the computer.

Amplifying the red channel could result in stronger picture noise.

Just try it! :)

See you with nice pictures

Klaus
 
I've found that a colour balancing filter such as an 80A or 82A helps reduce blue channel noise when the lighting is reddish, such as with incandescent light.

For sunsets you want to preserve the red or it will look rather bland. :)
 
Klaus,

You bring up a good point, and I was intending to set a custom white balance to compensate. The remaining unknown is how strong a cyan correction would be needed, and whether it would vary by scene or could stay relatively fixed; that will take some experimenting.

Luis,

I started with analog, film-based photography back in the 1970s, and it has always been my view that the greatest photographic control occurs while the shutter is open. If information is lost, as it is when the Foveon imager saturates the red levels (the subject of this thread), you can't bring it back, whether you work in analog or digital. Additionally, some effects cannot be reproduced in software; the effect of a polarizing filter is a good example.

Since we have no control over the Foveon imager, the support circuitry, or what image processing occurs in the camera, only two options remain: software post-processing, as Klaus described above, or reducing the level of red light with a cyan color compensating filter. The effect on the final image may be the same, but until we can do a side-by-side comparison of the techniques, we're only guessing.

When I get the filters and run the experiments, I'll post the results. If anyone else here is inclined to try, go right ahead. As more work is done to develop techniques that maximize the potential of our cameras, everyone will benefit.
 

Steaphany, if you are getting any channels saturated (or burned out, in film) while taking the photograph, you are not handling your exposure properly.

You do have control over the "Foveon imager and the support circuitry": Sigma cameras have a good display for instantly checking the resulting images and the effect of your corrections, plus an extremely useful histogram that was put there for exactly that purpose. And of course there are in-camera menus to control saturation levels per channel, as well as contrast and sharpness.

As you can see, you can adjust your camera's saturation, sharpness, and contrast right at the scene. This is called controlled photography, as opposed to snapshooting with factory default settings. Later you can fine-tune your images during post-processing.

View attachment 892

These controls never existed in film cameras. You are right that what you didn't capture you cannot bring back; that is why controlling the camera during exposure is so important.

Of course, using absorption color filters to compensate your camera's response to a given situation will do some good, but remember that sensors use only three colors to derive the whole spectrum, so changing one of them will affect the rendition of colors that are not measured but calculated, such as yellow, violet, purple, and so on.

When, and only when, you have exhausted the tools implemented in your camera does it make sense to try other solutions. One of the major obstacles to digital imaging is the reluctance of film photographers to let go of the old analog paradigm.

Luis
 

Attachments

  • LCD Display info.jpg (85.5 KB)
Luis,

Let me clarify what I meant by "no control over the Foveon imager, the support circuitry, or what image processing occurs in the camera".

Sigma has not, as of this writing, published an SD14 application programming interface to let software developers and photographers write their own versions of the SD14's internal operating firmware. Nor has Sigma published electronic schematics that would let an electronics engineer open up the SD14 and replace its circuitry. We are stuck with the black-box SD14 just as Sigma's engineers designed it. I cannot change or control what they provide, except through the adjustments they implement in firmware.

Also, look at the title of this thread. This thread is a discussion of how to deal with the tendency, when a subject contains a significantly high percentage of red hues, for the red pixels to be pushed into saturation even while green and blue are fine and the exposure is accurately set. This phenomenon does not occur in all situations, hence the need to be aware of the potential and to find ways to resolve the problem when it occurs.

As far as I'm aware, I cannot control the in-camera saturation level, or gain, of the individual red, green, and blue channels. If I could, I'm sure this thread would have been reduced to someone saying RTFM.

As for what you describe by "sensors use only 3 colors to derive all the spectrum", this is exactly how human color vision works. It's called metamerism: the matching of an apparent color by different spectral power distributions. You also have to remember that color does not exist in nature. An electromagnetic wave with a wavelength of 550 nm is no different from one at 2 m; both are waves possessing a polarization, a wavelength, and an intensity. It is only the 550 nm photons interacting with the photosensitive proteins of the human retina that cause a signal to be sent to the brain, where conscious awareness sees green. The 2 m photons pass through the retina completely unseen, detected only by an appropriately tuned radio.

The design of color film (yes, color films have red-, green-, and blue-sensitive layers), as well as all forms of electronic imaging meant to record an image close to what the human eye sees, has its foundation in the tri-stimulus, or trichromatic, theory of human color vision. This was first postulated by Thomas Young in 1802, developed further when three different retinal cone cells were proposed by Hermann von Helmholtz in 1850, first standardized by the CIE in 1931, and finally confirmed in 1983, when measurements from a single retinal cell became possible.

References:
http://en.wikipedia.org/wiki/Metamerism_(color)
http://en.wikipedia.org/wiki/Young–Helmholtz_theory
http://en.wikipedia.org/wiki/Color_vision
http://en.wikipedia.org/wiki/CIE_1931_color_space
http://en.wikipedia.org/wiki/Color_film

BACK ON TOPIC:

It is my philosophy that changes made during exposure provide a greater level of control than software manipulations performed after the exposure has been taken.

Color compensating filters are available in the primary and secondary colors: red, green, blue, yellow, cyan, and magenta. They come in a range of densities, typically .05, .10, .20, .30, .40, and .50, which can be used singly or in combination to achieve whatever correction is desired.
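Filter strengths combine by simple density arithmetic: densities add when filters are stacked, and transmission in the filtered band is ten to the minus density. A quick sketch (the pairing of a CC20 with a CC30 is just an example):

```python
def transmission(*densities):
    """Combined transmission of stacked color-compensating filters."""
    return 10 ** -sum(densities)

# A CC20 cyan stacked with a CC30 cyan behaves like a single CC50,
# passing roughly 32% of the red band:
print(round(transmission(0.20, 0.30), 3))  # 0.316
```

A handy rule of thumb falls out of the same formula: a density of 0.30 passes about half the light in its band, i.e. roughly one stop.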

Cyan filters reduce the intensity of photons in the band corresponding to red while letting the rest of the visible spectrum through. With the camera's white balance set to custom, the camera accepts the bias the filter introduces, and the net result is a reduction of only the red, correcting the tendency of the reds to become saturated.
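A toy numerical model of why the filter-plus-custom-white-balance approach can preserve detail that clipping destroys (the 1.0 clip point and the CC50 filter strength are illustrative assumptions, not SD14 measurements):

```python
import numpy as np

# Relative scene red intensities; anything above 1.0 exceeds the sensor's range.
scene_red = np.array([0.6, 0.9, 1.3, 1.8])

# Unfiltered, the sensor clips: detail above 1.0 is lost for good.
captured = np.minimum(scene_red, 1.0)

# Behind a CC50 cyan (red transmission 10**-0.5, about 0.32), nothing clips...
filtered = np.minimum(scene_red * 10 ** -0.5, 1.0)

# ...so a custom white balance can scale the red back up afterwards.
rebalanced = filtered / 10 ** -0.5

print(captured)    # the top two values are flattened to 1.0
print(rebalanced)  # the full range survives
```

The trade-off, as Klaus noted, is that the rebalancing amplifies red-channel noise along with the signal.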
 


Too much ado about nothing. Just make sure you don't oversaturate any channel during capture, and restore the image's likeness during post-processing. Adobe Lightroom has individual control over the intensity and saturation of every color you mention. The fact that the red oversaturation occurs in some situations and not others suggests marginal rejection by the camera of unwanted non-visible wavelengths, possibly from strongly IR-reflective organic dyes in nature, or from others fluorescing under excessive UV.

There are filters that combine both. The B+W 486 (and 486 W) is a UV/IR-cut interference filter that completely blocks unwanted UV and IR radiation, specially designed for digital cameras. The B+W 415 absorbs UV and prevents the fluorescence that occurs right at the lens cement (in cheap lenses, that is) without introducing any color imbalance into the capture.

Also, please note that this thread belongs in the Digital Darkroom, for practitioners and not for engineers.

Luis
 
I just confirmed something that I suspected and want to see if anyone else has noticed this aspect of the Garish Red Problem.

Sigma Photo Pro handles Red better than IrfanView.

Here are two versions of the same X3F sunset, shot at ISO 100, white balance Sunlight, shutter 1/10, f/8 (no exposure compensation).

This was what SPP produced:

View attachment 895

This is what IrfanView produced:

View attachment 896

Both images show far more red than I was able to see, but IrfanView saturated the colors, whereas SPP was not as severe.

Has anyone else noted this type of discrepancy between SPP and their other software tools ?
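One way to put numbers on a discrepancy like this is to count how many pixels each converter pushed to the red ceiling. A sketch with numpy, where the two tiny arrays merely stand in for the SPP and IrfanView JPEGs (real code would first load the files, e.g. with Pillow's `Image.open`):

```python
import numpy as np

def red_clipped_fraction(rgb):
    """Fraction of pixels whose red channel sits at the 8-bit ceiling."""
    return (np.asarray(rgb)[..., 0] == 255).mean()

# Stand-ins for the two renderings of the same sunset:
spp_like   = np.array([[[250, 80, 60], [255, 70, 50]]])
irfan_like = np.array([[[255, 80, 60], [255, 70, 50]]])

print(red_clipped_fraction(spp_like))    # 0.5
print(red_clipped_fraction(irfan_like))  # 1.0
```

A higher clipped fraction in one rendering of the same raw file points at the converter's tone mapping, not the capture.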
 

Attachments

  • SDIM0311.jpg (137 KB)
  • SDIM0311_IrfanView.jpg (131.8 KB)
As you can see, you can adjust your camera's saturation, sharpness, and contrast right at the scene. This is called controlled photography, as opposed to snapshooting with factory default settings. Later you can fine-tune your images during post-processing.

These controls never existed in film cameras. You are right that what you didn't capture you cannot bring back; that is why controlling the camera during exposure is so important.

Sure they did. It wasn't as easy as clicking a button on a computer, but it was possible to control them through choice of film, paper, exposure, development chemistry and temperature, enlarger type (condenser or diffusion), and so on.

Of course, using absorption color filters to compensate your camera's response to a given situation will do some good, but remember that sensors use only three colors to derive the whole spectrum, so changing one of them will affect the rendition of colors that are not measured but calculated, such as yellow, violet, purple, and so on.

All colours are calculated from the three primaries; that's how pretty much any colour imaging device works. Also, the Foveon sensor doesn't actually measure RGB, but calculates it from the responses of its three layers.




As far as I'm aware, I cannot control the in-camera saturation level, or gain, of the individual red, green, and blue channels. If I could, I'm sure this thread would have been reduced to someone saying RTFM.

True. About the best you can do is use Dave Coffin's dcraw and alter his source code. Or you could extract the three layers with it and come up with your own transform; a couple of people on dpreview have done that.
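For the extract-and-transform route, the custom transform boils down to a 3x3 matrix applied per pixel. A sketch with numpy; the matrix coefficients here are purely hypothetical placeholders, since real values would come from your own calibration of the extracted layer data:

```python
import numpy as np

# Hypothetical layer-to-RGB matrix; real coefficients would be calibrated
# by you, not copied from anywhere (Sigma does not publish theirs).
LAYER_TO_RGB = np.array([[ 1.8, -0.6, -0.2],
                         [-0.3,  1.5, -0.2],
                         [-0.1, -0.4,  1.5]])

def layers_to_rgb(layers):
    """Linearly transform an (H, W, 3) stack of sensor-layer signals to RGB."""
    return np.clip(layers @ LAYER_TO_RGB.T, 0.0, 1.0)

# One toy pixel with a strong deepest-layer response:
px = np.array([[[0.2, 0.4, 0.9]]])
print(layers_to_rgb(px))  # the blue output saturates, red clips to zero
```

The negative off-diagonal terms are what untangle the heavily overlapping spectral responses of the three layers, which is exactly where a converter's red handling can go wrong.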
 