IRIS TUTORIAL
Color techniques

 
How to combine color channels into a true color image?

First, remember to select PIC format (Settings dialog box of File menu)!

Consider the grey-level images of the planet Jupiter taken through a red filter, a green filter and a blue filter respectively:

 


Image JUP_R.PIC


Image JUP_G.PIC


Image JUP_B.PIC

Open the (L)RGB dialog box of the View menu. Enter the names of the three grey-level layers of the Jupiter image:

Click the Apply button. Clearly, the three layers are not registered (i.e. not aligned)...

Select the Red option, enter a step of one pixel and click a pad arrow:

Each click on an arrow translates the red layer by one pixel in the corresponding direction, relative to the green and blue layers (note that you can enter a fractional value for the step). Align the red layer using this method. To help you, you may freely modify the visualization thresholds during the operation.
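The effect of one arrow click can be sketched in numpy. This is a hypothetical helper, not Iris source code: `np.roll` wraps pixels around the edges, whereas a true translation does not, and the fractional steps Iris allows would require interpolation.

```python
import numpy as np

def shift_layer(layer, dx, dy):
    """Translate a channel image by whole pixels.

    Sketch of one arrow click in the (L)RGB dialog. np.roll wraps
    around at the image edges, unlike a real translation, and
    fractional steps would need interpolation.
    """
    return np.roll(np.roll(layer, dy, axis=0), dx, axis=1)
```

Calling `shift_layer(red, 1, 0)` moves the red layer one pixel to the right relative to the other two layers.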

Now select the Blue channel and align it interactively relative to the red and green images:

Click OK and save the aligned true color image (48-bit pixel coding):

>SAVE JUPITER

Now make the sky background dark. Define a region of the sky and run the BLACK command:

>BLACK

Now adjust the white balance: select a supposedly white area of Jupiter, then run the WHITE command:

>WHITE
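The principle behind these two steps can be sketched in numpy. The exact statistics Iris uses are assumptions here; this only illustrates the idea of a background offset followed by a per-channel scaling.

```python
import numpy as np

def black_white_balance(r, g, b, sky_box, white_box):
    """Sketch of the ideas behind BLACK and WHITE (the statistics Iris
    actually uses are assumptions).

    BLACK: subtract the median of a selected sky region from each
    channel so the background becomes dark and neutral.
    WHITE: scale G and B so a supposedly white area has the same mean
    level in the three channels.
    Boxes are ((row0, row1), (col0, col1)) index ranges.
    """
    sky = (slice(*sky_box[0]), slice(*sky_box[1]))
    white = (slice(*white_box[0]), slice(*white_box[1]))
    r, g, b = (c - np.median(c[sky]) for c in (r, g, b))
    target = r[white].mean()                   # red taken as reference
    g = g * (target / g[white].mean())
    b = b * (target / b[white].mean())
    return r, g, b
```

The choice of the red channel as the reference level is an illustrative assumption.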

The result


The aligned and white-balanced image of Jupiter.

For white balance you can use the rings of Saturn, the polar cap of Mars, or predefined values (RGB balance command of the Digital photo menu):

Now you can increase the contrast of some parts of the image. Open the Wavelet command of the Processing menu and act on the finest-scale sliders:

 

Click OK to finish and, for example, export your image:

>SAVEJPG JUPITER 1

The file JUPITER.JPG is now present in your working directory. You can also export a 48-bit image (PNG or TIFF) from the Save dialog box of the File menu, or enter a console command such as:

>SAVEPNG JUPITER

Of course, the (L)RGB dialog box can also perform registration of the color planes for deep-sky images, compensate for differential atmospheric refraction, and so on.


Unaligned channels


Aligned channels

How to extract the RGB layers from a 48-bit image?


Pleiades image (EOS350D + 50 mm lens).

Run the RGB separation command of the Digital photo menu:

The command produces the files R.PIC, G.PIC and B.PIC for the red, green and blue layers respectively.


The Red image


The Green image


The Blue image

You can also run the equivalent console command SPLIT_RGB, for example:

>SPLIT_RGB R G B

To split a set of images, use the SPLIT_RGB2 command. The syntax is:

SPLIT_RGB2 [IN] [R] [G] [B] [NUMBER]

For example, to split the sequence M45_1, M45_2, M45_3, run the command:

>SPLIT_RGB2 M45_ R G B 3

The produced set of images is R1, R2, R3, G1, G2, G3, B1, B2, B3.

To recombine a 48-bit image you can use the (L)RGB command of the View menu, or the console command TRICHRO. For example:

>TRICHRO R G B

or, in a more compact form:

>TR R G B

HSI transformation

The RGB2HSI command converts the traditional RGB representation of a true color image to the HSI space (Hue, Saturation, Intensity). This conversion produces three images. The hue image corresponds to the dominant color (color tone), the saturation image corresponds to the purity of the colors, and the intensity image corresponds to the magnitude of the signal in the color image.

More precisely:

I) In the hue image, pixels that are predominantly red in the trichromatic image are represented by high levels, predominantly green pixels by intermediate levels, and predominantly blue pixels by low levels. If the levels are represented by angles from 0° to 360°, red corresponds to 0°, green to 120°, blue to 240°, and red again to 360°.

II) In the saturation image, the areas of the tri-color image where the colors are purest are represented by high levels. Low saturation gives a gray-looking image, intermediate saturation produces pastels, and high saturation results in vivid colors.

III) The intensity image is the one that most resembles each of the monochromatic components of the tri-color image. It expresses the average intensity of the three fundamental color components in grey-level form.


The coordinate system of the HSI model is cylindrical. The value of S is a ratio ranging from 0 to 1 on the side of the color circle. A point at the apex (R=G=B=0) is black. The point S=0, I=1 (situated on the achromatic axis) is white. Intermediate values of I for S=0 are the grays. Note that when S=0, the value of H is undefined. For example, pure red is at H=0°, S=1, I=1; pure blue is at H=240°, S=1, I=1. So changing H with S=I=1 corresponds to selecting a pure color. Adding white to a pure color corresponds to decreasing S without changing I. Tones are created by decreasing both S and I. Shades are created by decreasing I while keeping S at unity.
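The classical cylindrical HSI formulas behind this model can be sketched for a single pixel. This is a sketch of the standard math, not Iris source code; component values are assumed normalized to [0, 1].

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (components in [0, 1]) to (H in degrees, S, I).

    Classical cylindrical HSI formulas; a sketch of the math behind
    RGB2HSI, not Iris source code.
    """
    i = (r + g + b) / 3.0
    if i == 0.0:
        return 0.0, 0.0, 0.0           # black: H undefined, set to 0
    s = 1.0 - min(r, g, b) / i         # purity: 0 = gray, 1 = pure color
    if s == 0.0:
        return 0.0, 0.0, i             # gray: H undefined
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                          # lower half of the color circle
        h = 360.0 - h
    return h, s, i
```

As a check, pure red (1, 0, 0) maps to H=0°, S=1, I=1/3, and a mid-gray (0.5, 0.5, 0.5) maps to S=0, I=0.5.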

Colorimetric transformation is a powerful tool that can profoundly modify the appearance of color images. In particular, the HSI representation is often used in science to enhance specific colored details in an image. Properly used, transformations between the HSI and RGB systems allow you to reveal subtle colored characteristics in images and thus make them easier to interpret.

For example, consider this image of the Horsehead nebula:


EOS350D + 50 mm lens

Extract the R, G and B components:

>SPLIT_RGB R G B


The "R" image


The "G" image


The "B" image

Now compute the H, S and I transform. The syntax of RGB2HSI command is:

RGB2HSI [R layer] [G layer] [B layer] [H layer] [S layer] [I layer]

Here

>RGB2HSI R G B H S I


The "Hue" image


The "Saturation" image


The "Intensity" image

The inverse of the RGB2HSI command is the HSI2RGB command.
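The inverse conversion follows the standard sector-based formulas, with one of three 120° sectors of the hue circle selecting which channel takes the lowest value. Again a sketch of the standard math, not Iris source code.

```python
import math

def hsi_to_rgb(h, s, i):
    """Convert (H in degrees, S, I) back to RGB components in [0, 1].

    Standard sector-based inverse of the cylindrical HSI model; a
    sketch of the math behind HSI2RGB, not Iris source code.
    """
    h = h % 360.0
    if h < 120.0:                       # red is the dominant primary
        sector, order = h, (0, 1, 2)
    elif h < 240.0:                     # green dominant
        sector, order = h - 120.0, (1, 2, 0)
    else:                               # blue dominant
        sector, order = h - 240.0, (2, 0, 1)
    c = [0.0, 0.0, 0.0]
    c[order[2]] = i * (1.0 - s)         # weakest channel
    c[order[0]] = i * (1.0 + s * math.cos(math.radians(sector))
                       / math.cos(math.radians(60.0 - sector)))
    c[order[1]] = 3.0 * i - c[order[0]] - c[order[2]]
    return tuple(c)
```

For instance, H=240°, S=1, I=1/3 recovers pure blue (0, 0, 1).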

Example: increase the saturation of the Horsehead nebula. To do this, multiply the saturation image by a coefficient and return to the RGB space:

>LOAD S
>MULT 1.5
>SAVE S
>HSI2RGB H S I RR GG BB
>TR RR GG BB


The method used to increase saturation in image processing software.

Here is another example of the HSI transform, concerning the Messier 27 nebula:


True color image of M27 (Takahashi Epsilon 160 telescope + Audine KAF-0400 camera).


The R image


The G image


The B image


The H image


The S image


The I image

Below, some Moon images taken through interference filters. Instrumentation: Takahashi 5-inch refractor at f/10 + KAF-1600 camera directly at the focus.


The 400 nm image (blue spectral band).


True color image taken with 400, 560 and 910 nm interference filters. Contrast and hue are accentuated considerably by using the mathematical properties of the HSI space. Yes, the Moon is a colored object! This gives a unique and very useful framework for studying lunar geology.

Tip: you can also invoke the Saturation adjustment command of the View menu; the algorithm is very similar.

The following strategy can be used to compute an albedo image for science. Suppose you have a pair of color images (R & G, or R & B, etc.):
1. Calculate the spectral ratio of the two images (one is divided by the other). The result is the H image.
2. The I component is one of the monochromatic images (or the mean of the two images).
3. The S component is set to a constant level (1, for example).
4. Transform from HSI to RGB, then visualize the resulting tricolor image.
The result of this processing is an image whose level represents the albedo of the object, and whose color represents the spectral signature (here a spectral ratio).
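The four steps above can be sketched in numpy. The mapping of the spectral ratio onto a hue range of [0, 300] degrees is an illustrative choice, not an Iris convention, and `ratio_to_hsi` is a hypothetical helper name.

```python
import numpy as np

def ratio_to_hsi(im_a, im_b, eps=1e-6):
    """Build H, S, I planes from a pair of spectral-band images,
    following the albedo recipe. The hue scaling to [0, 300] degrees
    is an illustrative choice, not an Iris convention.
    """
    ratio = im_a / (im_b + eps)          # step 1: spectral ratio -> H
    h = 300.0 * ratio / ratio.max()      # ...mapped onto a hue range
    i = 0.5 * (im_a + im_b)              # step 2: mean intensity (albedo)
    s = np.ones_like(im_a)               # step 3: constant saturation
    return h, s, i                       # step 4: feed into HSI2RGB
```

The resulting planes would then be recombined with the HSI2RGB command (or its equivalent) to visualize the albedo/spectral-ratio image.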

More generally, the hue and saturation components define the chromaticity of a color. It is important to note that the chromaticity and the intensity of a color can be considered independently. The key interest of the HSI space is that it decorrelates hue and saturation from brightness. This is why, in HSI space, we can replace the I component with a new high-quality "Luminance" image. When returning to the RGB domain, we preserve the original colors, but with a boost in detail, a much cleaner image and a much higher SNR compared to the initial true color image. This is the basic principle of the popular LRGB technique. Click here for an Iris tutorial about LRGB.

Principal component analysis

The Principal Component Analysis (PCA) corresponds to a coordinate transformation of a color image that is represented in the space of fundamental colors (Red, Green, Blue). After the transformation, the axes are the eigenvectors of the covariance matrix of the three input images. The three resulting images are obtained by projecting the three starting axes (R,G,B) onto the three resulting axes. Without going into the mathematical details, it is interesting to choose this coordinate system because it defines three new images that are as uncorrelated from each other as possible, from the chromatic point of view.

The first axis, also called the principal axis, corresponds to the largest eigenvalue of the covariance matrix. Generally, this axis is very close to (but not coincident with) the "achromatic axis" (which is the axis of the "Intensity" image in the HSI transform). This axis contains most of the intensity information and is often close to the average of the input images.

The two other axes (ordered in decreasing eigenvalues) can thus be interpreted as linear combinations of the input images that lead to information that is not correlated to the first axis or to each other. The two corresponding images generally have much weaker dynamics, and are centered around zero. These images thus have a rather low signal to noise ratio, especially for deep sky images, and sometimes require a low pass filter (median type, for example) in order to be correctly visualized.

The interest in this transformation is:

(1) First, visualizing the three images in principal components allows a hierarchical classification of the information contained in the starting trichromatic image. This visualization can be done independently, or trichromatically (by putting the image of the first eigenvector in red, the second in green, and the third in blue). In this case, it is clear that the resulting image is not at all representative of the "true" colors of the image, nor is it very aesthetic, but it is the representation that gives the optimal visualization of the chromatic information in the image.

(2) Second, processing can be done in the space of principal components (filtering) and the results can then be brought back to the starting space (R, G, B) (with the PCA2RGB command) to obtain a visual improvement in the original trichromatic image. The transform is generally reserved for images with a good signal to noise ratio (for example, planetary images or bright planetary nebulae).
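The transform described above can be sketched in numpy: diagonalize the covariance matrix of the three channels and project the centered data onto the eigenvectors, largest eigenvalue first. This is a hypothetical helper illustrating the math, not Iris source code.

```python
import numpy as np

def rgb_to_pca(r, g, b):
    """Project three aligned channel images onto the eigenvectors of
    their covariance matrix (a numpy sketch of RGB2PCA, not Iris code).
    Returns the three principal-component images, principal axis first.
    """
    x = np.stack([r.ravel(), g.ravel(), b.ravel()])   # 3 x N data matrix
    mean = x.mean(axis=1, keepdims=True)
    cov = np.cov(x)                                   # 3 x 3 covariance
    vals, vecs = np.linalg.eigh(cov)                  # eigenvalues, ascending
    order = np.argsort(vals)[::-1]                    # principal axis first
    comps = vecs[:, order].T @ (x - mean)             # project centered data
    return [c.reshape(r.shape) for c in comps]
```

By construction the three output images are mutually uncorrelated, the second and third are centered around zero, and their variances decrease with the eigenvalue order, which matches the behavior described above.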

Application to an image of the planet Jupiter:


The color image.


The R component


The G component


The B component

Run the console command:

>RGB2PCA R G B C1 C2 C3
>LOAD C2
>VISU 300 -300

(note the use of a negative threshold because the C2 image is signed).

The produced C1, C2 and C3 images are the three principal components:


The C1 component


The C2 component


The C3 component

 

Very subtle colored details of Jupiter's atmosphere are now highlighted.

The inverse function exists: PCA2RGB.

Bicolor images

If only two color images of an object are available, it is easy to interpolate the data to synthesize a tricolor image. For example, if R and B are the only available channels, compute their mean to create an artificial G channel. The operation is worthwhile if there is some correlation between the R and B channels, which is often the case.


OBS_R.PIC image (R channel)


 OBS_B.PIC image (B channel)

Perform OBS_G = (OBS_R + OBS_B) / 2

>LOAD OBS_R
>ADD OBS_B
>MULT 0.5
>SAVE OBS_G
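The three console commands above amount to a simple per-pixel average, e.g. in numpy (`synth_green` is a hypothetical helper name):

```python
import numpy as np

def synth_green(r, b):
    """Synthesize an artificial green channel as the mean of the red
    and blue channels (the numpy equivalent of LOAD / ADD / MULT 0.5)."""
    return 0.5 * (r + b)
```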


The synthetic OBS_G.PIC image.


The original tricolor image.


The bicolor image.


INDEX