V. Basic Reductions for Imaging Data


A) Image Reduction Overview

The first goal of image reduction is to correct two types of errors in the CCD data: additive and multiplicative errors. As you will recall from section III. B., we are thinking of an image as a matrix of numbers, each number representing the brightness in that pixel. Additive errors add to the value in a pixel, while multiplicative errors multiply it. To correct additive errors, we subtract something from the image; to correct multiplicative errors, we divide the image by something.

Additive errors arise from two primary sources: bias offset and dark current. In this document, I will leave the detailed description of what gives rise to these effects to class discussion and focus on the IRAF methods of correcting them. There are two types of calibration images that can be used to correct these additive errors: dark frames and bias frames. A dark is simply an image taken by the CCD, with the shutter closed, for the same exposure length as the exposure it is meant to correct (it should also be taken at the same CCD operating temperature). A dark corrects both bias offset and dark current. A bias frame is essentially a zero length dark frame, so it corrects the bias offset, but not dark current (note: a bias frame is also called a zero frame). Why the two methods? Research CCDs are cryogenically cooled (usually to liquid nitrogen temperature, 77 K), so their dark current is negligible; data from a cryogenically cooled CCD need only be corrected with a bias frame. A CCD which is cooled, but not to cryogenic temperatures (such as those we use at SBO), needs to be corrected for both bias offset and dark current, so we will use dark frames instead of bias frames.

Multiplicative errors can arise from several sources: differences in quantum efficiency from pixel to pixel, illumination differences (vignetting), and dust halos (aka dust doughnuts). All of these represent a difference in sensitivity from pixel to pixel across the chip, so the values in less sensitive pixels need to be multiplied up to match the more sensitive pixels. To correct this, we take a calibration image called a flat field. The flat is simply an image of an evenly illuminated field (usually a white spot on the inside of the dome).

The mathematical representation of the basic reductions to correct additive and multiplicative errors is:

final_image =        (raw_image - dark1)
              ---------------------------------
              (flat - dark2) / <flat - dark2>

where dark1 is the master dark matching the exposure time of the raw image, dark2 is the master dark matching the exposure time of the flat, and <flat - dark2> is the average (or mode) of the dark-subtracted flat, which normalizes the flat to values near one.

There is actually one more source of errors in images: cosmic rays. Cosmic rays are high energy particles which pass through the CCD detector and deposit large amounts of energy, mimicking the energy deposited by the photons which CCDs are designed to detect. The best way to pick out cosmic rays is to take multiple images; any bright pixel which shows up in a given position in only one image is a cosmic ray.


B) Combining the Darks

Any time we add, subtract, multiply, or divide two images, we are also adding the noise in the two images. This is undesirable in that we will end up with larger noise in our science exposure if we apply the corrections described above. To combat this, we take many calibration exposures and combine them to make a single calibration exposure (be it a dark, bias, or flat). This reduces the noise in our calibration exposure, so that it will hopefully add only a negligible amount of noise to our science exposure. I recommend taking at least three, and preferably five or more, of each of your calibration images (the reason for taking an odd number will become clear later) and then combining those images into a single master calibration image.

When we combine these calibration images, we will not perform an average as one might expect. Instead we will median the images, meaning that the value in each pixel of the final image is the median of the values in that same pixel across the input images. The advantage of median combining is that when one image has a very discrepant value (e.g. from a cosmic ray), it doesn't dramatically affect the resultant value (a median is more resistant to a single discrepant value than an average). If you've taken at least three images, this will reject almost all cosmic ray events. Note that when there is an even number of input values, the median is the average of the two middle values (thus the motivation for taking an odd number of images to combine).
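
For example (with made-up numbers), suppose a given pixel has values of 210, 215, and 4800 counts in three input darks, where the 4800 comes from a cosmic ray hit. The median of the three values is 215, a perfectly reasonable value for that pixel, while the average would be about 1742, badly contaminated by the cosmic ray.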

The first step in our reduction scheme is to combine all of our darks of each exposure length into a single master dark. If you've completed an observing run in which you've taken science images with exposure times of 300, 180, and 60 seconds, then you should have at least three darks at each of those exposure lengths, for a total of 9 darks for your science images. You will also have darks for the exposure lengths of your flat field exposures (usually a different exposure length in each filter), so if you used 3 filters and have three darks for each of those flat exposure lengths, then you get 9 more darks for a total of 18. Once combined, you would have 6 master dark frames.

Let's say you have used a naming convention for your dark images of ##dark_NNNs.fits, where ## is the exposure number 01-99 as you go through the night, and NNN is the exposure time. For example, the 300 second darks might be named 67dark_300s.fits, 68dark_300s.fits, and 69dark_300s.fits. Let's also assume that you've made a list file (see section IV. A.) containing these image names called list_dark_300s. We will use the imcombine task to do the image combination. In using imcombine, we will set the parameter combine to median, to make the image combination process do a median combine rather than an average. You can either change the parameter using epar, or set it on the command line. Looking at the help page for imcombine, we see that the usage format is imcombine input output. Thus, we will run it as:

cl>  imcombine @list_dark_300s dark_300s combine=median

The task will print various messages to the screen about its progress. Our output image (the master dark frame for 300 second exposures) will be called dark_300s.fits. We would then repeat this process to create a master dark for each exposure length.
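
For example, continuing the hypothetical observing run above (the list file names here are assumptions following the same naming convention), the remaining master darks would be made with:

cl> imcombine @list_dark_180s dark_180s combine=median
cl> imcombine @list_dark_060s dark_060s combine=median
cl> imcombine @list_dark_010s dark_010s combine=median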


C) Subtracting Darks

Flat field images also contain bias offset and dark current, so they need to be corrected by subtracting a dark. Let's say we're working on a set of flat field images in the V filter and that they were each 10 second exposures. We'll use the naming convention ##flat_F_NNNs.fits, where ## is the exposure number, F is the filter, and NNN is the exposure time. Thus our hypothetical flat images might be called 70flat_V_010s.fits, 71flat_V_010s.fits, and 72flat_V_010s.fits, and we've typed these names into a list file called list_flat_V_010s. To subtract the 10 second master dark from all of these images we will use the imarith task. Looking at the help page, we see the usage format is imarith operand1 op operand2 result. We could create a list file which contains the names of the output images and call them 70flat_V_ds.fits, 71flat_V_ds.fits, and 72flat_V_ds.fits, where ds stands for dark subtracted. You could also simply overwrite the original images by giving the input list file as the output list file (I'll do this in the example below).

cl> imarith @list_flat_V_010s - dark_010s @list_flat_V_010s

Now our flat images are dark subtracted.
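
If you would rather keep the original flat images, a sketch of the alternative (assuming you've made a second list file, say list_flat_V_ds, containing the _ds output names described above) is:

cl> imarith @list_flat_V_010s - dark_010s @list_flat_V_ds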

While we're at it, let's subtract darks from our science images. The naming convention for our science images is ##obj_F_NNNs.fits, where ## is the exposure number, obj is the object name, F is the filter, and NNN is the exposure time. For example, if we had three images of the Orion Nebula (M 42), the images might be 01M42_V_120s.fits, 02M42_V_120s.fits, and 03M42_V_120s.fits, and we'd have a list file of these three files called list_M42_V_120s. To subtract the darks:

cl> imarith @list_M42_V_120s - dark_120s @list_M42_V_120s
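
This step is repeated for each object, filter, and exposure time, always subtracting the master dark whose exposure time matches the science images. For example, if you had also taken 300 second images of M42 through an R filter (the list file name here is hypothetical, following the same convention):

cl> imarith @list_M42_R_300s - dark_300s @list_M42_R_300s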

D) Combining the Flats

Now we need to combine the flats from each filter. Combining flats has one additional twist: if, during our flat field exposures, the lamps illuminating the white spot on the dome flickered or faded, the multiple flats we took in each filter would have slightly different average values throughout the image. This would throw off the median combine in that it would always take the pixel value from whichever image has the middle overall brightness level, and thus not reduce the noise. To fix this, we apply a small multiplicative correction to each flat field before combining. We can do this by setting the scale parameter in imcombine to mode. This means that before combining, the images will be multiplied by a factor (close to one) which makes the mode of each image the same.

cl> imcombine @list_flat_V_010s flat_V combine=median scale=mode

Now we have a single master flat for the V filter.
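
The same command is repeated for the flats in each of your other filters (the list file name below is an assumption for illustration, following the same convention):

cl> imcombine @list_flat_R_010s flat_R combine=median scale=mode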


E) Normalizing the Flats

We are eventually going to divide the science image by the flat; however, our flat image has a large number of counts per pixel (we took an image of a brightly illuminated screen), whereas our science images have relatively few counts per pixel (astronomical objects are faint). If we simply divided one by the other, all of the pixels in the resultant image would have very small values, determined as much by the pixel values in the flat as by the pixel values in the science image. Ideally, we'd like to modify the counts in the object frame just enough to correct the multiplicative errors without significantly changing its pixel values. We do this by dividing the flat field image by a constant, bringing its pixel values close to one, before dividing the object frame by the flat field.

There are different ways of choosing the number by which we'll divide the flat field image. For images with no significant vignetting, simply dividing every pixel in the flat field image by the mode of the image works well. We get the mode from the imstat task (if the MODE column does not appear in your output, add mode to imstat's fields parameter):

cl> imstat flat_V
#               IMAGE      NPIX      MODE      MEAN    STDDEV       MIN       MAX
               flat_V    173400    49996.    49496.     1069.    45694.    52944.

Now divide the flat by the mode:

cl> imarith flat_V / 49996.0 flat_V_norm

Now we have a normalized flat in which most pixels have values near one.
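
As a quick sanity check, you can run imstat on the normalized flat; the MEAN and MODE columns should now be close to 1:

cl> imstat flat_V_norm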


F) Dividing by the Flats

Now we have dark subtracted object frames and flat frames which have been dark subtracted, combined, and normalized. To correct the images for multiplicative errors we just divide:

cl> imarith @list_M42_V_120s / flat_V_norm @list_M42_V_120s

Now our science images have been corrected for both additive and multiplicative errors.
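
As with the earlier steps, this is repeated for each filter, dividing by the normalized flat that matches the filter of the science images. For example, for the hypothetical R filter images used above (this assumes you have also normalized flat_R by its own mode to make flat_R_norm):

cl> imarith @list_M42_R_300s / flat_R_norm @list_M42_R_300s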


G) Aligning Multiple Images for Stacking

Many times we will need to build up signal to noise using long exposures. Unfortunately there are some obstacles to making extremely long exposures, including imperfect tracking. At SBO, the longest exposure we can take without stars trailing is between 3 and 5 minutes. This is insufficient for many faint objects, especially using narrowband filters. The way to get around this is to take several 300 second exposures and combine them.

Unlike darks and flats, combining several science exposures requires an extra step: alignment. The images will be offset slightly from one another, so we need to shift them into alignment. To do this we use the imalign task. Looking at the help page for imalign, we see that the usage format is imalign input reference coords output. We will also use the optional shifts parameter. I'll discuss these inputs one at a time; a sketch of an example command follows the list.

input -- the list of images to align.
reference -- the reference image to which all others will be aligned.
coords -- a text file containing the (x, y) coordinates of stars in the reference image; these stars should be visible in all of the images and will be used to determine the shifts necessary to align them.
output -- the list of output image names.
shifts -- a text file, one line per input image, containing an estimate of the shifts between the images. Use one star, which you can identify in each image, to determine an estimate of these shifts. This extra input parameter is necessary in almost all cases because the images probably will not be very closely aligned.
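
To make this step concrete, here is a minimal sketch with assumed file names: the reference image is 01M42_V_120s.fits, the (x, y) positions of a few stars in the reference image (measured, for example, with imexamine) have been typed into a file called coords_M42_V, the estimated x and y offsets of each input image relative to the reference are in shifts_M42_V (one line per image, in the same order as the input list, with the reference itself shifting by zero), and a list file list_M42_V_al contains the output names for the aligned images.

A hypothetical coords_M42_V:

512.3  498.7
233.1  105.4
801.9  650.2

A hypothetical shifts_M42_V for three input images:

0.0    0.0
-12.0  8.0
5.0   -15.0

The task would then be run as:

cl> imalign @list_M42_V_120s 01M42_V_120s coords_M42_V @list_M42_V_al shifts=shifts_M42_V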

H) Combining Multiple Images

Once the images are aligned using imalign, we combine them using imcombine, much as we did when combining flats (if you wrote the aligned images to new file names, as in the sketch above, give imcombine the list of those aligned names). You may also wish to set combine=median here to reject cosmic rays, as discussed above.

cl> imcombine @list_M42_V_120s M42_V

M42_V is now our final image in the V filter.
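
The aligned images in your other filters (for example, the hypothetical R filter set used above, with its own aligned-image list) would be combined the same way to produce one final image per filter:

cl> imcombine @list_M42_R_al M42_R combine=median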



Copyright © Josh Walawender