SaratogaSkies Jim Solomon's Astropics


Jim Solomon's Astrophotography Cookbook

(v2.0.3, Last Updated: 6/6/06)




Preface

This is nearly a complete re-write of the previous version of my astrophotography cookbook. The fundamental changes between this version and the previous are:

For those of you who know and love the previous version of the cookbook, fear not, for it is still available here. I think you'll find enough improvements in this version, however, to make it well worth your while to read.

Introduction

I believe my astrophotography technique is now producing results that are at the limit of what's possible with my rather modest equipment. I therefore offer this "How To" guide to other astrophotographers who are attempting to climb the learning curve, and who would like to get the most out of similar equipment. I hope such folks will find this useful.

This is exclusively a "How To" for long-exposure astrophotography of Deep Space Objects (DSOs); i.e., anything requiring a very long, tracked, exposure to adequately capture. This is therefore specifically NOT a "How To" for planetary photography, mostly because I don't consider myself very good at it, and also because the technique is so radically different from DSO photography that it needs its own treatment elsewhere.

Note also that this document does not delve very deeply into the theoretical underpinnings of digital astrophotography, or even the "basics" of astrophotography. As such, it is assumed that the reader is already familiar with and understands the following concepts:

See some of the excellent Introductions to digital astrophotography available on the web for more information on these topics.

The sections below are broken down as follows. First, I introduce some terminology to make sure we agree on the definitions of some commonly used terms. Then I provide some theoretical background which helps explain the relation among Lights, Darks, Flats, etc. Next I give a brief synopsis of my equipment. And finally I describe in detail the three phases of my astrophotography technique; namely, planning, acquisition, and processing.


Definitions

Here are a few terms used throughout this guide, which I define here to make sure we're all in agreement on what they mean:


Background

Many newcomers to digital astrophotography are confused by the notion of Lights, Darks, Offsets, and Flats, so here I'll give a very brief background on these concepts.

The CMOS or CCD imaging chip in most Digital SLRs will very faithfully and linearly collect light from the object you're trying to image. Unfortunately, the "signal" collected from the target object will get degraded by thermal noise and other noise sources. Darks and Offsets are the means by which we try to characterize and mitigate the effects of these noise sources. Also, the telescope/lens optical train may not fully illuminate the imaging chip, depending on its size, resulting in a phenomenon called vignetting, a darkening of the image toward the edges of the field. Further, the individual sensors (pixels) of the imaging camera are likely to have slightly different sensitivities. Flats are the means by which we try to characterize and mitigate the effects of vignetting and uneven sensitivity, and further the means by which we mitigate the effects of dust that might have settled on the imaging chip in the camera.

The formula that relates these physical phenomena to the actual frames we'll collect over a night of imaging is as follows:

(1) Light = (Signal * Flat Signal) + Dark + Offset

where Signal is the image of the target object we wish we could collect under ideal circumstances, and Light is the image we actually captured. Rearranging the terms, we have:

             Light - (Dark + Offset)
(2) Signal = -----------------------
                   Flat Signal

But realize that the Flats we capture with the camera will, in turn, be "polluted" by Darks and Offsets in their own right, and so we must subtract Flat Darks and Flat Offsets from the Flat Lights as follows:

(3) Flat Signal = Flat Light - (Flat Dark + Flat Offset)

So, plugging equation (3) into equation (2), yields this general formula:

                    Light - (Dark + Offset)
(4) Signal = --------------------------------------
             Flat Light - (Flat Dark + Flat Offset)

Here, "Dark" refers to the thermal noise signal of the imaging camera; i.e., the noise signal that varies in proportion to temperature, ISO, and exposure length. Note, however, that any exposure we take with a digital camera contains the Offset, and "Darks" are no exception. So, if we define Dark' to be an exposure of some length with the body cap in place, then Dark' = Dark + Offset, and, similarly, Flat Dark' = Flat Dark + Offset. Plugging these values into Equation 4 yields the following simplified form:

                  Light - Dark'
(5) Signal = -----------------------
             Flat Light - Flat Dark'

And just to make things even simpler, let's drop the prime indicators (the apostrophes) that we stuck on "Dark" and "Flat Dark", and just remember that by "Dark" and "Flat Dark" we mean frames captured with the body cap in place but with the same ISO and exposure length as the Lights and Flat Lights, respectively. That gives us our final form:

                  Light - Dark
(6) Signal = ----------------------
             Flat Light - Flat Dark

Equation 6 gives us our marching orders for astrophotography, providing us with a set of Frames that must be captured for each imaging session. The actual order in which I choose to capture these frames is as follows, the reasons for which will be made clear in the acquisition section below:

  1. Flat Darks
  2. Flat Lights
  3. Lights
  4. Darks
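Equation 6 maps directly onto simple array arithmetic. Here's a minimal sketch in Python/NumPy (purely an illustration, not part of my workflow; the frames and values are made up, and the flat is normalized to mean 1.0 so the calibrated frame keeps the Light's brightness scale):

```python
import numpy as np

def calibrate(light, dark, flat_light, flat_dark):
    """Apply Equation 6: Signal = (Light - Dark) / (Flat Light - Flat Dark).

    All inputs are same-shaped arrays of raw sensor values. The flat
    term is normalized by its mean so the calibrated frame keeps
    roughly the same brightness scale as the original Light.
    """
    flat = flat_light.astype(float) - flat_dark
    flat /= flat.mean()            # normalize flat to mean 1.0
    return (light.astype(float) - dark) / flat

# Toy 2x2 example: a uniform target, with 50% vignetting in one corner
light      = np.array([[110., 110.], [110.,  60.]])  # signal*flat + dark
dark       = np.array([[ 10.,  10.], [ 10.,  10.]])
flat_light = np.array([[210., 210.], [210., 110.]])  # flat + flat dark
flat_dark  = np.array([[ 10.,  10.], [ 10.,  10.]])

signal = calibrate(light, dark, flat_light, flat_dark)
# The vignetted corner is restored: signal is uniform across the frame
```

Note how the darkened corner comes out identical to the rest of the frame after division by the flat; that's precisely the correction Equation 6 promises.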

Equipment

Most of my recent DSO astrophotos (from here on I'll drop the adjective "DSO", since this entire document deals with DSOs) are acquired with a Modified Canon Rebel XT (350D) at Prime Focus of my Celestron 8" f/5 Newtonian. Here's the list of equipment that comes into play in this configuration:

Cameras:
    Imaging: Canon Digital Rebel XT (350D) Digital SLR modified by Hutech (Type I)
    Guiding: Philips ToUcam Pro II 840k webcam
Mount:
    Celestron Advanced Series with GoTo (aka AS-GT)
Telescopes:
    Imaging: Celestron C8-N: 8" f/5 Newtonian Reflector, fl=1000mm
     => upgraded focuser to JMI NGF DX3 low-profile model
    Guiding: Orion ST80: 80mm f/5 Achromatic Refractor, fl=400mm
Guide scope mounting:
    Orion 07381 Guide Scope Rings (pair), 105mm I.D.
    Orion 07382 Guide Scope Ring Mounting Bar
Adapters, filters, etc.:
    T-Ring: Orion 05224 for Canon EOS cameras
    Coma Corrector: Baader MPCC
    Barlow: Celestron "Kit" 2x Barlow (used with Guide Scope)
    Extension: Orion 05123 1.25" Extension Tube (to reach focus with webcam)
Computers:
    Guiding, Focusing, and Acquisition: Toshiba TECRA 8100 laptop
    Processing: Custom built PC with 3GHz P4, 2GB RAM, and WinXP Pro
Software:
    Guiding: GuideDog v1.0.6
    Focus and Acquisition: DSLRfocus v3.3.14 Beta
    EXIF Preview et al.: Canon Digital Photo Professional
    Processing: IRIS v5.30, Photoshop CS2
Cables:
    Webcam to Laptop: USB cable that comes attached to webcam
    DSLR to Laptop:
        Long exposure control: C300P-20 Parallel port to shutter control cable
        Focus and framing: USB cable that comes with 350D
    Laptop to Mount:
        Serial port to AS-GT Hand Controller RJ-22 port: Celestron 93920
Power Supplies:
    Mount: Celestron 18773 (Discontinued, thankfully. Use Celestron 18776 instead.)
    DSLR: Hutech EOS104 (same as Canon ACK700)
    Laptop: included A/C Adapter
Light Box:
    Donald Goldman's Light Box design.

    Note: The Celestron C8-N (and, for that matter, the very similar Orion Skyview Pro 8) will not reach focus with the MPCC (and, likely, other coma correctors) and the stock focuser. The reason is that the stock focusers of these Newtonians come with 2" eyepiece adapters (into which the MPCC is inserted) that are too high in profile, resulting in insufficient "in travel" for the camera to reach focus. Those wishing to adapt the focusers of these scopes to use the MPCC or the like have several choices, two of which are listed here:

    1. Cheap and Very Easy: Replace the black plastic ring on the stock focuser's drawtube with the 2TUA from Island Eyepiece. (Note: you might need to wrap the 2" barrel of the MPCC with Kapton tape (or equivalent) in order to make a snug fit between the MPCC and 2TUA.)
    2. More Expensive and More Work: Replace the stock focuser entirely with a low-profile one such as the JMI NGF DX3. This is what I did.

    Planning

    As will become clear, my setup and acquisition are quite involved and, therefore, time consuming. So I like to have as much work done "up front" as possible before going out for the night. The better the planning, the better things will go when shooting. The activities in this stage are as follows:

    1. Pick a target. Use charting software or the many available catalogs to pick a suitable target. In particular, I try to pick large objects that fill the camera's Field of View; bright objects that have a reasonably high mean surface brightness; and objects that are well placed. Pay particular attention to when the object transits and on which side of the Meridian you'll be shooting it.
       
    2. Choose a camera orientation. Determine if the object has greater extent in the East-West direction or the North-South direction, and be sure to orient the long dimension of the camera's sensor in that direction when attaching it to the imaging scope. I prefer to orient the camera in the "North is up" direction in all cases, unless the object begs for a "North is left" orientation. Examples of the latter include M81/M82, M42, and others.
       
    3. Pick a candidate Guide Star. It will speed the process of Guide Star acquisition to have a rough idea of which Guide Star you're going to use and to know how far, and in which direction, that Guide Star lies from the center of the target. Note that the further the Guide Star is from the center of the target, the more accurate your Polar Alignment will need to be. See the section on Polar Alignment to learn why this is the case.
       
    4. Devise a plan to find the target. Do you have a GoTo mount with scary-good accuracy? Do you have Digital Setting Circles? Are you a Star-Hopping Demon? In any case, you'll need to figure out a way of centering the target in your imaging camera without removing the camera. (Why? — because you're not allowed to remove the camera between taking your first Flat Light frame and your last Light frame.) My favorite method is to use a home-brewed spreadsheet that serves a similar function to the Precise GoTo function in my mount. I home-brewed this because the AS-GT's Precise GoTo does not allow me to pick the reference object, and sometimes it, sigh, picks reference objects on the wrong side of the Meridian. Or, just as bad, it picks stars you've never heard of and can't locate. I prefer to use an unmistakable object as the reference object, center it, and then use the spreadsheet to compute the RA and DEC offsets to the actual target from the reading on the Hand Controller's "Get RA/DEC" function.
       
    5. Take an educated guess at ISO and exposure time. This can generally be done "in the field", but if you have the time, it never hurts to research what other folks have used to shoot the same target. Or to look through your own portfolio and see what seems to work and what doesn't. I tend to lean toward 4min sub-exposures @ ISO 400, as that leaves a very large dynamic range (lack of clipping of bright objects), and 4min is usually long enough to capture a decent amount of detail in each sub-exposure. Also, at 4min, if an airplane flies through and ruins a frame, like, so what?, it's only 4min. Dimmer targets will require longer exposures and/or higher ISOs. You'll need to experiment with this to see what works best for you.
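    The offset arithmetic behind the spreadsheet in step 4 boils down to two subtractions, with RA wrapped at 24 hours so the slew takes the short way around. Here's a sketch of one way to do it (a hypothetical helper, not my actual spreadsheet):

```python
def goto_offsets(ref_ra_h, ref_dec_d, tgt_ra_h, tgt_dec_d):
    """Return (delta_ra_hours, delta_dec_degrees) between a centered
    reference object and the target, to be applied to the Hand
    Controller's "Get RA/DEC" reading. The RA delta is wrapped into
    [-12, +12) hours so the slew always goes the short way around.
    (Hypothetical helper for illustration only.)
    """
    d_ra = (tgt_ra_h - ref_ra_h + 12.0) % 24.0 - 12.0
    d_dec = tgt_dec_d - ref_dec_d
    return d_ra, d_dec

# Example: reference star at RA 23h30m Dec +10, target at RA ~0h10m Dec +12
d_ra, d_dec = goto_offsets(23.5, 10.0, 0.1667, 12.0)
# d_ra is a small positive value (~0.67h), not a ~23-hour slew
```

    The wrap matters precisely in cases like the example, where the reference and target straddle RA 0h.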

    Acquisition

    My acquisition process consists of the following distinct phases:

    1. Setup
    2. Polar Alignment
    3. Acquire Flat Darks
    4. Focus
    5. Acquire Flat Lights
    6. Acquire Target
    7. Acquire Guide Star
    8. Take a Test Shot
    9. Acquire Lights
    10. Acquire Darks

    Setup

    The setup phase can and should be done during the daytime. Basic setup of your mount, scope, laptop, etc., is way beyond the scope of this document, so I'll focus on the astrophotography-specific aspects of the setup:

    0. Verify the collimation of the optics.

    1. Setup and Configure the Imaging Camera and Imaging Scope:

    2. Setup and configure the Guiding Camera and Guide Scope:

    3. Connect the cables:

    4. Balance the scopes in the mount:

    Here's a little more background on why I intentionally misbalance the scope ever-so-slightly in RA and DEC. The AS-GT mount always approaches a target in a consistent direction when doing a Go-To, in order to minimize the impact of backlash. For the Northern Hemisphere, that direction is the same as the direction the mount moves when pressing the "Up" and "Right" arrows. So, I intentionally misbalance in DEC ever-so-slightly, such that the misbalanced weight is acting in opposition to the action of the Up arrow. So, for example, if I'm shooting an object West of the Meridian, the scope will be on the East side of the mount, and the Up arrow will move the scope South. Therefore, I like the extra weight to be on the North side of the mount in DEC, so that the Up arrow is pushing against that weight. Similar arguments apply in RA, but the directions are more straightforward. Since the mount is always tracking the movement of the stars in the Westward direction, the East side of the mount should be ever-so-slightly heavier in RA in order for the gears to be pulling against that weight.

    Polar Alignment

    In a guided system, one need not worry about RA or DEC drift, especially if the guiding software is guiding the mount in both RA and DEC. However, Polar Misalignment causes Field Rotation in addition to RA and DEC drift, and it is this rotation that is problematic. In general, field rotation gets worse as the Polar Alignment error gets larger. It also gets worse for objects close to the poles (i.e., DECs near +90° or -90°). And it becomes more problematic the further the Guide Star is from the center of the imaging camera's field. The latter is so because the field will appear to rotate around the Guide Star, and the further it is away from the center of the imaging camera's field, the more that field will tend to "slide off the imaging camera" over the course of the night's image acquisition.
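    The "slide off the imaging camera" effect is simple geometry: if the field rotates by a small angle about the Guide Star, a point a distance r away from the Guide Star moves by roughly r times that angle (in radians). A quick sanity check, with numbers invented purely for illustration:

```python
import math

def rotation_drift_arcsec(sep_arcmin, rot_deg):
    """Displacement (arcsec) of a point sep_arcmin away from the Guide
    Star when the field rotates rot_deg about the Guide Star.
    Small-angle approximation: d = r * theta, theta in radians."""
    r_arcsec = sep_arcmin * 60.0
    theta = math.radians(rot_deg)
    return r_arcsec * theta

# A guide star 1 degree (60') off-center, with a quarter degree of
# field rotation over the session, slides the frame by ~16 arcsec:
drift = rotation_drift_arcsec(60.0, 0.25)
```

    At typical DSLR image scales of a few arcseconds per pixel, that's several pixels of slide across the night, which is why a distant Guide Star demands a tighter Polar Alignment.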

    So, how accurate must the polar alignment be? Well, accurate enough for the task at hand. Certainly it must be accurate enough so that no discernible field rotation occurs over the duration of a single exposure. And it must be accurate enough to prevent the field from rotating over the course of the night to the point where there's very little intersection among all of the night's Light frames.

    Generally, I like to Drift Align to the point where I see no discernible drift over a span of 4 to 5 minutes. Many people will see this as overkill, and they're probably right. But, hey, this is my astrophotography cookbook! <grin> I've drilled indentations into the pavement so I can replace the mount's tripod legs in the exact same position each time I set up. On most nights, this gets me "close enough" that I don't even bother to Drift Align at all. Of course, I did a "scary accurate" Drift Align before I drilled those indentations in the first place. On nights where I'm shooting close to the pole (M81, for example), or on nights that I need to wait for the target to transit, I'll spend the intervening time honing my Polar Alignment with a Drift Alignment.

    Drift Alignment isn't really as scary as most people think it is. You should learn how to do it. My current method involves using the webcam with GuideDog software running (but not guiding!) so I can view a star on my laptop's screen. I turn on GuideDog's cool "double reticle" which makes it very clear if the star is drifting. Be absolutely sure to align the webcam North-South and East-West if using this method. Be accurate! Slew the mount in RA and DEC and make sure the star follows the reticle; otherwise, the camera orientation is off. Also, be sure you know "which way is North" when looking at the laptop's screen. One of my favorite tutorials on Drift Alignment can be found at Andy's Shotglass (click on the Drift Alignment link).

    Acquire Flat Darks

    Remember, Flat Darks are used to remove noise, particularly the Offset, from Flat Lights. As such, they must be taken at the same ISO, Tv (exposure time), and, ideally, temperature as the Flat Lights. But as we'll see in the Acquire Flat Lights section, we're not allowed to touch or even breathe on the camera from the moment we take our first Flat Light through the moment we take our last Light. Therefore, our only two choices for Flat Darks are to take them before the Flat Lights, or after the Lights (actually, after the Darks, since there are good reasons, explained later, for taking Darks immediately after Lights). But taking Flat Darks before Flat Lights requires some a priori knowledge of the ISO and Tv we're going to use for Flat Lights.

    As will become evident in the Acquire Flat Lights section, such knowledge is only practical when using a Light Box on a familiar scope. The bottom line, therefore, is as follows:

    1. If you know with certainty the ISO and Tv you'll use for your Flat Lights, then take your Flat Darks now, i.e., before Focusing and taking your Flat Lights.
    2. Otherwise, take them as the last step in your acquisition stage; i.e., after you've acquired your Darks.

    Here's my procedure for Flat Darks:

    There's no need to enable Mirror Lock for Flat Darks.

    Focus

    There are wonderful tools available to DSLR users to help them focus, and DSLRfocus is one such tool that I use for focusing. Spend some time at this stage to focus as accurately as possible. You'll be glad you did. Take an extra 5min to really nail the focus — I assure you that N perfectly focused frames will give you a better result than N+1 poorly focused ones! Here's my procedure:

    As mentioned above, Mirror Lock for focusing shots is essential on Newtonians like mine, which tend to flex and vibrate too much. Without Mirror Lock, most focusing shots — exposures on the order of a second or so — contain stars that are a smeared mess, making it impossible to determine when optimal focus has been reached.

    Therefore, when using my Newtonian and the 350D — a camera for which Mirror Lock is not supported by DSLRfocus in Focus Mode — I use a different strategy for focusing. Specifically, I take fairly long, guided, shots of fairly bright stars, and look to see how well the diffraction spikes are resolved. By fairly long shots, I mean exposures on the order of 15 seconds or even longer. And, for a cheap mount with lots of periodic error, guiding such a "long" exposure helps eliminate the mount as a source of error and frustration. Ideally, though, you want to use Mirror Lockup and use DSLRfocus' analysis tools to determine when optimal focus has been reached.

    Acquire Flat Lights

    As hinted in previous versions of this cookbook, my process for acquiring flats has changed. Specifically, I'm now using a Light Box instead of the twilight sky for illuminating my Flats. More on this below.

    First, some background. The purpose of Flats is to characterize the optical system as accurately as possible, so that later, in processing, you can correct for things like vignetting (uneven illumination of the optics, particularly a darkening at the edges of the camera's field). A good set of Flats will also correct for dust spots on the camera's sensor. The ideal Flat is taken of a (perfectly) uniformly illuminated target, with the camera/scope focused at Infinity.

    The twilight sky makes a reasonable approximation to this uniformly illuminated target, and, in particular, the twilight sky a few hours East of Zenith makes for a fairly good approximation. But recently I've concluded that a Light Box is far more convenient than the twilight sky for taking Flats. Here are just a few reasons I prefer a Light Box:

    Due to their role in astrophotography, Flats must match the Lights as closely as possible, and so we impose the following Fundamental Rule of Flats:

    Thou Shalt Not remove, adjust, or otherwise mess with the imaging camera from the time the first Flat Light is collected through the time the last Light is collected.

    In practice what this means is that the imaging camera must be properly seated, aligned, and focused at Infinity before collecting Flat Lights. The seating and alignment we covered in the Setup section above, and the focusing part we covered in the Focus section.

    Here are the steps I use to collect my Flat Lights:

    Here are some additional notes on properly exposing Flat Lights. Underexposing a Flat Light will result in extra noise being imparted to your Lights when dividing those Lights by the Master Flat in processing. Overexposing the Flat Lights will render them completely useless. So, we need to be careful about exposing them "just right". I believe the "just right" point is that exposure time which produces a median value in the center of the camera's range. This requires further explanation.

    The Canon Digital SLRs (350D included) have a 12-bit A/D converter for each sensor, which means that each sensor will produce a digital number in the range 0–4095 when shooting in RAW mode. The mid-point of this range is 2048, and therefore a Median of 2048 is the target value for the Flat Light exposure. One way to determine the Median value of an exposure is to use IRIS' stat command.

    Thus, using the steps outlined in the Processing section below, load the Flat Light into IRIS, convert it from CFA to RGB, and run the stat command. In the Display window, IRIS will report several statistics about the image on a per-color basis, one of which is the Median value. Increase/Decrease the exposure time until the Median is in the neighborhood of 2048 — a tad higher or lower won't hurt, though my preference is for a tad higher rather than a tad lower. See also Verify Proper Exposure of your Flat Lights for some additional explanation on this.
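    The same median check is easy to script outside IRIS if you prefer. Here's a hedged sketch in Python/NumPy, using a synthetic array in place of a decoded CFA frame; the per-color slicing assumes an RGGB Bayer layout, which you'd need to confirm for your own camera:

```python
import numpy as np

def cfa_medians_rggb(cfa):
    """Per-color medians of a CFA (Bayer) array, assuming RGGB layout.
    (The layout is an assumption; verify it for your camera.)"""
    return {
        "R":  np.median(cfa[0::2, 0::2]),
        "G1": np.median(cfa[0::2, 1::2]),
        "G2": np.median(cfa[1::2, 0::2]),
        "B":  np.median(cfa[1::2, 1::2]),
    }

def flat_exposure_ok(cfa, target=2048, tolerance=400):
    """True if every channel's median sits near the middle of the
    12-bit (0-4095) range -- the 'just right' flat exposure."""
    return all(abs(m - target) <= tolerance
               for m in cfa_medians_rggb(cfa).values())

# Synthetic well-exposed flat: every sensor value near 2100
cfa = np.full((100, 100), 2100)
```

    The tolerance of a few hundred counts reflects the "a tad higher or lower won't hurt" guidance above; tune it to taste.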

    At risk of beating this topic to death, there's one more point that needs to be made about exposing Flat Lights. Do not use the histogram indicator on the camera's LCD screen to determine the proper exposure of your Flat Lights! In that screen the camera is reporting the histogram of the exposure after applying a Gamma (contrast stretching) function to the image data, and therefore is very much not reporting the histogram of the "linear" image data captured directly into the RAW file. In fact, through experience and through using IRIS as recommended above, you'll find that the proper exposure for your Flat Lights appears massively overexposed if you use the LCD screen on the camera as an exposure guide. This is normal, and should not deter you from properly exposing your Flat Lights.

    Acquire Target

    There are many ways to find and center the target in the field of the imaging camera. For really bright objects, I just center them using the imaging camera's viewfinder. M42 is an example of one such target.

    For more elusive targets, I use my Precise GoTo Spreadsheet to compute the RA and DEC offsets (deltas) between the Target Object and some bright, nearby, easy-to-find, unmistakable, Reference Object. The Reference Object is almost always a nearby bright star or planet. I then center the Reference Object in the imaging camera's field and then pull up "Get RA/DEC" in the mount's Hand Controller. Then I subtract the offsets from my spreadsheet, and slew the mount to the resulting coordinates. In all cases, the slews must finish with the Up and Right arrow keys, in order to deal with any backlash in a consistent fashion. In my experience, this method is amazingly accurate and easy to execute.

    Here's the procedure I use to acquire (and verify) my target:

    Acquire Guide Star

    Acquiring a Guide Star can be either simple and quick, or excruciatingly painful. The pain factor is inversely proportional to how well the Guide Scope's finder is aligned with the center of the webcam's field in the Guide Scope, and how well the Guide Scope is already focused. If it is very well aligned, then putting the Guide Star in the finder's cross hairs will land the Guide Star on the chip every time. Otherwise, prepare to be frustrated. Here's my sequence for acquiring a Guide Star, picking up where we left off above:

    At this point it's a good idea to make sure you didn't jostle the main scope too badly while adjusting the rings of the Guide Scope. To do so, enable guiding in GuideDog and take a guided exposure in DSLRfocus — still in Focus Mode — of 30sec at ISO 1600 to verify that the target is still centered as desired. If not, disable guiding, slew the mount in RA/DEC to recenter the target, and adjust the guide rings as necessary to recenter the Guide Star in GuideDog's window. You may need to iterate this procedure a number of times to get the imaging scope and the guide scope pointed exactly correctly. Once you're satisfied, it's a good idea at this point to re-verify that the webcam is properly aligned N-S/E-W.

    Take a Test Shot

    Consider this step the equivalent of a "dress rehearsal". This is the final verification that you've nailed the framing, focus, and exposure settings (ISO and exposure time). I've rescued many nights that would otherwise have been wasted by taking a test shot, verifying all of the above on my PC, and making important adjustments when problems turned up. I highly recommend doing this. Here's my procedure:

    Acquire Lights

    If the last step is the dress rehearsal, then this step is the main performance. I like to dither between exposures in order to maximize the signal-to-noise ratio of the stacked result. By dither, I mean slew the mount ever-so-slightly in RA and/or DEC in a random direction between each exposure, in order to move the target around on the imaging chip. This serves to mitigate the problem caused by uneven sensitivity of the individual pixels on the chip, as well as the effect of hot pixels. It will allow you to stretch the final processed image a bit more than you otherwise would, without revealing ugly-looking, checkerboard-like "pattern noise" in the background, and therefore allow you to reveal more detail in your target. I highly recommend it, but I acknowledge that it's labor intensive and a royal pain in the you-know-what. Controlling your laptop remotely with something like pcAnywhere can ease the pain level of dithering.

    Some more comments on dithering. You must completely disable guiding while dithering; otherwise, GuideDog will return the Guide Star to exactly where it was previously, thereby negating the dithering action. To completely disable guiding, be sure to click (uncheck) the Guide button and the Lock button before slewing the mount. Then reenable guiding by clicking the Lock button, selecting the Guide Star, and clicking the Guide button again.

    Here's my procedure for capturing Lights:

    A few more notes on dithering. The "pattern noise" on these consumer DSLRs is vertical and horizontal in nature, in sort of a fine-grained checkerboard pattern. So, a really bad dithering pattern would be to slew the mount in a linear fashion, either in RA or DEC. A much better dithering pattern is an "outward spiral" which, when you start to get too far away from the center, you can spiral back inward. Also note that a smidgeon of field rotation (from polar misalignment) can actually be a good thing in this regard, since it will further serve to randomize these vertical and horizontal patterns, once the frames are registered (aligned and derotated) in processing.
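    If you ever script your dithering, the "outward spiral" pattern is easy to generate. Here's a sketch (step size and units are arbitrary; the point is simply that consecutive offsets never march along a single RA or DEC line):

```python
def spiral_dither(n, step=1.0):
    """Yield n (d_ra, d_dec) offsets tracing a square outward spiral:
    1 step E, 1 N, 2 W, 2 S, 3 E, 3 N, ... so consecutive frames never
    line up along one RA or DEC axis. Units are whatever your mount's
    smallest comfortable nudge is (an assumption for illustration)."""
    offsets = []
    x = y = 0.0
    dx, dy = step, 0.0
    run = 1
    while len(offsets) < n:
        for _ in range(2):            # two legs per run length
            for _ in range(run):
                x += dx
                y += dy
                offsets.append((x, y))
                if len(offsets) == n:
                    return offsets
            dx, dy = -dy, dx          # turn 90 degrees
        run += 1
    return offsets
```

    Spiraling back inward when you stray too far from center is left as an easy variation (reverse the offsets).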

    Acquire Darks

    Now that you've acquired your Lights, it's time to (immediately) begin capturing your Darks. I like to collect at least 9 Darks, perhaps more if the Light frames had a relatively short exposure time. In any case, collect an odd number of Darks since you will more than likely be median-combining them, and the numerical median operator just likes to have an odd number of samples.
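    Median-combining is exactly what it sounds like: the per-pixel median across the stack. With an odd number of frames the median is an actual observed sample, and a single outlier can't drag it around. A minimal sketch in Python/NumPy (toy values, for illustration):

```python
import numpy as np

def master_dark(darks):
    """Per-pixel median across a stack of dark frames.
    darks: list of same-shaped 2-D arrays. With an odd number of
    frames, each output pixel is an actual observed sample, and a
    single outlier (cosmic ray hit, etc.) cannot pull it off value."""
    return np.median(np.stack(darks), axis=0)

# Three toy 'darks'; one has a wild outlier that the median rejects
d1 = np.array([[10., 10.], [10., 10.]])
d2 = np.array([[11., 11.], [11., 11.]])
d3 = np.array([[12., 12.], [12., 999.]])  # outlier at (1, 1)
md = master_dark([d1, d2, d3])
# md is 11 everywhere -- the 999 outlier leaves no trace
```

    A mean-combine of the same stack would have been contaminated by the 999; that robustness is the whole argument for the median here.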

    I let DSLRfocus time the Dark frames just like with the Lights. In fact, I like to go from capturing Lights to capturing Darks "without skipping a beat" in DSLRfocus. By this I mean that if I've got 1min of inter-frame "down time" between my Lights, I like to pause for exactly that same 1min between the last Light and the first Dark, and use that same 1min spacing between each Dark frame. This procedure yields an almost identical match between the Lights and the Darks in regards to the (lack of) cool-down time of the sensor between frames.

    Here's my procedure. At the instant the last Light frame completes:

    While the Darks are being captured, I either: begin processing the Flat frames; begin putting the rest of my equipment away for the night; or begin doing visual astronomy of some of the "eye candy" that's well placed at the current time.

    Note: If you haven't yet collected Flat Darks, now is the time to do so. See Acquire Flat Darks for the details.

    Congratulations, you've finished your acquisition! Now it's time to process the result. Ok, actually, tomorrow morning is time to process your result. <grin>


    Processing

    Now that you've acquired your Flat Lights, Lights, Darks, and Flat Darks, you can begin processing your frames to bring out as much detail as possible. Doing so requires some fairly sophisticated software. My favorite application by far for this task is the freeware package named IRIS by Christian Buil. Others prefer ImagesPlus by Mike Unsold, which, as of this writing, was being sold for roughly US $180. But, again, this is my cookbook, and I use IRIS, so this processing sequence will be explained in terms of IRIS. The outline is as follows:

    1. IRIS Setup
    2. "Visualization"
    3. Create Master Flat
    4. Create Master Dark
    5. Calibrate Lights
    6. Convert CFA to RGB
    7. Register
    8. Crop
    9. Normalize
    10. Stack
    11. Remove Gradient
    12. White Balance
    13. Stretch
    14. Touch up in Photoshop
    15. Optional Optimizations
    16. Archive Your Results

    What follows is not meant as an introductory tutorial on IRIS. For that, see Christian's various tutorials linked from the IRIS home page. In particular, be sure to read the Illustrated tutorial on DSLR processing, and the more in-depth Digital camera image processing tutorial.

    IRIS Setup

    For now, let's make sure IRIS is configured correctly:

    "Visualization"

    Before diving in, and by popular demand of my devoted readers, let's discuss what is simultaneously the most confusing aspect of IRIS and one of its most powerful features: namely, the concept of Visualizing an image in IRIS.

    The beauty of IRIS is that it allows you to see — i.e., to visualize — an image, in a variety of ways, without changing the data (i.e., the brightness values) of the image. Think about this for a minute, since it is contrary to the behavior of most image-processing programs. In Photoshop, GIMP, Paint, and many other such programs, to make an image, say, brighter, you have to bring up Levels, Curves, or the equivalent, and actually change the image data. However, in IRIS one can arbitrarily set the Black Point (the point below which all image data will be rendered on the screen as black) and the White Point (the point above which all image data will be rendered on the screen as white), thereby viewing or visualizing the values between these Black and White points on the screen.

In essence, therefore, one is "zooming in" on the interesting range of brightness values in the image in order to "see" the image in some convenient fashion. This is particularly valuable for "linear" image data captured by DSLR cameras in RAW mode, which would be very difficult to "visualize" without first applying an aggressive contrast stretch. What's cool about IRIS is its ability to visualize such data without applying that contrast stretch. The White Point and Black Point in IRIS are controlled by the "visualization sliders" in the Threshold window. They can also be set to a particular pair of values with the visu console command.

    This "visualization" concept is probably still confusing to you. Don't worry if that's the case. With a little bit of experience and use it will become second nature. The important thing to take away at this point is the following, which I like to think of as the Fundamental Theorem of Visualization in IRIS:

    If an image in IRIS doesn't look like you expect it to look, there's an extremely good chance that the Visualization Thresholds just need to be adjusted until it does look like you expect it to look.

    Many times, but not always, a "reasonable" visualization of an image — particularly, a linear (unstretched) image — can be achieved by clicking the Auto button in the Threshold window. If that doesn't work, try moving the top slider to the right, and/or the lower slider to the left, after clicking the Auto button.
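For the programmatically inclined, the Black Point / White Point mapping can be sketched in a few lines of NumPy. This is an illustration of the concept only, not IRIS's actual rendering code:

```python
import numpy as np

def visualize(data, black, white):
    """Map raw image values to 8-bit display values without altering
    the underlying data: everything at or below the Black Point
    renders as 0 (black), everything at or above the White Point
    renders as 255 (white), and values in between scale linearly."""
    clipped = np.clip(data, black, white)
    return ((clipped - black) / (white - black) * 255).astype(np.uint8)

# A faint, linear image: most values sit just above the background.
raw = np.array([100, 120, 150, 4000])
# "Zooming in" on the 100-200 range reveals the faint detail;
# the bright star at 4000 simply renders as pure white.
print(visualize(raw, black=100, white=200))
```

Note that the raw array itself is never modified; only the rendered copy changes, which is exactly the point of visualization in IRIS.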

    Now that we've configured IRIS and we understand the concept of visualization, we're ready to get started …

    Create Master Flat

1. Convert RAW Flat Lights to PIC (CFA). Select Digital Photo » Decode RAW files... which, somewhat disconcertingly to first-time IRIS users, will push IRIS to the background of the Windows interface; i.e., behind all of the other windows open on the screen. It does this so you can pull up Windows Explorer and navigate to the directory where your RAWs are stored. Do so, select all of the Flat Lights (the .CR2 files in the case of the 350D; .NEF, .PEF, etc., for other camera types), and drag them into the IRIS Decode RAW files dialog. Give the sequence a name (I use fl for flat lights), and then hit the ->CFA... button. IRIS will convert the selected RAW files to its native PIC file format, and each image will be a grayscale CFA image.
       
    2. Convert RAW Flat Darks to PIC (CFA). With that window still open, hit Erase list and then drag all of your Flat Darks (again, the .CR2 files for the 350D; .NEF etc. for other camera types) into the main portion of the window. Give this sequence a different name than the one you used for Flat Lights (I like fd for flat darks), and then hit the ->CFA... button. IRIS will convert the selected RAW files to PIC. Click the Done button when finished.
       
    3. Create the Flat Master Dark. My current process involves median-combining the individual Flat Darks to make a Flat Master Dark. Here's the simple way to accomplish this in the command window:
        >smedian fd N
        >save flat-master-dark

      where N is the number of Flat Dark frames (typically 19 in my case).
       
4. Identify hot pixels. Use the find_hot command on the flat-master-dark to identify hot pixels. The trick is to choose an appropriate threshold value above which IRIS will deem a pixel to be a "hot pixel". One method that works for me is to set that threshold at "Mean + (16 × Sigma)". However, I know for sure that this does not work for a friend's 300D, as it produces far too many hot pixels. In the case of Flat Darks, there should be very few hot pixels that reveal themselves at such short exposures. Try using different thresholds with the find_hot command until you get somewhere in the neighborhood of 10 to 20 hot pixels. The extremely useful stat command is used to get statistical data for the image in memory. (Note: Recent versions of IRIS place the output of such commands in the Output window, rather than directly into the Command window as shown in the example below.) Here's an example:
        >load flat-master-dark
        >stat
        Mean: 125.0         Median: 125
        Sigma: 2.1
        Maxi.: 274.0       Mini.: 114.0
        >find_hot flat-cosmetic 158.6
        Hot pixels number: 3

      where flat-cosmetic is the name of the file into which IRIS stores the list of hot pixels, and the supplied threshold (158.6) was computed as the Mean + (16 × Sigma); i.e., 125 + (16 × 2.1).
       
    5. Verify Proper Exposure of your Flat Lights. This is something you should have done at the time you collected your Flat Lights, or when testing your Light Box, etc., because now it's likely too late to reacquire them (you've probably removed the camera or otherwise messed with it from the time you collected your Lights). Anyway, you can use the following steps to verify your Flat Lights at the time you collect them. The simple way is to compute a set of statistics for the entire sequence of Flat Lights (fl if you've followed steps 1–4 above to the letter), but on a per-color basis since your camera has varying sensitivity to different colors, and because your Light Box or the twilight sky is not perfectly "white". To do so, execute the following commands:
        >cfa2pic fl flrgb N
        >stat3 flrgb N
where N is the number of Flat Light frames (typically 19 in my case). The stat3 command will automatically run the equivalent of the stat command on each frame in the sequence, storing its output in a tab-separated text file called stats.lst in your IRIS working directory. This file can be inspected with any text editor (Wordpad, Notepad, Emacs). The columns are (left to right): Color/Image Number, Mean, Max, Min, Sigma, and Median. Note that there are three rows for each file: one each for red, green, and blue, respectively. Ideally you want the Median value for each color to be somewhere in the neighborhood of 2048. If you're using an unmodified DSLR, or you're using sky flats, or both, then it's likely that the blue and green median values are significantly larger than the red median value. In such a case, you'll have to pick a good "compromise" exposure that overexposes blue and green slightly (e.g., a median of 2500 or so) and that underexposes red slightly (e.g., a median of 1000 or so). In all cases, you must make sure that the pixels in the center of your Flat Lights — i.e., the brightest portion of your Flat Lights — are not clipped (saturated). To do so, load one or more of your color Flat Lights (e.g., >load flrgb1), adjust the visualization thresholds until you can clearly see the brightest point of the image, and hover the mouse around some of the pixels therein. In the lower-right portion of its main window, IRIS will report the red, green, and blue intensities of the pixel directly underneath the mouse. Make sure the brightest portion of the image comes nowhere near the 12-bit maximum value of 4095. In fact, try to keep the maximum value under 3000 or so.
       
    6. Calibrate the Flat Lights against the Flat Master Dark. The easiest way to do this is to use the Preprocessing... item in the Digital Photo menu. However, this function is really designed to calibrate your "actual" Lights against a Master Dark, Master Flat, Master Offset, etc., so we need to trick it by putting in a few dummy values. First, we need to create a "dummy flat", because the last thing we want to do is divide our Flat Lights by an actual Flat field. We also need a "dummy offset" since we're not using Offsets in this processing flow. The easiest way to create these "dummy" files is to load any of our existing files (which have the desired width and height) and then "fill" the image with a constant value. Here's how:
        >load fd1
        >fill 0
        >save dummy-offset
        >fill 1
        >save dummy-flat

      Now bring up the Digital Photo » Preprocessing... menu, and enter the following values: Input generic name = fl, Offset = dummy-offset, Dark = flat-master-dark (Optimize = not checked), Flat-field = dummy-flat, Cosmetic file = flat-cosmetic, Output generic name = fld (flat light with dark applied), Number = 19 (i.e., the number of Flat Lights in your sequence). IRIS will then subtract the flat-master-dark from each selected Flat Light and also "fix" the hot pixels.
       
    7. Create the Master Flat. Select Digital Photo » Make a flat-field... and fill in the fields as follows: Generic name = fld, Offset image = dummy-offset, Normalization value = 20000, and Number = 19 (i.e., the number of Flat Lights in your sequence). IRIS will subtract the (dummy) offset from the calibrated Flat Lights, normalize them so they're all the same intensity (brightness), and then median-combine them. The result is in memory but not yet on the disk, so be sure to save it:
        >save master-flat
       
    8. (Optional) Disk cleanup. At this point, if you need to reclaim space on the disk, you can delete all the files in your IRIS working directory except for your RAW camera files, master-flat.pic and dummy-offset.pic.
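The per-channel exposure check from step 5 can be sketched as follows. This is an illustrative NumPy helper of my own devising (the function name and report layout are made up; stat3 is the real IRIS tool), using the 2048 target and 3000 ceiling rules of thumb given above:

```python
import numpy as np

def flat_exposure_report(rgb, target=2048, ceiling=3000):
    """Check each color channel of a flat (shape H x W x 3, 12-bit
    values): median should sit near the target, and the brightest
    pixels must stay safely below the 4095 clipping point."""
    report = {}
    for i, name in enumerate(("R", "G", "B")):
        chan = rgb[..., i]
        report[name] = {
            "median": float(np.median(chan)),
            "max": float(chan.max()),
            "clip_risk": bool(chan.max() > ceiling),
        }
    return report

# Toy sky flat: blue and green run hotter than red, as is typical
# with an unmodified DSLR.
flat = np.dstack([np.full((8, 8), 1000.0),
                  np.full((8, 8), 2500.0),
                  np.full((8, 8), 2600.0)])
print(flat_exposure_report(flat))
```

In practice I just read the stats.lst file, but the logic is the same.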

    Create Master Dark

    1. Convert RAW Darks to PIC (CFA). Use the same procedure as described in the Create Master Flat section. Call this sequence d for dark.
       
    2. Create the Master Dark. My current process involves median-combining the individual Darks to make a Master Dark. Here's the simple way to accomplish this in the command window:
        >smedian d N
        >save master-dark

      where N is the number of Dark frames (typically 9 in my case).
       
    3. Identify hot pixels. Use the same procedure as described in the Create Master Flat section to identify hot pixels in the Master Dark. Here we're aiming for hot pixels on the order of several hundred. My algorithm of "Mean + (16 × Sigma)" works for my 350D here too. You'll have to experiment with your camera. Here's an example:
        >load master-dark
        >stat
        Mean: 120.3         Median: 119
        Sigma: 9.5
        Maxi.: 4008.0       Mini.: 91.0
        >find_hot cosmetic 272.3
        Hot pixels number: 82

where cosmetic is the name of the file into which IRIS stores the list of hot pixels, and the threshold (272.3) was computed as the Mean + (16 × Sigma); i.e., 120.3 + (16 × 9.5).
       
    4. (Optional) Disk cleanup. At this point, if you need to reclaim space on the disk, you can delete all the files in your IRIS working directory except for your RAW camera files, master-flat.pic, dummy-offset.pic, master-dark.pic, and cosmetic.lst.
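The "Mean + (16 × Sigma)" hot-pixel rule used in step 3 above (and in the Create Master Flat section) can be sketched as follows. This is illustrative NumPy, not IRIS code, and as noted the kappa = 16 multiplier is camera-dependent:

```python
import numpy as np

def hot_pixel_threshold(image, kappa=16.0):
    """The 'Mean + (16 x Sigma)' rule of thumb: any pixel brighter
    than this threshold is flagged as hot. kappa = 16 works on my
    350D's darks; other cameras may need a different value."""
    return image.mean() + kappa * image.std()

def find_hot(image, threshold):
    """Return the (row, col) coordinates of pixels above the
    threshold, analogous to what IRIS writes to the cosmetic file."""
    rows, cols = np.where(image > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy master dark: uniform background with one hot pixel.
dark = np.full((100, 100), 125.0)
dark[50, 50] = 4000.0
t = hot_pixel_threshold(dark)
print(find_hot(dark, t))  # [(50, 50)]
```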

    Calibrate Lights

    1. Convert RAW Lights to PIC (CFA). Use the same procedure as described in the Create Master Flat section. Call this sequence l for light.
       
    2. Calibrate Lights against Master Flat, Master Dark, and Hot Pixel Map. Bring up the Digital Photo » Preprocessing... menu, and enter the following values: Input generic name = l (that's l as in light, not the number one), Offset = dummy-offset, Dark = master-dark (Optimize = not checked), Flat-field = master-flat, Cosmetic file = cosmetic, Output generic name = ldf (light with dark and flat applied), and Number = N, where N is the number of Light frames you have. For each Light frame, IRIS will subtract the master-dark, divide by the master-flat, fix the hot pixels, and save the result in a new file.

    Convert CFA to RGB

    Enter the following command:

      >cfa2pic ldf ldfrgb N

    where N is the number of Light frames. IRIS will interpolate the missing color data to convert the calibrated Lights — still in CFA format — to full-color (RGB) images.

    Register

As of IRIS v4.34, use the following automated procedure to shift (translate), rotate, and, if necessary, scale the calibrated Light frames in order to line them up for stacking:

      >setspline 1
      >coregister2 ldfrgb ldfrgbreg N

    where N is the number of Light frames. This will take a while, so sit back, relax, and go grab a snack. If IRIS does not report any errors, then you're done and you can skip to the next step (Crop).

    On the other hand, if that fails for any reason, try instead the "Three matching zone" method:

      >setspline 1
      >coregister4 ldfrgb ldfrgbreg 512 N

    If that fails for any reason, you might need to tune IRIS' star-matching algorithm(s) by adjusting the number and brightness of stars it uses for pattern-matching. Here is the relevant command:

      >setfindstar sigma

where higher values of sigma tell IRIS to use only the brightest (but unsaturated) stars, and lower values of sigma tell IRIS to use fainter stars. The default value of sigma is 7.0. For shots with tons of stars, such as well-exposed shots of targets in/near the Milky Way, try increasing sigma to something like 8.0 or 10.0. For shots with fewer stars, try decreasing sigma to something like 5.0. Then try moving sigma in the opposite direction if that didn't work. In any case, after running the setfindstar command, rerun the coregister2 or coregister4 command as specified above. If IRIS completes these commands successfully, then you can move on to the next step (Crop).

If IRIS continues to have problems automatically registering your sequence, then, as a last resort, you can use the much simpler "one star" registration method. Unfortunately, though, this method will not automatically "derotate" your calibrated Lights to undo the effect of field rotation (which in turn is caused by polar-alignment error).

    Note: a better "last resort" than the "one star" method described above is to use the rregister command, which can handle translation (shift) and field rotation. See Christian's web site for more details. In particular, see Compensate fied rotation [sic] in IRIS Tutorial and Command RREGISTER in the v3.54 Release Notes.

    If none of the above works, post a question to the Iris_software Yahoo Group and we'll try to help you.

    Crop

Normally at this point we'd be ready to stack our registered images. But my preferred method of stacking — so-called Kappa-Sigma stacking — requires each image in the sequence to be Normalized such that the background levels are equal. And in order to Normalize each image in the sequence, we first need to crop out the (here comes a technical term <grin>) "crud" around the border of each image. That "crud" consists of the pixel values that IRIS had to invent (because they were outside of the original image boundaries) when it shifted, rotated, and scaled each image in the registration step. Our goal at this point of the process is to crop the entire sequence to the intersection of all of the registered images. Here's how:

    1. Do a quick-n-dirty simple summation of the registered images:
        >add_norm ldfrgbreg N
      where N is the number of images in the sequence.
       
2. Neutralize the background as an aid in visualizing this summed result. To do so, with the mouse draw a rectangle around a region of the image near the center that has mostly sky background (i.e., avoid galaxies, nebulae, bright stars, etc.). Then execute the following command:
        >black
       
    3. Visualize the image by hitting the Auto button in the Threshold window. Now use the main window's scroll bars to move to the lower-left corner of the image. It should be very clear where the "good" image data ends and the "crud" begins. The "good" image data will be relatively bright; i.e., similar in brightness to the vast majority of the image. The "crud" will be noticeably dimmer, perhaps even totally black. With the mouse, click comfortably within the "good" region (i.e., a handful of pixels up and to the right of where the "crud" ends). IRIS very graciously reports in the Output window the coordinates of the location where you clicked the mouse. Now do the same thing in the upper-right corner of the image. Click comfortably inside the "good" region (i.e., a handful of pixels down and to the left of where the "crud" ends). Once again, IRIS reports the coordinates in the Output window. Now that you know the lower-left and upper-right coordinates to which to crop the image sequence, enter the following command:
        >window2 ldfrgbreg ldfrgbregcrop x1 y1 x2 y2 N
      where N is the number of images in the sequence; {x1, y1} is the coordinate where you clicked in the lower-left of the image (see the Output window), and {x2, y2} is the coordinate where you clicked in the upper-right of the image (see the Output window for this too).

    Note that if there is severe field (de-)rotation in your image sequence, you may have to click further inside the "good" region of the summed image, in order to crop out the "crud" in the upper-left and lower-right of the image. You may need to experiment with this to get just the right amount of cropping.

    Normalize

    We now normalize the background level of each image; that is, automatically set the median level of each image to zero. This has two positive effects: first, it increases the dynamic range available to the (stacked) result, and second, it allows Kappa-Sigma stacking to work properly. Here's how:

      >noffset2 ldfrgbregcrop ldfrgbregcropnorm 0 N

    where N is the number of images in the sequence.
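Conceptually, the per-frame operation is just a median subtraction, something like this NumPy sketch (an illustration of the idea, not IRIS's code):

```python
import numpy as np

def normalize_background(image):
    """Shift the image so its median (the sky background) sits at
    zero, mirroring what noffset2 does with an offset target of 0."""
    return image - np.median(image)

frame = np.array([510.0, 500.0, 505.0, 2000.0])  # mostly sky, one star
out = normalize_background(frame)
print(np.median(out))  # 0.0
```

With every frame's background pinned to the same level, outliers at a given pixel really are outliers, which is what lets Kappa-Sigma stacking do its job.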

    Stack

There are multiple ways to "stack" (i.e., add) the individual frames in a sequence. One way is a straight summation, which has the advantage of producing a result with a very high signal-to-noise ratio. The problem with straight summation is that things like cosmic rays (which look like very small, isolated streaks), airplane and satellite trails, and other "aberrant" data show up in the stacked result. At the other extreme, one can compute the Median of all images in the sequence. While the Median operator is superb at removing such "aberrant" data, it has the disadvantage of producing a final result with a much lower signal-to-noise ratio than summation. Wouldn't it be great if there were a "hybrid" algorithm that combined the best properties of straight summation with the best properties of median stacking!? Well, there is.

That algorithm is called Kappa-Sigma stacking, and briefly it works as follows. Consider a single pixel location (x,y) in the image. The algorithm examines each image in the sequence for its intensity value at that location, then computes the Mean and Sigma of those values. Any individual value that falls more than some constant multiple of Sigma away from the Mean is deemed to be "aberrant", and is excluded from the stack. The algorithm then computes the sum of the remaining values, making sure to scale the result appropriately based on how many values were excluded. That constant multiple of Sigma is called Kappa.

Restating, then, the Kappa-Sigma algorithm excludes from the stack any values which lie more than Kappa × Sigma units away from the Mean. Kappa is one of the parameters that must be supplied to the algorithm. Another parameter is the number of iterations of the algorithm, which I now describe in further detail. In some cases the algorithm is unable to discard all of the truly "aberrant" data in the first round, though it may correctly discard some of it. At that point, another round of the algorithm can be run to reject data that is deemed "aberrant" with respect to the new Mean and Sigma, which are (re-)computed on the remaining data. Each such round is called an iteration. In my experience, one iteration of this algorithm is sufficient for automatically eliminating cosmic ray hits; airplane, satellite, and meteor trails; and even "hot pixels" that sneak through the calibration phase, especially if the Lights were dithered during acquisition. (Have I made the case yet for dithering!? <grin>)
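The algorithm just described can be sketched in NumPy as follows. This is an illustration of the idea, not IRIS's implementation; note in particular that with very few frames a single outlier inflates Sigma so much that nothing gets rejected, which is one more reason to collect plenty of Lights:

```python
import numpy as np

def kappa_sigma_stack(frames, kappa=3.0, iterations=1):
    """Per-pixel Kappa-Sigma stack: reject values more than
    kappa * sigma from the mean, then sum the survivors and
    rescale as if all frames had contributed."""
    stack = np.stack(frames).astype(float)      # shape (N, H, W)
    keep = np.ones(stack.shape, dtype=bool)
    for _ in range(iterations):
        masked = np.where(keep, stack, np.nan)
        mu = np.nanmean(masked, axis=0)
        sigma = np.nanstd(masked, axis=0)
        keep &= np.abs(stack - mu) <= kappa * sigma
    n_kept = keep.sum(axis=0)
    total = np.where(keep, stack, 0.0).sum(axis=0)
    return total * len(frames) / np.maximum(n_kept, 1)

# Twelve frames of a uniform field, one hit by a "cosmic ray".
frames = [np.full((2, 2), 100.0) for _ in range(12)]
frames[0][0, 0] = 5000.0
result = kappa_sigma_stack(frames, kappa=3.0, iterations=1)
# The hit is rejected; every pixel stacks to 12 x 100 = 1200.
```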

    While the above theory is a bit involved, the actual processing steps in IRIS are extremely simple. Here's how to do a Kappa-Sigma stack. We save the result in a file named stack:

      >composit ldfrgbregcropnorm Kappa Iterations Normalize N
      >save stack

    where Kappa and Iterations are as described above, Normalize is a flag that tells IRIS to prevent clipping (numerical overflow) of the stacked result, and N is the number of images in the sequence. I almost always use a Kappa of  3, an Iterations of 1, and a Normalize flag of 1. If you find that airplane trails etc. are sneaking into the stacked result, consider decreasing Kappa to 2 or increasing the Iterations to 2 or more. In almost all cases you'll want to prevent numerical overflow (clipping) of the stacked result by setting the Normalize flag to 1. (However, you may instead decide that it's ok to let the bright stars saturate, leaving more dynamic range available to the faintest details in the image. In such a case, the Normalize flag can be set to 0.)

    To summarize, then, my typical usage of this command looks as follows:

      >composit ldfrgbregcropnorm 3 1 1 N
      >save stack

    where N is the number of images in the sequence.

    Let's take a quick moment to reflect on what we've done so far. To the best of our ability, we've mitigated the effects of noise and limitations in our optical system by calibrating the Lights against a Master Dark and a Master Flat. We then converted the calibrated Lights, still in the form of grayscale CFA images, into full color (RGB) images. Then we registered (aligned) the images, removed the crud around the border, normalized them, and stacked them. To this point we've been working strictly in the realm of "science" to get the cleanest possible stacked result from our Lights. Now it's time to enter the realm of "art", with a little bit of residual "science", to bring about an aesthetically pleasing, final, result. This "art" is explained in the next several steps of the process.

    Remove Gradient

    If there's any light pollution at your site, especially if that light pollution is uneven throughout the sky, it's likely that your image has a rather ugly-looking background gradient at this point. Another possible cause of that ugly-looking background is a poor match between the Flats and the Lights. Load your stacked result into IRIS (>load stack) and adjust the sliders in the Threshold window until you can clearly see the background. Hitting Auto in the Threshold window should work. It's also helpful to "zoom out" at this point so you can see the entire image. If you're lucky and the background of your image looks uniform, you can skip this section entirely. Otherwise, read on.

    IRIS has a powerful mechanism for removing background gradients, but here I'll describe only the simple mechanism, because the advanced mechanism would require many pages of text and examples, and Christian has already provided such at his web site. Even for the simple mechanism you'll need to experiment a bit to get the desired result. Repeat the following commands until the background looks as uniform as possible, adjusting the visualization sliders and, perhaps, zooming out at each iteration so the whole image fits on the screen (use the zoom out button on the toolbar):

      >load stack
      >setsubsky sigma poly_order
      >subsky

I usually try sigma = 4 and poly_order = 1 at first, hoping that there's a simple "linear" gradient in the background. Typically, the background gradient is more complicated, and getting rid of it therefore requires a higher-order polynomial. In such a case, try a poly_order of 3, 4, or higher. Also try adjusting the sigma parameter up and down. Sorry, but you'll just have to play around with this until you get the desired result. When satisfied, be sure to save the image in memory:

      >save stack-subsky
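To demystify what subsky is doing, here is a rough NumPy sketch of a poly_order = 1 (planar) background fit and subtraction. IRIS's real routine is more sophisticated (the sigma parameter controls how stars are rejected from the fit); this sketch naively fits every pixel:

```python
import numpy as np

def fit_linear_gradient(image):
    """Least-squares fit of a first-order background a*x + b*y + c
    over the whole image, then subtract it -- loosely analogous to
    subsky with poly_order = 1 (minus the star rejection)."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    background = (A @ coeffs).reshape(h, w)
    return image - background

# A pure linear gradient should subtract away to (nearly) zero.
grad = np.fromfunction(lambda y, x: 2.0 * x + 3.0 * y + 50.0, (16, 16))
residual = fit_linear_gradient(grad)
print(float(np.abs(residual).max()) < 1e-6)  # True
```

A higher poly_order simply adds higher-degree terms (x², xy, y², ...) to the columns of the fit.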

    White Balance

    Adjust the visualization sliders to get a decent view of your stacked, gradient-removed result. Find as large an area as possible, with as few bright stars and as few "features" (galaxies, nebulae, etc.) in it as possible, and draw a rectangle around that area with the mouse. Then execute the following command:

      >black
      >rgbbalance R G B
      >save stack-subsky-wb

    where R, G, and B are weights used to compensate for the varying sensitivity of the camera to red, green, and blue light, respectively. For unmodified Canon DSLRs, I use Christian's suggested weights of R = 1.96, G = 1.00, and B = 1.23, although my sense is that this gives a tad too much red and a tad too little blue. But some of this is personal preference, and you can always salt-n-pepper to taste in Photoshop at a later stage. For modified Canon DSLRs (those with their "IR block" filter removed), RGB weights of {1.38, 1.00, 1.23}, respectively, are more appropriate.

    Note that these R, G, and B weights are scaling factors that will be multiplied by every red, green, and blue pixel value, respectively, when executing the rgbbalance command. When any of these weights are greater than 1.0, as indeed are many of the suggested weights above, there is always the potential of clipping (saturating) some details in the image if those details are already very near clipping. It turns out that for color balancing, only the relative weights are important. Therefore, you may wish to normalize these weights by dividing all of the individual weights by the largest of them, to ensure that no single weight is greater than 1.0 and, in turn, to ensure that no clipping occurs. For example, we could divide {1.38,1.00,1.23} by 1.38, the largest of the three weights in this set, and use the resulting normalized weights of {1.00,0.72,0.89} if we wanted to prevent any clipping.
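The weight normalization just described is simple enough to show directly (plain Python, rounding to two decimals as in the example above):

```python
def normalize_weights(weights):
    """Divide each channel weight by the largest so no weight
    exceeds 1.0, preserving the ratios (which are all that matter
    for color balance) while guaranteeing no clipping."""
    top = max(weights)
    return tuple(round(w / top, 2) for w in weights)

# The modified-DSLR weights from above, normalized to prevent clipping.
print(normalize_weights((1.38, 1.00, 1.23)))  # (1.0, 0.72, 0.89)
```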

    Some additional comments on the black command are also in order. This command figures out the median {R,G,B} values within the selected region, and then subtracts those constant, median, values from the entire image, thereby making the median value be {0,0,0} within the selected rectangle. In that sense, the black command removes any “DC Offset” remaining in the image that didn't get removed by the noffset2 or subsky command. The source of this offset is “Sky Fog,” i.e., Light Pollution, which generally has a strong color cast to it, and hence the {R,G,B} values computed by the black command can and usually will be different for the various colors. The bottom line is that removing this “DC Offset” is absolutely essential in order to produce an accurate color balance.
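The effect of the black command can be sketched like so. This is my own NumPy illustration on an H × W × 3 image (the function and region arguments are made up; in IRIS you simply draw the rectangle with the mouse):

```python
import numpy as np

def black_offset(image, region):
    """Take the per-channel median of the selected rectangle
    (y1:y2, x1:x2) and subtract those three constants from the
    whole frame, so the selected background's median becomes
    {0, 0, 0} -- removing the sky fog's "DC Offset" per channel."""
    y1, y2, x1, x2 = region
    medians = np.median(image[y1:y2, x1:x2], axis=(0, 1))
    return image - medians

# Sky fog with a red cast: the offset removed differs per channel.
img = np.zeros((4, 4, 3)) + np.array([300.0, 120.0, 150.0])
out = black_offset(img, (0, 4, 0, 4))
print(out[0, 0])  # [0. 0. 0.]
```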

    Stretch

I absolutely love the Hyperbolic Arc Sine (asinh) stretching function in IRIS, as I find it provides a much more pleasing result than the Digital Development Process (DDP) available in most image-processing programs. Interestingly, a variant of the asinh stretch is used by JPL to process Hubble photos. Finding the right alpha parameter (the aggressiveness of the stretch), and the right intensity parameter (a post-stretching scaling factor to prevent clipping or to make the result brighter), is largely a matter of trial and error. So, I iterate among the following commands until I get the desired result:

      >load stack-subsky-wb
      >asinh alpha intensity
      >visu 32767 -5000

    I usually try alpha = 0.005 and intensity = 30 at first. For each value of alpha, you want to find the right intensity value to make sure the brightest feature that you don't want clipped will have an intensity value of 32767. As for the alpha parameter, changing it by tiny amounts can have a huge effect. So, if 0.005 is a pretty middle-of-the-road stretch, 0.010 is a very aggressive stretch, and 0.001 is a fairly benign stretch. Again, play with the values until you get a result that shows as much detail as possible, but without bringing up the background noise to unacceptable levels. You might also want to increase the low threshold in that visu command from -5000 to -4000 or even higher. Experiment, experiment, experiment! When satisfied, be sure to save the result:

      >save stack-subsky-wb-asinh
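To build some intuition for what the stretch does, here is an illustrative NumPy sketch. The formula below is a generic asinh stretch, not necessarily the exact one IRIS implements, but the behavior is the same: faint values pass through nearly linearly, bright values are compressed logarithmically, and intensity rescales the result afterwards:

```python
import numpy as np

def asinh_stretch(image, alpha, intensity):
    """Generic asinh stretch (illustrative; IRIS's exact formula may
    differ): alpha sets how aggressively highlights are compressed,
    intensity scales the stretched result back up."""
    return intensity * np.arcsinh(alpha * image)

values = np.array([100.0, 30000.0])      # faint detail vs. bright star
stretched = asinh_stretch(values, alpha=0.005, intensity=30)
# The 300:1 brightness ratio is compressed to roughly 12:1,
# lifting the faint stuff without blowing out the star.
ratio = stretched[1] / stretched[0]
```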

    Congrats! For the first time since you began this odyssey of image processing, you're probably looking at an image that actually looks something like the result you expected! What remains is some final touch-up in Photoshop.

    Touch up in Photoshop

    At this point IRIS has proven to be a useful and effective soldier. It's now time to export the data to Photoshop for final touchup. Unfortunately, IRIS uses "signed 16-bit integer math" and Photoshop uses "unsigned 16-bit integer math", and so one must jump through a couple of hoops in Photoshop to make the image "look right".

    The first step is to save the image in Photoshop format from within IRIS:

      >savepsd2 stack-subsky-wb-asinh

       Note: use savepsd2, not savepsd!!

Now open stack-subsky-wb-asinh.psd in Photoshop. It probably looks horrible, but don't sweat it! The reason is that IRIS's notion of "black" and Photoshop's notion of "black" are numerically very different. Fixing this is a simple matter of bringing up the Levels command and setting the black point to around 110 or so. Of course, you might want to play around with the midtone and white point parameters at this point too. Depending on your Photoshop settings, sometimes the Auto levels command will take care of this automatically and give the desired result.

    Once you've adjusted the Levels in this way, the image should look very similar to how it looked in IRIS. In fact, it should look nearly identical. At this point I'll also rotate the canvas if the camera was "upside down" to make North=up (or North=left, depending on the camera orientation).

    Congratulations!!! You're probably now staring at a killer astrophoto. If it looks good, or even if it doesn't, shrink it, convert it to JPG, and send me a copy of it! Also, if you have feedback or questions about this document, please send those comments as well. My E-mail address is solospam at comcast dot net.

    Optional Optimizations

    Here are a few of my "tricks of the trade" to further improve upon the results that can be accomplished above:

    Archive Your Results

    Burn a CD or DVD now with all of your RAWs (.cr2), your stacked result (stack.pic), and your final, full-resolution, full-size processed result — both the IRIS file (.pic) and the Photoshop file (.psd). If there's space, include the Master Flat, Master Dark, and Cosmetic file as well.


    Questions and Answers

    My Goodness, you're anal. Do you really go through all of this for each image?

    Yes. Astrophotography is one of the most difficult flavors of photography there is, and there are a whole lot of things that can go wrong. So, I try to minimize the chances of blowing it, since even the non-anal version of setup, acquisition, and processing takes a ton of time. A lot of the steps I take are specifically there to correct a previous "blown night" of imaging. I think my results speak for themselves, especially when considering that I'm using a scope/mount which together cost US $950, and a camera that sells for about US $900.

    Can you explain this CFA business a little better?

    I can try. <grin> Every sensor location on a DSLR has a tiny color filter on it. Each such filter passes only red, green, or blue light. If you think about it, then, each sensor location only records a single color. The other two colors at each location have to be interpolated from the neighboring locations. The collection (i.e., array) of these filters is called a Color Filter Array (CFA for short). Here's an illustration of a CFA, from the upper-left corner of the sensor:

    R G R G R G ...
    G B G B G B ...
    R G R G R G ...
    G B G B G B ...
    : : : :

The above array is a fairly accurate representation of what gets recorded in the camera's RAW file format; namely, a single 12-bit intensity value at each sensor location. The RAW file has some (lossless) compression applied to it in order to make more such files fit on a flash card. When we convert a RAW to a CFA in IRIS, the lossless compression is, well, undone, resulting in a grayscale image — with one 12-bit number per sensor location. If you zoom in on a "CFA" image in IRIS, and adjust the thresholds properly, you'll be able to see the effects of the color filter array. Give it a try!

    But, alas, what we ultimately want for a color image is a red, a green, and a blue value at every location. Figuring out what the "missing" values should be — e.g., the blue and the green values at the "red only" location in the upper-left corner of the sensor shown above — requires some fairly sophisticated algorithms. One or more such algorithms are built into every digital camera, and are employed whenever a "color" image is created (e.g., JPG). IRIS has several algorithms too, which can be selected in the Camera settings dialog under RAW interpolation method. I prefer the Gradient method, as I believe that provides the best detail in the final image. Feel free to experiment with the Linear and Median methods, which are designed to trade off a small bit of detail (i.e., resolution) for lower noise.
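To make the interpolation step concrete, here's a minimal sketch (in Python with NumPy, which this document doesn't otherwise use) of the simple bilinear flavor of demosaicing — roughly the idea behind a "Linear" method. The function name and helper are my own; IRIS's Gradient method is considerably more sophisticated than this.

```python
import numpy as np

def demosaic_bilinear(cfa):
    """Fill in the two missing colors at each photosite by averaging
    the known samples of that color in the 3x3 neighborhood.

    `cfa` is a 2-D array of raw intensities in an RGGB Bayer layout
    (red in the upper-left corner, as in the illustration above).
    Returns an (H, W, 3) RGB array.
    """
    h, w = cfa.shape

    def box3(a):
        # Sum over each pixel's 3x3 neighborhood (zero-padded edges).
        p = np.pad(a, 1)
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    red = np.zeros((h, w), bool);  red[0::2, 0::2] = True
    blue = np.zeros((h, w), bool); blue[1::2, 1::2] = True
    green = ~(red | blue)

    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((red, green, blue)):
        samples = np.where(mask, cfa, 0.0)         # known values of this color
        counts = box3(mask.astype(float))          # how many known neighbors
        rgb[..., ch] = box3(samples) / np.maximum(counts, 1.0)
    return rgb
```

On a uniformly gray mosaic this returns the same gray in all three channels; real converters add edge-aware logic to avoid the color fringing that plain averaging produces at sharp edges.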

    Why do you align the 350D North/South or East/West?

    Aesthetically, because I like my pictures to be in the de facto standard orientation of North = Up. Also, this allows a simple and direct comparison of my images with a sky chart or the Digital Sky Survey, etc. And, more practically, because if the framing happens to be off when I take my test shot, I know immediately in which direction (RA or DEC) I need to slew the mount to fix the framing.

    Why do you Drift Align? Can't I just use this cool Polar Axis Finderscope I bought?

    Good luck. I find such tools to be a waste of money, because if I'm doing visual astronomy, the mount can handle a huge polar alignment error, and if I'm doing astrophotography, those tools don't provide nearly enough accuracy. I don't understand why so many people are terrified of Drift Aligning. Practice it on a cruddy night with cruddy transparency, when you can only see the brightest of the bright stars and can't do anything else anyway. It becomes second nature after a little bit of practice.

    Why do you use IRIS? I like <some image processing program> better.

    Because IRIS rules. I love it and it produces tremendous results. Yes, it takes a while to walk IRIS' learning curve, but once you do it makes perfect sense and, well, it is astonishingly powerful. The price is hard to beat too (free!).

    Why don't you collect Offsets and use them in processing?

    Actually, I usually don't collect Offsets or use them, but on a rare occasion I do. As explained in the Background section, Dark frames captured by the camera actually contain the Offset, and if you median-combine the Darks (that haven't had the Offset subtracted), then you're really ending up with a "Master Dark plus Offset", and it's that very beast which we're subtracting from the Lights in the Calibrate Lights step. Thus, in effect we are subtracting an Offset from the Lights, just not an explicit set of collected Offsets.

    However, sometimes I notice that the calibrated Lights "look too noisy" to me, even after dark subtraction, etc. The cause of this is usually a large temperature difference between the time when (most of) the Lights were captured and the time when the Darks were captured, which can be several hours in time and 10 to 20 degrees Fahrenheit in temperature. In such a case, the Darks aren't as well matched to the Lights as I'd like. In that case, I'll collect a set of Offsets (usually 19 of them at Tv = 1/4000, ISO = 100); create a Master Offset (see Digital Photo » Make an offset...); and then use that Master Offset (instead of the dummy, zero-filled file) in the creation of the Master Dark (for which I use Digital Photo » Make a dark... instead of the smedian command). Then, in the Calibration stage, I'll supply the actual Master Offset (instead of the dummy, zero-filled file) and also check the Optimize box. This asks IRIS to scale the Master Dark to an optimum intensity before subtracting it from each Light, in order to maximize the Signal to Noise ratio of the result. But this function of scaling the Master Dark before subtraction only works if the Master Offset has been subtracted from each Dark in creation of the Master Dark.
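The arithmetic behind that paragraph can be sketched in a few lines. This is just the standard calibration algebra, not IRIS's actual code, and every name here is my own invention; the point is that scaling the Master Dark by some factor k is only legitimate once the Offset has been removed, because the Offset does not scale with exposure time or temperature.

```python
import numpy as np

def make_masters(offsets, darks):
    """Median-combine the Offsets, then median-combine the
    offset-subtracted Darks, so the Master Dark holds thermal
    signal only (and can therefore be scaled)."""
    master_offset = np.median(offsets, axis=0)
    master_dark = np.median([d - master_offset for d in darks], axis=0)
    return master_offset, master_dark

def calibrate_light(light, master_offset, master_dark, master_flat, k=1.0):
    """(light - offset - k * dark) / normalized flat.
    k = 1.0 is a plain dark subtraction; an 'optimized' k rescales
    the thermal signal to better match the Light's temperature."""
    flat_norm = master_flat / np.median(master_flat)
    return (light - master_offset - k * master_dark) / flat_norm
```

With k = 1 this reduces to the ordinary Master-Dark subtraction described in the Calibrate Lights step; the Optimize box corresponds to letting IRIS pick k for you.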

    Why do you collect Flats every night? I heard you only need to do this once per optical configuration?

    You heard right. Maybe. If your optics are perfectly collimated; if that collimation never needs adjustment; if the illumination of the imaging chip is perfectly symmetric with respect to the center of the chip and/or you put the camera in the exact same orientation every time; if the number and location of dust spots on the sensor never change ... then yes, you only need to take one set of Flats for each such configuration. But, in my opinion, these are some extremely big "if's", and so I've decided to take Flats every time I set up. It's easy to do and doesn't take very long. Just do it.

    What GuideDog/camera/mount settings do you use for guiding your mount?

    With the standard caveats that "every mount is different" and "these settings might not work well for you," here are the settings I've found work well for my mount:

    Parameter                      Value     Units
    Autoguide Rates [1]            99.0      % of Sidereal
    Radius                         24.0      pixels
    RA Guide Rate                  15.0      "/s
    DEC Guide Rate                 15.0      "/s
    Maximum Error                  15.0      "
    Minimum Error [2]              1.5       "
    RA Backlash                    0.0       ms
    DEC Backlash                   0.0       ms
    Declination Corrections        yes       (checked)
    Use Autostar Pulseguide        no        (unchecked)
    Aggressiveness [3]             100.0     % of computed correction
    GuideScope Focal Length [4]    1026.0    mm

    Notes:
    [1] The Autoguide Rates (AZM RATE and ALT RATE) are set on the AS-GT mount's Hand Controller, in the Scope Setup menu. These rates must be consistent with the RA and DEC Guide Rates configured in GuideDog. (Note: 100% of Sidereal equals 15"/s.)
    [2] On nights of poor seeing, I increase the Minimum Error parameter to 2.0 or even 2.5 so that GuideDog isn't chasing the seeing, and is really only correcting for errors in the mount's tracking.
    [3] You'll have to experiment with the Aggressiveness parameter. I've found that somewhere between 75% and 125% works well for me.
    [4] You might wonder why a 400mm GuideScope (the ST80) fitted with a 2x Barlow yields an effective focal length of 1026mm rather than 800mm. That's because a Barlow's rated focal-length multiplier is only nominal; the actual factor depends on the exact spacing of the eyepiece (or, in this case, the webcam's CCD) from the Barlow's lens elements. In my setup, the actual factor ends up being 2.565x, hence the effective focal length of 1026mm. The easy way to measure the effective focal length is to shoot the same distant daytime target with the webcam through the GuideScope, with and without the Barlow. Then, in Photoshop or some other program, figure out by what factor you need to shrink the Barlowed image so that its features match the un-Barlowed image in size, and multiply that factor by the GuideScope's focal length.
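Note 4's measurement boils down to a single ratio. The pixel measurements below are made-up placeholders chosen to reproduce the factor from the text; only the 400mm focal length and the resulting 2.565x / 1026mm figures come from the note itself.

```python
guide_scope_fl_mm = 400.0        # ST80 native focal length

# Hypothetical measurements of the same daytime feature in both shots:
size_with_barlow_px = 513.0      # placeholder value
size_without_barlow_px = 200.0   # placeholder value

# Actual Barlow factor = ratio of image scales, not the rated 2x.
barlow_factor = size_with_barlow_px / size_without_barlow_px   # 2.565
effective_fl_mm = barlow_factor * guide_scope_fl_mm            # ~1026 mm
```

The same two numbers also tell you how far the rated 2x is from reality for your particular spacing.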

    As for the webcam, I adjust the ToUcam's settings to the mid-point on the first page of parameters (i.e., Brightness, Contrast, Saturation, etc.), and I use the slowest possible frame rate (5 fps). On the second page, I use the longest exposure time (1/20sec) and then adjust the Gain until the star is nice and bright but not clipped in the preview window. With very bright stars, I back off the exposure time to prevent clipping.

    I think your procedure is seriously flawed. Here's why ...

    Then, by all means, please send your comments to me! I'm always looking for a good tip or a better method than the one I'm currently using. My address is solospam at comcast dot net.


    Revision History

    v2.0.3 (6/6/06)
  • Incorporated reviewer comments. (Thanks Jared, Rich, Howard, Mark.)
    v2.0.1 (5/24/06)
  • Fixed some typos and grammos. ;-)
    v2.0.0 (5/23/06)
  • Complete rewrite. Major revision.
    v0.4 (5/5/05)
  • Fixed some typos.
    v0.3 (5/4/05)
  • Clarified dithering procedure: one must (un)click the Lock button in GuideDog before slewing the mount. (Thanks Anthony.)
  • Added a Background section on Lights, Darks, Offsets, and Flats.
    v0.2 (5/4/05)
  • Incorporated comments from initial reviewers. (Thanks Jared, Jim, and Rich.)
    v0.1 (5/3/05)
  • Initial release.