Understanding Sensors and Exposure



 

Sensors, exposure, and calibration are inextricably linked. It is impossible to explain one of these without referencing the others.

Electronic sensors are the enabler for modern astrophotography; without them, it would be a very different hobby.

Putting the fancy optics and mounts to one side for a moment, it is a full understanding of the sensor and how it works (or not) that shapes every imaging session.

We know that astrophotographers take many exposures, but two key questions remain: how many, and for how long? The answer is not a simple meter reading, as it would be in conventional film photography.

 

The Science of Camera Sensors.

Video by Filmmaker IQ

 

Each individual session has a unique combination of conditions, object, optics, sensor and filtering. Each requires a unique exposure plan. A list of instructions without any explanation is not useful.

It is more valuable to discuss exposure after we understand how sensors work, the nature of light and how to make up for our system’s deficiencies.

Some of that involves the process of calibration, which we will touch upon here, although a full treatment is beyond the scope of this article.

The discussion will get a little technical, but it is essential for a better understanding of what we are doing and why.

 

Sensor noise.

 

Both CMOS and CCD sensors convert photons into an electrical charge on the individual photosites and then use complicated electronics to convert the accumulated electrical charge into a digital value that can be read by a computer.

Each part of the process is imperfect and each imperfection affects our image quality.

 

What is Image Noise?

Video by Wex Photo Video

 

With care, however, we can control these imperfections to acceptable levels. Working systematically from input to output, we have incident light in the form of light pollution and the light from a distant object passing through the telescope optics.

The light fall-off from the optics and the dust on optical surfaces will shade some pixels more than others. The photons that strike the sensor are converted and accumulated as electrons at each photosite.

It is not a 1:1 conversion; it depends on the absorption of the photons and their ability to generate free electrons. (The conversion rate is referred to as the quantum efficiency, or QE.)
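As a back-of-the-envelope sketch (with illustrative numbers, not measured values for any particular sensor), the photon-to-electron conversion is simply a multiplication by the quantum efficiency:

```python
def detected_electrons(incident_photons, quantum_efficiency):
    # quantum_efficiency is the fraction (0 to 1) of incident photons
    # that are absorbed and generate a free electron at a photosite
    return incident_photons * quantum_efficiency

# A sensor with 60% QE receiving 1000 photons collects about 600 electrons.
print(detected_electrons(1000, 0.60))  # 600.0
```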

 

Why larger sensors – Less Noise.

Video by Michael The Maven

 

During the exposure, electrons are also being randomly generated thermally; double the time, double the effect. Since it occurs in the dark, astronomers call it dark current.

These electrons are accumulated along with those triggered by the incident photons. The average dark current is also dependent on the sensor temperature and approximately doubles for each 7°C rise. (By the way, you will often see electrons and charge discussed interchangeably in texts.

There is no mystery here; every electron carries the same fixed charge: 1.6 × 10⁻¹⁹ coulombs, to be exact).
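The doubling rule above is easy to sketch numerically. This is a simple model with hypothetical values, not a datasheet figure for any real sensor:

```python
def dark_current(base_rate, base_temp_c, temp_c, doubling_deg_c=7.0):
    # Dark current roughly doubles for each doubling_deg_c rise
    # in sensor temperature above the reference measurement.
    return base_rate * 2 ** ((temp_c - base_temp_c) / doubling_deg_c)

# A hypothetical 0.1 e-/pixel/s at 0 degC becomes about 0.2 at 7 degC
# and 0.4 at 14 degC, which is why astro cameras are often cooled.
```

Since the rate is constant at a fixed temperature, doubling the exposure time also doubles the accumulated dark signal.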

 

Signals, Noise and Calibration.

 

So what is noise? At its simplest level, noise is the unwanted information that we receive in addition to the important information (signal). In astrophotography, noise originates from several electronic sources and from light itself.

For our purposes, the signals in astrophotography are the photons from the deep space object that are turned into electrical charge in the sensor photosites.

Practically, astrophotography concerns itself with all sources of signal error. These are broadly categorized into random and constant (or consistent) errors.

So long as we can define the consistent errors in an image, they are dealt with fairly easily. Random errors are more troublesome: image processing inevitably involves extreme stretching of the image tones to reveal faint details.

The process of stretching exaggerates the differences between neighboring pixels and even a small amount of randomness in the original image appears objectionably blotchy after image processing.

The random noise from separate light or thermal sources cannot simply be added, but their powers (variances) can; independent noise sources combine in quadrature.
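Adding "powers" means the noise sources combine in quadrature; a minimal sketch:

```python
import math

def combined_noise(*sigmas):
    # Independent noise sources add as the square root of the
    # sum of their squares (their variances add linearly).
    return math.sqrt(sum(s * s for s in sigmas))

# 3 electrons of read noise and 4 electrons of shot noise
# combine to 5 electrons in total, not 7.
print(combined_noise(3, 4))  # 5.0
```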

Dealing with unwanted errors involves just two processes: calibration and exposure. Calibration deals with consistent errors, and exposure is the key to reducing random error.
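This is why exposure (more precisely, total integration time) tames random error: when N sub-exposures are averaged, the signal grows N times faster than the random noise, so the signal-to-noise ratio improves by the square root of N. A small sketch:

```python
import math

def stacked_snr(single_frame_snr, n_frames):
    # Signal scales with n_frames; random noise scales with
    # sqrt(n_frames); their ratio improves by sqrt(n_frames).
    return single_frame_snr * math.sqrt(n_frames)

# Stacking 16 subs doubles the SNR twice over: 5 becomes 20.
print(stacked_snr(5, 16))  # 20.0
```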

For now, calibration is a process which measures the mean or consistent errors in a signal and removes their effect. These errors are corrected by subtracting an offset and adjusting the gain.

Since no two pixels on a sensor are precisely the same, the process applies an offset and gain adjustment to each individual pixel.

The gain adjustment not only corrects for tiny inconsistencies between pixels' quantum efficiencies and amplifier gains but usefully corrects for light fall-off at the corners of an image due to the optical system, as well as dark spots created by the shadow of a dust particle on an optical surface.

Briefly, the calibration process starts by measuring your system and then during the processing stage, applies corrections to each individual exposure.
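In outline (a simplified sketch; real calibration pipelines also handle bias levels, exposure scaling and outlier rejection), the per-pixel correction is a subtraction and a division:

```python
import numpy as np

def calibrate_frame(light, master_dark, master_flat):
    # Subtract the mean offset error (master dark), then divide by
    # the per-pixel gain map (master flat, normalized to a mean of 1).
    gain = master_flat / master_flat.mean()
    return (light - master_dark) / gain
```

The master dark and master flat are themselves averages of many individual frames, which suppresses their own random noise before they are applied.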

These calibrations are named after the exposure types that measure them: darks, bias and flats. Unfortunately, these very names give the impression that they remove all the problems associated with dark noise, read noise and non-uniform gain.

They do not. So to repeat, calibration only removes the constant (or mean) error in a system and does nothing to fix the random errors.

Calibration leaves behind the random noise. To establish the calibration values, we find the mean offset error and gain adjustment for each pixel, and averaging many calibration frames employs the same principle to reduce the random noise within them.

 

Auto guiding and tracking.

 

One way or another, successful imaging requires the telescope to track the star's motion, over the duration of each exposure, to an accuracy of about 1 part in 8,000 of the accumulated sidereal motion (roughly ±1 arc second RMS over a 10-minute exposure).

This is achieved through precise tracking models and/or autoguiding. Autoguiding issues reappear frequently on the forums and it is easy to see why; it is a complex dynamic interaction of image acquisition and mechanics, both of which differ from one user to another and even between imaging sessions.

One of the frustrating aspects is that autoguiding can perform well one night and play up on another, without any apparent reason. To understand why, and what can be done about it, we need to understand what is happening.

In a perfect system, there is already a lot going on and when you add in all the error mechanisms, it is a wonder that autoguiding works at all.

 

Sky-Watcher Star Adventurer – Basic Overview.

Video by Peter Zelinka

 

Mount manufacturers have started to address the issue with closed-loop position feedback systems, using a computer model to track stars with both RA and DEC motors.

This technology has already appeared in high-end mounts and is becoming available at lower price points too.

Let us start with autoguiding and understand what is meant to happen. We can then add in all the real-world effects, see how they affect performance and develop some coping strategies.

The first question should be, do we need autoguiding in the first place? After careful polar alignment and using a mount with no periodic error (say less than 1 arc second) do you need autoguiding? Well, maybe.

There are a few mounts with shaft encoders that achieve amazingly low PE but for the rest of us PE is typically something in the range of ±4 to ±15 arc seconds peak to peak over a gear cycle.

(Even the best periodic error correction will not entirely remove this.) Theoretically, a mount can be perfectly aligned to the celestial pole, but this is time-consuming (especially in a portable setup), and a slight movement or sag in the mount or its support can ruin the alignment. At a declination of 30°, if a tripod leg sinks by 0.5 mm, the misalignment will cause a star to drift by about 4 arc seconds during a 10-minute exposure.

If we want to be picky, at low declinations standard tracking does not take into account increasing atmospheric refraction that affects a star’s apparent position.

Image stacking can account for slight shifts between exposures, but within a single long exposure refraction has a further effect: for a 10-minute exposure at a declination of 30°, increasing atmospheric refraction creates a tracking error of about 7 arc seconds.

Similar considerations apply to any optical flexure, especially those in moveable mirror systems.

The outcome therefore is that some form of autoguiding is needed in most situations, either to make up for alignment or mount mechanical errors or as a safety net in case there is some unexpected event that shifts the image.

 

We recommend the Sky-Watcher S20520.

 


 

Guider calibration.

 

The calibration process calculates how star movement on the sensor relates to angular movement in RA and DEC.

The guide camera has a certain pixel pitch and a certain orientation (angle) relative to the RA and DEC axes. If the guide camera is attached to an off-axis guider, or imaged via a mirror, the image will be flipped too.

Guiding commands are specified in terms of duration and guiding rate (specified as a fraction of the sidereal tracking rate). Calibration determines the pixels per second movement at the guiding rate and the sensor orientation.
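A minimal sketch of that geometry (the function names are hypothetical; real guiding software also measures backlash, handles flipped images and averages several calibration moves):

```python
import math

def calibrate_axis(dx_px, dy_px, pulse_s):
    # After a calibration pulse of pulse_s seconds along one mount axis,
    # the guide star moved (dx_px, dy_px) pixels on the sensor.
    # Returns pixels/second at the guide rate and the camera angle.
    rate = math.hypot(dx_px, dy_px) / pulse_s
    angle = math.atan2(dy_px, dx_px)
    return rate, angle

def correction_pulses(err_x_px, err_y_px, rate, angle):
    # Rotate the star's pixel error into the RA/DEC frame,
    # then convert it to guide-pulse durations in seconds.
    ra_px = err_x_px * math.cos(angle) + err_y_px * math.sin(angle)
    dec_px = -err_x_px * math.sin(angle) + err_y_px * math.cos(angle)
    return ra_px / rate, dec_px / rate
```

With the rate and angle known, any subsequent star displacement in pixels can be converted directly into correction pulse durations.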

 

Post exposure.

 

The glamorous part of astrophotography is the bit we see and touch. An enormous amount of energy and attention (and not a little money) is devoted to cameras, telescopes, mounts and their location.

It is always enjoyable to compare equipment, problem-solve system issues and capture images. Just like a photographic negative, these fuzzy frames are only half the story.

Indeed, astrophotography has a great deal in common with traditional monochrome photography; it is easy to become diverted by camera and lens choice and ignore the magic that goes on in the darkroom, which is in effect a world apart.

 

After all, it is the end result that is appreciated.

 

On reflection, these two hobbies are very similar indeed: Just as the darkroom techniques transform a plain negative into a glowing print, so the journey begins to transform our deep space exposures into a thing of beauty.

There is no single interpretation of a negative that is “right” and the same is true of deep space images. These are distorted in color and tonality in ways that purely satisfy an aesthetic requirement or serve scientific analysis.

In both hobbies, the steps taken to enhance an image require technical knowledge applied with artistic sensitivity. There is seldom a fix for a poorly executed negative and it is easy to spend a whole day in the darkroom perfecting a print.

The demands of image processing in astrophotography deserve no less. It takes many hours of patient experimentation to become proficient at image processing. As our skills will undoubtedly improve over time, an archive of our original files gives the opportunity to try again with better tools and techniques.

There are many ways to achieve a certain look on a print (beyond the basics) and the same is true with image processing. After a little research you quickly realize there is no single right way.

In many cases, the image dictates what will work or not. In the end, the proof of the pudding is in the eating.

In fine art circles, monochrome photographers practice exposure, development and printing controls, carefully translating subject tonalities to the print.

Some of those concepts are relevant to astrophotography too but the analogy is wearing out. Certainly, the most successful images apply different adjustments to the highlights, mid tones and shadows and importantly distinguish between subtleties in nebulous areas and true deep space nothingness.

 

What makes a good astrophotograph.

 

It is a good question and perhaps one that should have been addressed at the very beginning of the article. Art is certainly in the eye of the beholder and although astrophotography is essentially record-taking, there is still room for interpretation to turn multiple sub-exposures into photographic art.

These include both technical and aesthetic attributes. Most can agree on some general guidelines, but it is important to note that more original interpretations that break the rules can also work pictorially.

 

How to take great astrophotography and night sky pictures.

Video by Canon Australia

 

A good part of photography is knowing what you want to achieve before you press the button. It certainly is the discipline that was adopted by photographers in the last century.

They had no other choice; with roll film or sheet film and no Photoshop to correct for errors, the photographic artist had to be very particular about the craft of exposing; the framing, lighting, focus, filtration and exposure had to be just right.

That was before they got into the darkroom to develop and print the negative.

As an aside, although digital cameras have made astrophotography what it is today, I believe their immediacy and the ability of image manipulation to correct mistakes, encourages a culture to neglect the craft of composition and exposure. I feel something has been lost.

 

Technical considerations.

 

The technical aspects are probably the easiest to cover as there is less room for interpretation.

If we first consider stars, they should be tightly focused and round, all the way into the corners of the image. Stars come in different colors, from red through to blue, and a well-exposed and processed image should retain star color.

Bright stars always appear larger in an image and the exposures required to reveal faint nebulosity often render bright stars as a diffuse white blob.

Poor image processing will cause further star bloat and wash out the color in stars of lesser magnitude. As we know what a star should look like, a star image ruthlessly reveals poor focusing, tracking and optical aberrations.

The quality of a star's image also reveals any image registration issues between sub-exposures or RGB frames. Although there are some processing techniques that reduce star bloat and elongation, they do not cure the problem. It is always preferable to avoid these issues in the first place.

The sky background is another area of image presentation with a general consensus on best practice. It should be neutral and very dark grey but not black.

Ignoring nebulosity for the moment, it should be evenly illuminated throughout and also have low noise. By now we know that image processing increases the visual appearance of noise in the darker areas.

There are some very clever algorithms that can minimize noise, but if they are taken too far they create a plastic look. There is a degree of subjectivity here; just like film grain, a little noise adds a touch of reality to images.

(At the same time there is a steady demand for film emulation plug-ins for Photoshop that add grain-like noise to an otherwise smooth digital image.)

The “right amount” is something that can only be determined by the display medium, scale and your own viewpoint.

In addition, green is not a color that appears naturally in deep space and should be removed from images. (The exception to this is false color-mapping of narrowband imaging to red, green and blue channels.)

Sharpness, resolution and contrast are interrelated in a complex tangle of visual trickery. A high contrast image can give the appearance of sharpness and conversely a high resolution image may not look sharp.

For a long while this was a running debate between small-format film and digital photographers. Fine-grain monochrome film has over 3x the spatial resolution of a 12-megapixel DSLR, yet the digital images look “sharper”.

The sharpening tools in general imaging programs are not optimized for astrophotography; a well-processed image needs careful sharpening at different scales to tighten stars without creating “Panda eyes” as well as to emphasize structures within galaxies and nebulosity without creating other unwanted artifacts.

(Sometimes gas structures are also enhanced by assigning narrowband images to complementary colors.)

Image resolution is most often limited by seeing conditions and imaging technique rather than optics.

Good image processing makes the most of what you have, with local contrast enhancement techniques and, in the case of undersampled images, drizzle techniques that actually increase spatial resolution.

The trick to successful imaging is to trade off sharpness against noise and resolution, arriving at an outcome that does not shout “look at me, I have been manipulated”. It is easier said than done.

 

Related questions.

 

  1. What is ISO?

In digital photography, ISO sets the gain applied to the sensor's signal; the sensor's underlying sensitivity is fixed. The same convention applies as in film photography: the lower the number, the less sensitive the camera is to light and the finer the grain (or, digitally, the lower the amplified noise). By choosing a higher ISO you can use a faster shutter speed to freeze movement.
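That trade-off follows a simple reciprocity at a fixed aperture, sketched here:

```python
def equivalent_shutter(shutter_s, iso, new_iso):
    # For the same overall exposure, the shutter time scales
    # inversely with ISO: each doubling of ISO buys one stop.
    return shutter_s * iso / new_iso

# 1/30 s at ISO 100 is matched by 1/120 s at ISO 400.
print(equivalent_shutter(1 / 30, 100, 400))
```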
