Digital cameras always produce a certain amount of noise in their sensor signal. Whether this noise is visible depends on how much the camera has to amplify the sensor data (the ISO value), which in turn depends on how much light enters the lens and how sensitive the sensor itself is. That's why cheap, small cameras produce more noise than expensive ones: expensive cameras have larger, more sensitive sensors and usually better optics.
Scenario 1: I want to shoot at night, I have plenty of time, but only a relatively cheap camera. The object I am photographing neither moves nor changes in any other way. I want a great image without noise.
Scenario 2: I have a great camera, but even when shooting at ISO 100 (Canon etc.) or 200 (Nikon), the image noise is visible because I only want to use a very small part of the tonal range (e.g. in foggy conditions), so the small amount of noise is heavily amplified.
Astrophotographers (and some other photographers) use a great technique to reduce noise: they take several photos of the same object, on a tripod, so position and perspective stay the same. Each of these shots contains a certain amount of usable information and a certain amount of unusable information (noise), added together into a noisy image. While the usable information stays the same across the shots, the noise varies with fairly good randomness. If you take the average value for each pixel across many of these shots, the usable information is preserved while the noise averages out (the average of random noise is some gray value). Then we normalize the image (to get rid of the noise-based gray offset) and we have a clean image. That's the idea.
In reality, of course, I can only get rid of some of the noise because I can't take an infinite number of shots. Averaging N shots reduces the random noise by a factor of about √N, so quadrupling the number of shots roughly halves the noise. There may also be some noise that is not perfectly random: some pixels of the camera sensor will tend to give noise that differs slightly from that of others. But it still improves the image a lot compared to just one exposure.
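The √N behavior is easy to see without a camera. The following sketch (plain awk, with simulated noise values rather than real sensor data) averages groups of N random samples and prints how the spread shrinks:

```shell
# Simulate zero-mean noise and measure how averaging N "shots" shrinks it.
# Pure awk, so it runs anywhere; the numbers are illustrative only.
awk 'BEGIN {
  srand(42)
  for (N = 1; N <= 64; N *= 4) {
    sumsq = 0; trials = 2000
    for (t = 0; t < trials; t++) {
      s = 0
      for (i = 0; i < N; i++) s += rand() - 0.5   # one noisy pixel reading
      avg = s / N                                  # value after stacking N shots
      sumsq += avg * avg
    }
    printf "N=%2d  noise stddev = %.4f\n", N, sqrt(sumsq / trials)
  }
}'
```

Each factor-of-four increase in N roughly halves the printed standard deviation.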
Now to the software part. I'm a Blender enthusiast, so I first tried stacking inside the Blender Compositor. It works, but it's a lot of work adding image inputs for all the images and mixing everything together, and in the end my 4 GB of RAM weren't enough. Not even close. So I googled and found ALE, short for Anti-Lamenessing Engine (what a name), by David Hilvert. It can do a lot I don't really understand, but it can also stack images to reduce noise. It is available as a Debian package; I just installed it from the Software Center.
It does the stacking amazingly well, relatively fast and with minimal RAM usage. Since it works from the command line, I wrote a short script, basically trying to imitate what is done in the examples on the official ALE website. You will also need ImageMagick (also available in the Software Center) for handling the image files. Take a look:
mogrify -resize 2048 *.tif
echo "resized to 2048px for faster calculation. Change value for bigger images (takes a lot longer)"
mogrify -format ppm *.tif
echo "formatted as .ppm files"
ale --md 64 *.ppm stacked.png
echo "stacked 64 files"
convert -normalize stacked.png stacked.norm.png
echo "final image is: stacked.norm.png"
CAUTION: This destroys the original files. Run it in a separate directory containing duplicates of your original images.
It’s really only 4 lines of code and some useless comments between them.
What it does:
- It scales the images. This is not necessary, but it helps a lot, because with large images the algorithm takes forever. Also, although my camera produces 9-megapixel images, I'm really happy with getting a sharp image at my full screen resolution; the lens and sensor won't give a lot more sharpness than those 1.3 megapixels anyway. For better results and for use with better cameras, increase this value.
- It converts the image files into .ppm files. That's simply because ALE wants .ppm input. If you find a way to convert your camera raws into .ppm files, that would be even better. Be careful: the file pattern is case-sensitive (the script above uses .tif), so check whether your camera gives you, say, .jpg files instead of .JPG files, and change the pattern if necessary.
- It stacks the images. This can take some time – about 30 seconds per Megapixel of input on my system.
- It normalizes the result. This should only take a fraction of a second.
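Regarding raw files: dcraw (if you have it installed; it is in the usual repositories) can do the raw-to-ppm step directly, since it writes a .ppm next to each raw file by default. The .CR2 extension below is just an example; use whatever your camera produces:

```shell
# Sketch: convert raw files straight to .ppm for ALE.
# dcraw writes file.ppm next to each file.CR2 by default.
for f in *.CR2; do
  dcraw -w "$f"   # -w: use the white balance recorded by the camera
done
```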
Any suggestions on improvements and more detailed explanations on ALE are very welcome.
Additional options (from ale --hu):
- --8bpc gives 8 bits per channel output (16 is the default)
- --translation only adjusts the position of images
- --euclidean adjusts position and orientation of images (default)
- --projective uses projective transformations (slow)
- --follow aligns to the previous file (default)
- --identity aligns to the original (first) file
- --fail-default frames beneath the match threshold keep their alignment (default: optimal alignment)
- --threshold=x minimum percentage for a match (x=-1 disables the check; default)
- --cache x image data cache size in megabytes (x=256 is the default)
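For example, a hypothetical run combining a few of these options (same filenames as in the script above) could look like this:

```shell
# 8-bit output, translation-only alignment (faster), larger cache:
ale --8bpc --translation --cache 512 *.ppm stacked.png
```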
Do you have some comparison screenshots or images for this tutorial? This is interesting nonetheless.
Thanks. I will add a few examples, if that helps.
Interesting! When I tried this, however, the result of the ‘ale’ command was a ‘ppm’ file:
netpbm PPM “rawbits” image data
Also, do you know how this compares to using ‘DeepSkyStacker’, especially considering that no dark/flat/bias/etc frames are used as input? From what I can tell, one would need additional steps (GIMP?) to make an attractive final image (level, etc).
If you get a .ppm file, you probably have to convert it back to PNG with
mogrify -format png *.ppm
and then continue with the normalizing step.
And yes, an attractive result will need some postprocessing. Compared to DeepSkyStacker, this is just a simpler way to do it, and it runs on Linux without Wine.
I use ALE for all my image stacking. Very nice program. Thank you David Hilvert.