Astrophotography With The 8MP Raspberry Pi Cameras
You know… there is something that I don’t have the time to try myself, but I think it should work. So someone should take my idea and implement it. 🙂
With astrophotography, there is always the problem that the stars tend to move during your (long) exposure.
The standard trick is to make your camera move along with the rotation of the earth (or actually, to counter the rotation of the earth).
My plan is to simply take, say, 1 s exposures, but then, say, 300 of them to get a combined 5 minute exposure. Then in software you KNOW that each image is rotated by 1/240th of a degree relative to the previous one: the sky turns 360 degrees per day, which works out to 1/240th of a degree per second.
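(For the record, that 1/240th of a degree follows straight from the length of the day. A tiny Python check, ignoring the small difference between the solar and the sidereal day:)

```python
# The sky turns a full circle per day, so each 1 s sub-exposure is rotated
# by a small, exactly known angle relative to the previous one.
SECONDS_PER_DAY = 24 * 60 * 60        # solar day; the sidereal day is ~86164 s
DEG_PER_SECOND = 360.0 / SECONDS_PER_DAY

print(DEG_PER_SECOND)                 # 0.00416... = 1/240 degree per second
print(300 * DEG_PER_SECOND)           # 1.25 degrees over the 5 minute stack
```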
It’s easiest if you start out by telling the software where the rotation point is, but you can also optimize for the rotation point itself (to sub-pixel accuracy) once you have a few hundred images.
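A minimal sketch of how that optimization could look, assuming the sub-exposures are already loaded as a list of 2-D grayscale numpy arrays called `frames`; the function names, the sharpness criterion and the sign of the rotation are my own illustrative choices, not an existing tool:

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

DEG_PER_FRAME = 1.0 / 240.0   # one-second sub-exposures, see above

def rotate_about(frame, angle_deg, centre):
    """Rotate `frame` by angle_deg about `centre` = (row, col)."""
    phi = np.deg2rad(angle_deg)
    # affine_transform maps output coords o to input coords i = m @ o + offset,
    # so to rotate the content by +phi we sample the input with R(-phi).
    m = np.array([[np.cos(phi),  np.sin(phi)],
                  [-np.sin(phi), np.cos(phi)]])
    centre = np.asarray(centre, dtype=float)
    offset = centre - m @ centre
    return affine_transform(frame, m, offset=offset, order=1)

def stack_about(centre, frames):
    """Undo each frame's known rotation about `centre` and add them up.
    (Flip the sign of the angle if the sky turns the other way in your setup.)"""
    return sum(rotate_about(f, -k * DEG_PER_FRAME, centre)
               for k, f in enumerate(frames))

def misalignment(centre, frames):
    """A wrong centre smears every star into a short arc, which lowers
    the sum of squared pixel values of the stack."""
    return -np.sum(stack_about(centre, frames) ** 2)

def find_rotation_centre(frames, initial_guess):
    """Refine a rough, user-supplied rotation point to sub-pixel accuracy."""
    result = minimize(misalignment, np.asarray(initial_guess, dtype=float),
                      args=(frames,), method="Nelder-Mead")
    return result.x
```

(Every evaluation re-stacks all the frames, so in practice you’d probably run this on a small crop or a subset of the images.)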
Next, instead of using a canvas of, say, 8 Mpixels (the resolution of the sensor), you use a 128 Mpixel canvas: each sensor pixel is blown up 4x in each direction. But now, after rotation by those tiny amounts, each sensor pixel overlaps a slightly different set of about 16 (4x4) pixels in the canvas from one image to the next.
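Continuing the sketch above, the big-canvas accumulation could look roughly like this; the flux of every sensor pixel is simply spread evenly over its 4x4 block of canvas pixels, and `frames`, `centre` and the 4x scale are just the example numbers from this post:

```python
# Rough sketch of the big-canvas accumulation, reusing rotate_about() and
# DEG_PER_FRAME from the previous snippet.  SCALE = 4 turns an 8 Mpixel
# sensor into a 128 Mpixel canvas (about 1 GB as 64-bit floats, so on a
# Pi you'd probably work on a crop).
import numpy as np

SCALE = 4

def accumulate(frames, centre):
    """Simple adding: blow each frame up 4x in each direction (spreading every
    sensor pixel's flux evenly over its 4x4 block), undo the frame's known
    rotation at canvas resolution, and sum."""
    canvas = None
    for k, frame in enumerate(frames):
        big = np.kron(frame, np.ones((SCALE, SCALE))) / SCALE**2
        big = rotate_about(big, -k * DEG_PER_FRAME,
                           (centre[0] * SCALE, centre[1] * SCALE))
        canvas = big if canvas is None else canvas + big
    return canvas
```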
Next, instead of simply adding the results together, you can run an optimization step (I’m explaining this in one dimension, but of course it is easy to implement for two). Suppose the REAL canvas has a single hot pixel, say at canvas position 103, with intensity 100. On the first image that light ends up in pixel 25 of the sensor, which covers canvas positions 100-103. Now suppose the sensor shifts by one canvas-pixel per image. The next few images still put the value in sensor pixel 25; only after four images does the measured value end up in sensor pixel 24, which by then covers canvas 100-103 again. So with the simple adding variant of the algorithm you’d spread the measured 100 out as 25-25-25-25 across canvas 100-103 for the first image, 101-104 for the second, 102-105 for the third and 103-106 for the fourth, after which the window wraps back to 100-103. Add 8 images this way and you get 50-100-150-200-150-100-50 in canvas locations 100-106: a triangle of image intensity centered around pixel 103.
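Here is a little 1-D toy (numbers chosen to match the example above: a hot pixel at canvas position 103 with intensity 100, a drift of one canvas pixel per image, eight images) that reproduces exactly that triangle:

```python
import numpy as np

CANVAS, SENSOR, SCALE, FRAMES = 512, 128, 4, 8
true_canvas = np.zeros(CANVAS)
true_canvas[103] = 100.0             # the single hot pixel

def measure(canvas, shift):
    """One exposure: every sensor pixel sums the 4 canvas pixels under it,
    with the whole sensor shifted by `shift` canvas pixels."""
    out = np.zeros(SENSOR)
    for p in range(SENSOR):
        lo = p * SCALE + shift
        if 0 <= lo <= CANVAS - SCALE:
            out[p] = canvas[lo:lo + SCALE].sum()
    return out

# Simple adding: spread every measured value evenly back over the 4 canvas
# pixels that sensor pixel covered during that exposure.
naive = np.zeros(CANVAS)
for k in range(FRAMES):
    m = measure(true_canvas, shift=k)
    for p in range(SENSOR):
        lo = p * SCALE + k
        if 0 <= lo <= CANVAS - SCALE:
            naive[lo:lo + SCALE] += m[p] / SCALE

print(naive[98:108])   # ... 50 100 150 200 150 100 50 ... centered on pixel 103
```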
But if we instead assume that canvas pixel 103 has intensity 100 and everything around it is dark, then every measurement is reproduced exactly. So we can sharpen the resulting image. This can be done by deconvolution or by an iterative algorithm.
So when we see pixel 25 with a value of 100 in the first image, we increase the values of canvas pixels 100-103; but when we measure 0 in pixel 24 of the second image, which maps to canvas pixels 97-100, we decrease the running guesstimate of canvas pixel 100 again. This way you should be able to reconstruct the original canvas, with only pixel 103 bright, much better than with the simple adding variant of the algorithm.
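Continuing the 1-D toy from above, the iterative variant could look like this: forward-project the current guess through the same measurement model, compare with what was actually measured, and push the difference back onto the four canvas pixels involved. The full-size correction step and the 200 sweeps are arbitrary choices of mine:

```python
# Iterative sharpening on the same toy data (reuses measure(), true_canvas
# and the constants from the previous snippet).
SWEEPS = 200
estimate = np.zeros(CANVAS)
measurements = [measure(true_canvas, shift=k) for k in range(FRAMES)]

for _ in range(SWEEPS):
    for k, meas in enumerate(measurements):
        predicted = measure(estimate, shift=k)
        for p in range(SENSOR):
            lo = p * SCALE + k
            if 0 <= lo <= CANVAS - SCALE:
                # Raise or lower the 4 canvas pixels this sensor pixel saw,
                # according to how far off the prediction was.
                estimate[lo:lo + SCALE] += (meas[p] - predicted[p]) / SCALE

print(estimate[98:108].round(1))   # sharpens up towards a single bright pixel
                                   # at position 103, much narrower than the
                                   # naive triangle (a little ripple remains)
```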
Some complications crop up in practice. If one second of exposure is sooo little light that the digitized image contains nothing but black pixels, then this of course won’t work: you need some signal and noise in the captured images for this to work. So turn up the ISO as far as possible without getting saturated pixels. The individual images will look like *** but the resulting canvas should be fantastic….
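For the capture side, something like this should get you the 300 one-second frames with the classic picamera library on the 8 MP Camera Module v2; the gain, framerate and JPEG output here are untested example values, not a verified recipe:

```python
import time
from fractions import Fraction
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (3280, 2464)       # full-resolution stills on the v2
    camera.framerate = Fraction(1, 2)      # slow framerate to allow a long shutter
    camera.shutter_speed = 1000000         # 1 s exposure, in microseconds
    camera.iso = 800                       # as high as it goes; avoid saturation
    time.sleep(2)                          # let the gains settle...
    camera.exposure_mode = 'off'           # ...then lock them so the frames match
    camera.awb_mode = 'off'
    camera.awb_gains = (1.0, 1.0)

    # 300 one-second frames -> a combined 5 minute exposure.
    camera.capture_sequence('frame%03d.jpg' % i for i in range(300))
```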