My home page
Sunday, 12 April 2015
BBC Programme Stargazing Live on 20 March 2015. No telescopes were involved.
The full image (click this link) is 40 MiB, which is rather high-resolution for a cellphone. So how was it done?
Back in the age of Carl Sagan and Patrick Moore astronomers would point a telescope at a patch of sky and open a camera shutter. The telescope would be on an equatorial mount driven by a clock, so the star images would stay in the same place on the photographic film as the Earth rotated under the sky. Stars, and even worse nebulae such as those depicted above, don't give off a lot of light at this distance, so they waited for hours before closing the shutter. Sometimes they would even come back the next night and the next for weeks, opening the shutter each time to catch a few more photons. The result was an average - enough light had hit the film to form an image, but it had all been smeared out by atmospheric interference and any slight shake of the telescope.
Then Richard Gregory made the exquisitely beautiful invention of photographing the heavens through a previous negative of the same starfield. A moment's thought shows that this will tend to cancel out the random noise, while emphasising the true data. The trick can be repeated as often as desired. (My daughter and I once went to a public lecture by Gregory at Bristol University. He expounded more wonderful ideas in an hour than most people have in their entire lives. And just think of the skill required to hold both an eight-year-old child and a forty-one-year-old engineer and mathematician equally entranced by those ideas.)
However, in contrast to the original smeary photographs, and to Gregory's, the picture above was exposed for perhaps one fiftieth of a second. And it was not taken by just one cellphone - hundreds of viewers of the programme had gone out and taken a picture of the same patch of sky with their phones and cameras, and had then uploaded them to the programme's website. Each individual picture was just a black rectangle - not enough starlight had gone through the lens to make an image that could be seen. But some had gone through, and registered in the camera's pixels as a slightly less-dark patch of black.
All the images were then stacked. A computer first matched them up by making sure that the centres of the prominent stars were all in the same place, and then added up the slightly-less-black bits to make the picture. Of course the pixels in the various cameras did not all fall in the same places relative to the stars, which means that each camera pixel could be split across thousands of final-image pixels - and that is what gives the fabulous resolution, a tiny bit of which you see above.
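As a rough sketch - not the programme's actual pipeline, which would interpolate to sub-pixel accuracy and handle rotation and scale - the align-on-a-star-then-sum idea might look like this in Python with NumPy (the `centroid` and `stack` names are my own):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid of the frame, used here as a crude
    proxy for the position of a single dominant reference star."""
    ys, xs = np.indices(img.shape)
    w = img - img.min()
    total = w.sum()
    return (ys * w).sum() / total, (xs * w).sum() / total

def stack(frames):
    """Shift each frame so its star centroid lands on the first frame's
    centroid, then sum. Whole-pixel shifts only, for simplicity."""
    ref = centroid(frames[0])
    acc = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        cy, cx = centroid(f)
        dy = int(round(ref[0] - cy))
        dx = int(round(ref[1] - cx))
        acc += np.roll(np.roll(f, dy, axis=0), dx, axis=1)
    return acc
```

With five toy frames, each containing the same bright "star" displaced by a pixel or two, the stacked output has that star back in one place, five times brighter.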
Long-exposure averaging loses information as the cost of getting enough light. Stacking preserves and integrates the information, and gets enough light by - effectively - having a camera aperture the size of all the camera lenses added together. And all those lenses see different atmospheric interference, have different lens errors, and have different noise patterns in their phones' image sensors, so those can all be compensated for as well.
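The reason stacking wins can be shown with a toy simulation (my own illustration, not anything from the programme): a constant faint signal added n times grows n-fold, while independent sensor noise adds in quadrature and grows only sqrt(n)-fold, so the signal-to-noise ratio improves as sqrt(n).

```python
import numpy as np

# A faint constant "star" buried in per-exposure sensor noise.
rng = np.random.default_rng(0)
SIGNAL, SIGMA, PIXELS = 0.1, 1.0, 100_000

def stacked_snr(n):
    """Signal-to-noise ratio after summing n independent exposures.
    All pixels carry the same signal, so the measured standard
    deviation across pixels is the noise alone."""
    exposures = SIGNAL + SIGMA * rng.standard_normal((n, PIXELS))
    stacked = exposures.sum(axis=0)          # stack n exposures pixel-wise
    return stacked.mean() / stacked.std()    # measured signal / noise
```

Summing 100 exposures instead of 1 should improve the measured SNR by roughly a factor of sqrt(100) = 10.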
The human race is a species on which the stars never set. So let's make the Human Telescope. Set up a website to which anyone anywhere in the world can upload any sky images that they have taken with any digital camera, phone or telescope. The images will have a timestamp and a GPS location, and will be continually stacked by a computer in the background to give an exquisitely detailed evolving picture of the whole vault of the heavens.
The world would become a great spherical insect eye looking at every star, galaxy, planet and nebula all the time. We would be automatically finding comets, supernovae and near-Earth asteroids. We would never miss an astronomical trick.