I have a long-standing fascination with metrology, the science of measuring things.* One of my favorite classes as an undergraduate was on chemical instrumentation and computers, where a major topic of discussion was how you take some chemical or physical property and change that into a voltage. That voltage is then converted to a digital format with an analog-to-digital converter.
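The voltage-to-digital step can be sketched in a few lines. This is a toy model, not any particular instrument: the 0–5 V input range and 12-bit depth are invented for illustration.

```python
# Toy sketch of analog-to-digital conversion: a 12-bit ADC with a 0-5 V
# input range maps a voltage onto one of 4096 integer "counts".
# The voltage range and bit depth are illustrative assumptions.

def adc_counts(voltage, v_min=0.0, v_max=5.0, bits=12):
    """Quantize a voltage into an integer count."""
    levels = 2 ** bits
    # Clamp to the input range, then scale to [0, levels - 1].
    v = min(max(voltage, v_min), v_max)
    frac = (v - v_min) / (v_max - v_min)
    return min(int(frac * levels), levels - 1)

print(adc_counts(0.0))   # 0
print(adc_counts(2.5))   # 2048
print(adc_counts(5.0))   # 4095
```

Everything downstream, whether in a camera or a satellite instrument, works with these counts rather than the original voltage.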
Think for a moment about a digital camera: how does that work? Light goes in, and when you press the button it saves an image file, but what is happening in between those steps?
Inside the camera, light hits a photosensitive semiconductor; charges are separated and accumulated, and then, through electronics and circuitry too complicated for this post, converted into voltages. There's another little tricky thing, though: your camera usually has four types of photosensitive semiconductors for each pixel. One is sensitive to red light, another green, another blue, and the fourth is panchromatic, sensitive to all colors of visible light.
Satellite imagery can be fascinating, and is often freely available if you can figure out where to find it (free registration may be required). However, unlike getting pictures from your digital camera, when you go directly to the source some additional work may be required to turn the raw data into the images you're looking for. What needs to be done is determined by the instrument and the type of imagery you want.**
For my purposes, I’m generally interested in true-color imagery (or something reasonably close to true-color) of scenes on Earth. Terra MODIS takes images of most areas every day, but the resolution is only 250 m/px at its best. Other satellites, such as NASA’s EO-1, have instruments with better resolution, but they cover much less area—it may be a few days or weeks between images of a given spot.
Today I am interested in images from the Advanced Land Imager (ALI) on EO-1, which can be found through Earth Explorer. I’ve posted images from ALI in the past, which is how I know the images I want are ones I should be able to make.
Earth Explorer has an option to download the data as a GeoTIFF, which imports easily into QGIS. Using the layering features in QGIS, the 630–690 nm band (Band 5, red) can be made to grade from black to red, the 525–605 nm band (Band 4, green) to grade from black to green additively on top of the red layer, and the 433–453 nm band (Band 2, blue) to grade from black to blue on top of the other two layers. Now we have a composite RGB image.
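The layering described above amounts to stacking the three bands as the red, green, and blue channels of a single image. A minimal sketch of that idea (the tiny arrays and their values are invented stand-ins; real ALI GeoTIFFs would be read with a library such as GDAL or rasterio, and the bands scaled to 0–255 first):

```python
# Sketch of building an RGB composite from three single-band images.
# Each "band" is a 2-D grid of brightness values already scaled to 0-255;
# these 2x2 arrays are made-up stand-ins for ALI Bands 5, 4, and 2.

red_band   = [[10, 200], [30,  90]]   # 630-690 nm (Band 5)
green_band = [[20, 180], [40, 100]]   # 525-605 nm (Band 4)
blue_band  = [[15, 160], [50, 110]]   # 433-453 nm (Band 2)

def compose_rgb(r, g, b):
    """Stack three bands into one image of (R, G, B) pixel tuples."""
    return [
        [(r[i][j], g[i][j], b[i][j]) for j in range(len(r[0]))]
        for i in range(len(r))
    ]

rgb = compose_rgb(red_band, green_band, blue_band)
print(rgb[0][1])  # (200, 180, 160)
```

Grading each layer from black to a pure primary and combining them additively, as QGIS does, is exactly this channel-stacking operation.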
There’s a problem (sort of) with this RGB image, and you’ll see it quickly if you do your processing on the Heard Island imagery from April 20, 2013 shown above: the resolution of your image isn’t as high as NASA’s version.
In this case, the three layers used for the composite image have a resolution of 30 m/px. The NASA image, though, has a resolution of 10 m/px. Where does this higher resolution come from?
Remember how I mentioned that digital cameras have a fourth sensor, sensitive to panchromatic light? Well, ALI also has a panchromatic band (Band 1), with a resolution of 10 m/px.
In order to merge the color layers and the panchromatic layer, the color image needs to be scaled up by a factor of 3 in each dimension, making 3×3 pixel areas of the same color. Then some not-that-complicated steps (which I have yet to fully figure out) are needed to adjust the lightness of those pixels—but not the hue—to match the higher-resolution panchromatic image.
* Also meteorology, the science of weather.
** The same holds true for imagery from other instruments or spacecraft, be it New Horizons, the Curiosity rover on Mars, or the Solar Dynamics Observatory.