Category Archives: Science Miscellany

Mapping the Eclipse for a Citizen Science Project

Map of the continental United States showing the amateur radio grids and path of the eclipse. Image credit: Bill Mitchell (CC-BY).

During the solar eclipse next week, I will be at the Science Museum of Minnesota with a citizen science project studying the effects of the eclipse on radio propagation. While there are many radio-related projects going on—the most accessible being a study of AM radio reception—I will be using amateur radio to make contacts and provide reception reports during the eclipse. One of the important pieces of information that will be exchanged with other amateur stations is a “grid”, which is a shorthand for rough latitude and longitude.

Amateur radio grids are 2° of longitude by 1° of latitude, and are designated by pairs of letters and numbers. For instance, the Science Museum of Minnesota is located in EN34. Fields (20°x10°) are designated with letters, increasing from the field whose southwest corner is at 180° W, 90° S (AA) to the one at 160° E, 80° N (RR). Fields are further subdivided into grids using numbers, which increase from 00 at the southwest corner to 99 at the northeast. Looking again at our example, the first character, E, indicates a location between 100° and 80° W longitude, and N indicates a location between 40° and 50° N latitude. The numbers refine that range: the 3 means the longitude is 6°–8° east of the west edge of the field (i.e. 94°–92° W), and the 4 after it means the latitude is 4°–5° north of the south edge of the field (i.e. 44°–45° N). Further letters (A–X) and numbers can specify locations more precisely in the same fashion. Longitude is always given first and increases west to east; latitude is given second, increasing south to north.
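This letter/number scheme is straightforward to compute. Here is a minimal sketch in Python (the Science Museum coordinates below are approximate, and the function handles only the four-character field-plus-grid form):

```python
def grid_square(lon, lat):
    """Return the 4-character amateur radio grid (field + grid number)."""
    lon += 180.0  # Shift so longitude runs 0-360, increasing west to east
    lat += 90.0   # Shift so latitude runs 0-180, increasing south to north
    field = chr(ord("A") + int(lon // 20)) + chr(ord("A") + int(lat // 10))
    grid = str(int((lon % 20) // 2)) + str(int(lat % 10))
    return field + grid

# Approximate coordinates of the Science Museum of Minnesota
print(grid_square(-93.1, 44.94))  # EN34
```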

For the event, I want to have a map of the continental US and southern Canada with the grids outlined on it. During the event as we hear which grid other stations are in, we can mark their location on the map. Unfortunately, I was not able to find a map that I wanted to use for this purpose, so I decided to make my own with QGIS.

For my eclipse map, I needed to gather a few datasets. First and foremost, I needed a US state map. Canadian provinces were also a high priority. Once I had those, I was still missing Mexico and other North American areas, so I found a world map as well. That covered the basics, but as long as I’m making a special map for the eclipse, I wanted to have the path of totality, which I found from NASA. I unzipped each of those files into a folder for my eclipse grid map project.

In QGIS, I loaded all the datasets (vectors). The Canadian provinces were in a different projection, so I saved (converted) them to the one I wanted, EPSG:4269, a simple latitude-longitude coordinate system. The provinces layer included detailed coastlines and islands, so I simplified it (Vector | Geometry Tools | Simplify Geometry) with a tolerance of about 0.01. The islands cleaned up a little, but the overall shapes didn’t change much.

With the datasets loaded, I needed to make my field and grid boundaries. Using the grid tool (Vector | Research Tools | Vector Grid) I created the field grid (xmin=-180, xmax=180, ymin=-90, ymax=90, parameter x=20, parameter y=10) and the fine grid (same except parameter x=2, parameter y=1).

I looked up the coordinates for the Science Museum of Minnesota, and put them into a CSV text file. By loading in that CSV file, I put a star on the map where I will be located.

From that point, it was just a matter of adjusting colors and display properties. I gave reasonable, light colors to the US and Canada, and thickened the borders for the US states. I used a dashed line for the field lines, and a lighter grey dotted line for the smaller grids. The eclipse path I made a partially-transparent grey.

That’s about all there was to it! In the print composer I added labels for a few grids to help demonstrate the letter/number scheme.

Results (PDF): 8.5″x11″, 11″x17″.

When Counting Gets Difficult, Part 2

Prion sp., March 22, 2016, seen just west of Heard Island. Image credit: Bill Mitchell.

Earlier I posed a question: suppose a group of 40 birds are identified to genus level (prion sp.). Four photographs of random birds are identified to species level, all of one species that was expected to be in the minority (fulmar prion) and likely would be present in mixed flocks. How many birds of the 40 should be upgraded from genus-level ID to species-level ID?

Clearly there is a minimum of one fulmar prion present, because it was identified in the photographs. With four photographs and 40 birds, the chance of randomly catching the same bird all four times is quite small, so the number of fulmar prions is probably much higher than 1. At the same time, it would not be reasonable, from a sample of only four photographs, to say all 40 were fulmar prions.

If we have four photographs of fulmar prions (A), what is the minimum number of non-fulmar prions (B) needed in a 40-prion flock to have a 95% chance of photographing at least one non-fulmar prion?

To answer this question, I used a Monte Carlo simulation, which I wrote in R. I generated 40-element populations of species A and B, ranging from all A to all B. Then for each of those populations, I ran 100,000 trials, sampling 4 random birds (with replacement). By tracking the proportion of trials for each population that had at least one B, it becomes possible to find the 95% confidence limit.

pop_size <- 40  # Set the population size
sample_n <- 4  # Set the number of samples (photographs)
n_trials <- 100000  # Set the number of trials for each population

x <- 0:pop_size  # Create a vector of the numbers from 0 to pop_size (i.e. how many B in population)

sample_from_pop <- function(population, sample_size, n_trials){
	# Run Monte Carlo sampling, taking sample_size samples (with replacement)
	# from population (vector of TRUE/FALSE), repeating n_trials times
	# population: vector of TRUE/FALSE representing e.g. species A (TRUE) and B (FALSE)
	# sample_size: the number of members of the population to inspect
	# n_trials: the number of times to repeat the sampling
	my_count <- 0
	for(k in 1:n_trials){  # Repeat sampling n_trials times
		my_results <- sample(population, sample_size, replace=TRUE)  # Get the samples
		if(FALSE %in% my_results){  # Look for whether it had species B
			my_count <- my_count + 1  # Add one to the count if it did
		}
	}
	return(my_count/n_trials)  # Return the proportion of trials detecting species B
}

create_pop <- function(n,N){  # Make the populations
	return(append(rep(TRUE,N-n),rep(FALSE,n)))  # Populations have N-n repetitions of TRUE (sp. A), n reps of FALSE (sp. B)
}

mypops <- lapply(0:pop_size, create_pop, pop_size)  # Create populations for sampling

# Apply the sampling function to the populations, recording the proportion of trials sampling at least one of species B
my_percentages <- sapply(mypops, sample_from_pop, sample_size=sample_n, n_trials=n_trials)

My simulation results showed that with 22 or more birds of species B (non-fulmar prions), there was a >95% chance that at least one would be detected. In other words, from my photographic data, there is a 95% probability that the flock of 40 prions contained no fewer than 19 fulmar prions.
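Because the photographs are modeled as draws with replacement, this particular question also has a closed form: the probability of at least one species-B bird appearing in four photos is 1 − ((40 − b)/40)^4, where b is the number of B in the flock. A quick check (sketched here in Python) agrees with the simulated cutoff:

```python
pop_size = 40  # Flock size
sample_n = 4   # Number of photographs

def detection_prob(b):
    """Probability that at least one of b species-B birds shows up in
    sample_n draws with replacement from a flock of pop_size."""
    return 1 - ((pop_size - b) / pop_size) ** sample_n

# Smallest number of species B giving at least a 95% detection chance
cutoff = min(b for b in range(pop_size + 1) if detection_prob(b) >= 0.95)
print(cutoff)  # 22
```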

Let’s take a look at it graphically.

library(ggplot2)

mydata <- data.frame(my_percentages, 0:pop_size)  # Make a data.frame with the results and the # of species B
names(mydata) <- c("DetProb", "B")  # Rename the columns to something friendly and vaguely descriptive

p <- ggplot(mydata, aes(x=B, y=DetProb)) + geom_point()  # Create the basic ggplot2 scatterplot
p <- p + geom_hline(yintercept=0.95)  # Add a horizontal line at 95%
p <- p + theme_bw() + labs(x="# of species B (pop. 40)", y="Detection probability of B")  # Tidy up the presentation and labeling
print(p)  # Display it!
Results of the Monte Carlo simulation. At left is all A, while at right is a population with all B. The horizontal line is the 95% probability line. Points above the line have a >95% chance of detecting species B.

With 22 or more non-fulmar prions, there’s a >95% chance one would be photographed. With 19 fulmar prions and 21 non-fulmar prions, there’s a >5% chance the non-fulmar prions would be missed. So our minimum number of fulmar prions is 19. I may have seen a flock of 40 fulmar prions, but there aren’t enough observations to say with statistical confidence that they were all fulmar prions.

When Counting is Difficult

A fulmar prion glides swiftly over the swell of the Southern Ocean. Image credit: Bill Mitchell (CC-BY)

During the Heard Island Expedition, including the nearly three weeks at sea on the Southern Ocean, I made a few observations for a citizen science project: eBird. It’s a pretty simple system: identify and count all the birds you see in a small area and/or time period, then submit your list to a centralized database. That database is used for research, and keeps track of your life/year/county lists. With so few observations in the southern Indian Ocean in March and April (and no penguins on my life list before the expedition), I figured I would make a few counts.

On its face, identifying and counting birds is straightforward. Get a good look, maybe a photograph, and count (or estimate) the number present of that species.

It gets more difficult when you go outside your usual spot, particularly when the biome is much different. Although I have some familiarity with the Sibley Guide for North American birds, I’ve never paid very close attention to the seabird section, and I had never birded at sea before. All the birds I expected to see on this expedition would be life birds, and that changes things a bit. I would have to observe very closely, and photograph where I could.

Before the expedition, I read up on the birds I would likely find on the island. In addition to four species of penguins, there were three species of albatross (wandering, black-browed, and light-mantled sooty) and two species of prions (Antarctic and fulmar). Albatrosses are large and the species near Heard are readily distinguished. Prions, however, can be quite difficult even with good observations. They’re not quite to the level of the Empidonax flycatchers, but close.

At sea, we usually had prions flying near the ship. I took pictures, knowing that I might be able to get help with ID if I needed it—and of course I needed it.

That’s where the problem started: I had a count of 40 prions flying around the ship, which I identified only to genus level. From my reading on Heard Island, I knew that the breeding population of Antarctic prions there is generally much larger than that of fulmar prions, by an estimated 10:1 margin. I had four clear pictures of individual birds, which my helpful eBird reviewer was able to pass along to an expert for further identification. All four were fulmar prions.

With 40 birds identified to genus level, and four photos of random birds identified to species level as a species expected to be a minor proportion, how many of the original 40 birds can I reasonably assign as fulmar prions?

I have an answer to this question, which I will post next week.

Science on a Plane

Temperature profile flying in to MSP around 2120 UTC on April 25, 2016. Image credit: Bill Mitchell (CC-BY).

One of my favorite things to do on an airplane, when I can, is to take a temperature profile during the descent. Until recently, this could generally only be done on long international flights, where little seat-back screens showed the altitude and temperature along with other flight data. However, I found on my latest trip that even domestic flights sometimes have this information in a nice tabular form.

To take a temperature profile, when the captain makes the announcement that the descent is beginning, get out your notebook and set your screen to the flight information, where hopefully it tells you altitude (m) and temperature (°C). Record the altitude and temperature as frequently as they are updated on the way down, though you might set a minimum altitude change (20 m) to avoid lots of identical points if the plane levels off for a while. When you land, be sure to include the time, date, and location of arrival.

When you get a chance, transfer the data to a CSV (comma-separated value) file, including the column headers like in the example below.


Alt (m),Temp (C)
10325,-52
10138,-51
9992,-48
...
250,17

You can then use your favorite plotting program (I like R with ggplot) to plot up the data. I’ve included my R script for plotting at the bottom of the page. Just adjust the filename for infile, and it should do the rest for you.

At the top of the page is the profile I took on my way in to Minneapolis on the afternoon of April 25th. There were storms in the area, and we see a clear inversion layer (warmer air above than below) about 1 km up, with a smaller inversion at 1.6 km. From the linear regression, the average lapse rate was -6.44 °C/km, a bit smaller in magnitude than the typical value of about 7 °C/km.
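As a sanity check on that regression value, the lapse rate can be roughed out from just the endpoints of the example CSV above (these are the illustrative values from the snippet, not the full dataset, so the number differs slightly from the fitted slope):

```python
# Endpoint estimate of the lapse rate from the example CSV values
top_alt_km, top_temp_c = 10.325, -52  # 10325 m, -52 C
bot_alt_km, bot_temp_c = 0.250, 17    # 250 m, 17 C

lapse_rate = (top_temp_c - bot_temp_c) / (top_alt_km - bot_alt_km)
print(round(lapse_rate, 2))  # -6.85 C/km
```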

On the way in to Los Angeles the morning of April 25th, no strong inversion layer was present, and the temperature increased steadily all the way down to the ground.

Temperature profile descending into Los Angeles on the morning of April 25, 2016. Image credit: Bill Mitchell (CC-BY).

This is a pretty easy way to do a little bit of science while you’re on the plane, and to practice your plotting skills when you’re on the ground. For comparison, the University of Wyoming has records of weather balloon profiles from around the world. You can plot them yourself from the “Text: List” data, or use the “GIF: to 10mb” option to have it plotted for you.

Here is the code, although the long lines have been wrapped and will need to be rejoined before use.


# Script for plotting Alt/Temp profile
# File in format Alt (m),Temp (C)

infile <- "20160425_MSP_profile.csv" # Name of CSV file for plotting

library(ggplot2) # Needed for plotting
library(tools) # Needed for removing file extension to automate output filename

mydata <- read.csv(infile) # Import data
mydata[,1] <- mydata[,1]/1000 # convert m to km
mystats <- lm(mydata[,2]~mydata[,1]) # Run linear regression to get lapse rate
myslope <- mystats$coefficients[2] # Slope of regression
myint <- mystats$coefficients[1] # Intercept of regression

p <- ggplot(mydata, aes(x=mydata[,2], y=mydata[,1])) + stat_smooth(method="lm", color="blue") + geom_point() + labs(x="Temp (C)",y="Altitude (km)") + annotate("text", x=-30, y=1, label=sprintf("y=%.2fx + %.2f",myslope,myint)) + theme_classic() # Create plot

png(file=paste(file_path_sans_ext(infile),"png",sep="."), width=800, height=800) # Set output image info
print(p) # Plot it!
dev.off() # Done plotting

This Year in Uranium Decay

Pumice from the Bishop Tuff (~767 ka). Zircons in this pumice are rich (relatively) in uranium, with up to 0.5% U.[1,2] Image credit: Bill Mitchell (CC-BY).

With 2016 now upon us, I felt it would be appropriate to think about what a new year means for uranium geochronology. What can we expect from the year ahead? Without getting into any of the active research going on, I felt it would be useful to address simply what is physically happening.

On Earth, there is roughly 1×10^17 kg of uranium.[3] The ratio of ²³⁸U:²³⁵U is about 137.8:1, and ²³⁸U has a molar mass of roughly 238 g/mol (= 0.238 kg/mol). Looking only at ²³⁸U, that gives us

1×10^17 kg × (137.8/138.8) / 0.238 kg/mol = 4.17×10^17 mol ²³⁸U

Radioactive decay is exponential, with the surviving proportion given by e^(-λt), where λ is the decay constant (in units of 1/time) and t is time; equivalently, e^(-t·ln(2)/T½), where T½ is the half-life.

To find the proportion that decays, we subtract the surviving proportion from 1: (1 - e^(-λt)).

Multiplying this proportion by the number of moles of 238U will give us the moles of decay, and multiplying by the molar mass will give the mass lost to decay:

(1 - e^(-λt)) × mol ²³⁸U

Plugging in numbers, with λ₂₃₈ = 1.54×10^-10 y^-1, t = 1 y, and the moles of ²³⁸U from above, we get:

(1 - e^(-1.54×10^-10)) × 4.17×10^17 mol ²³⁸U = 6.4×10^7 mol

That yields (with proper use of metric prefixes) roughly 64 Mmol U decay, or 15 Gg of U on Earth that will decay over the next year.
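The calculation above is easy to reproduce; here is the same arithmetic as a short Python sketch (all input values are the ones quoted in the text):

```python
import math

u_total_kg = 1e17         # Total uranium on Earth, kg [3]
frac_238 = 137.8 / 138.8  # Mole fraction of uranium that is 238U
molar_mass = 0.238        # Molar mass of 238U, kg/mol
lam = 1.54e-10            # 238U decay constant, 1/y
t = 1.0                   # One year

mol_238 = u_total_kg * frac_238 / molar_mass      # ~4.17e17 mol
mol_decayed = (1 - math.exp(-lam * t)) * mol_238  # ~6.4e7 mol (~64 Mmol)
mass_decayed_kg = mol_decayed * molar_mass        # ~1.5e7 kg (~15 Gg)

print(mol_decayed / 1e6, "Mmol")
print(mass_decayed_kg / 1e6, "Gg")
```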

Although those numbers sound very large, they are much smaller than even the increase in US CO₂ emissions from 2013 to 2014 (50 Tg, or 50,000 Gg); total US CO₂ emissions in 2014 were estimated at 5.4 Pg (= 5.4 million Gg).[US EIA]

As for what’s in store for geochronology as a field, I think there will be a lot of discussion and consideration regarding yet another analysis of the Bishop Tuff.[4] Dating samples which are <1 Ma (refresher on geologic time and conventions) using U/Pb can be tricky, and Ickert et al. get into some of the issues when trying to get extremely high-precision dates from zircons. The paper is not open access, but the authors can be contacted for a copy (@cwmagee and @srmulcahy are active on Twitter, too!).

***
[1] J. L. Crowley, B. Schoene, S. A. Bowring. “U-Pb dating of zircon in the Bishop Tuff at the millennial scale” Geology 2007, 35, p. 1123-1126. DOI: 10.1130/G24017A.1
[2] K. J. Chamberlain, C. J. N. Wilson, J. L. Wooden, B. L. A. Charlier, T. R. Ireland. “New Perspectives on the Bishop Tuff from Zircon Textures, Ages, and Trace Elements” Journal of Petrology 2014, 55, p. 395-426. DOI: 10.1093/petrology/egt072
[3] G. Fiorentini, M. Lissia, F. Mantovani, R. Vannucci. “Geo-Neutrinos: a short review” Arxiv 2004. arXiv:hep-ph/0409152 and final DOI: 10.1016/j.nuclphysbps.2005.01.087
[4] R. B. Ickert, R. Mundil, C. W. Magee, Jr., S. R. Mulcahy. “The U-Th-Pb systematics of zircon from the Bishop Tuff: A case study in challenges to high-precision Pb/U geochronology at the millennial scale” Geochimica et Cosmochimica Acta 2015, 168, p. 88-110. DOI: 10.1016/j.gca.2015.07.018

Various Interesting Articles

Thin section photomicrograph of a gabbro (crossed polarizing filters). Image credit: Siim Sepp (CC-BY-SA).

There have been a couple of interesting articles I’ve come across recently, which are worth mentioning.

First, Emily Lakdawalla has an excellent summary of the Pluto discoveries from both the American Geophysical Union’s Fall Meeting and the [NASA] Division of Planetary Science meeting. There’s a lot of new stuff there, and it’s pretty exciting.

Second, the Joides Resolution blog (the Joides Resolution is an ocean sediment coring vessel) has a series of posts (1, 2, 3) on geologic thin sections. Not surprisingly, the thin sections pictured are from rocks such as gabbros or sheeted dikes, which are expected in oceanic crust and in ophiolites (oceanic crust exposed on land). There’s a great exposure of the Coast Range Ophiolite just west of Patterson, CA, in Del Puerto Canyon, which is described in a recent blog post by Garry Hayes.

Third, Dave Petley has a great post on The Landslide Blog about the recent landslide in Shenzhen, China. I find landslides fascinating, and always learn something when I read The Landslide Blog.

Communicating Science Precisely and Accurately

Newton’s cradle pendulums swinging back and forth over a copy of Isaac Newton’s Principia Mathematica. Image credit: DemonDeLuxe (CC-BY-SA).

Recently when I was volunteering at my local science museum, I was leading activities on resonance. I had tuning forks, tuned plastic pipes, and a series of pendula of differing lengths on an arm rotated by a much heavier pendulum. The main idea was that when the frequencies of two pendula, or of a tuning fork and a tuned pipe, match, the energy from one can be transferred into the other, making it oscillate. When you hold a sounding tuning fork up to a matching resonant cavity, the cavity will sound. Similarly with my pendula, if the pendulum driving the rotating arm is swinging at the natural frequency of one of the pendula coupled to it, that pendulum will swing too. Other pendula with faster or slower oscillations will be relatively unaffected.

In the course of talking with visitors, I was reminded of a constant challenge for science communication: being precise, accurate, and accessible. Scientific language is often used to convey precisely the conditions or idea in question, and yet sometimes a more colloquial meaning of a word is what gets understood. As I talked about how quickly this pendulum oscillated and how slowly that one did, it was difficult to maintain a clear, concise distinction between speed (distance/time) and frequency (1/time). It isn’t about the speed with which the pendulum moves, nor about how high it swings (the amplitude of motion). But “frequency” isn’t necessarily a word visitors distinguish from the colloquial sense of “speed”.

Next week will be a big week for science communication, and you should keep an eye on the science news. The American Geophysical Union (AGU) is having its 2015 Fall Meeting, which is a gathering of more than 20,000 scientists in San Francisco. There will be lots of new results presented, many of them esoteric or incremental, but others will be quite accessible and groundbreaking. Many science journalists will be on hand covering the proceedings, and most of them do an excellent job.

However, there are a few headlines to watch out for. “Water found on Mars!” is a fairly common one, although if you investigate a little more deeply, the announcements are usually new only in the precise details. This past summer, the big announcement of water on Mars was in fact new: liquid water, at the surface, presently. Another headline to watch out for is “[volcano] ready to erupt!” Yes, many volcanoes have magma chambers under them, which may or may not be larger than previously thought. However, most of the time the magma underneath is actually much more solid or mushy than reports make it out to be.

If you’re interested in following along, I’d recommend reading the AGU blogs, as well as Erik Klemetti’s Eruptions blog. Twitter will also be very busy using the hashtag #AGU15.

Glacial Erratics

Glacial erratics on a prairie in South Dakota. Image credit: laikolosse (CC-BY).

When glaciers flow across the ground, they can break off rocks and pick them up in the ice. As the ice moves and eventually melts, those rocks are deposited. Large rocks left exposed on the surface are termed glacial erratics. Much of Minnesota and the eastern Dakotas is covered by these glacial deposits, and glacial erratics are relatively common there.

Glacial deposits are also interesting because they have grains or rocks of all sizes, from very fine silt and mud up through large boulders. This can make identifying glacial deposits in the field straightforward in some cases, because many grain sizes are jumbled together. By contrast, when grains settle out of air or water, the coarse ones deposit first, and the grains become finer as you go up the stratigraphic column.

Satellite Image Processing

Artist’s rendering of NASA’s EO-1 spacecraft, which holds the Advanced Land Imager (ALI) instrument. Image credit: NASA.

I have a long-standing fascination with metrology, the science of measuring things.* One of my favorite classes as an undergraduate was on chemical instrumentation and computers, where a major topic of discussion was how you take some chemical or physical property and change that into a voltage. That voltage is then converted to a digital format with an analog-to-digital converter.

Think for a moment about a digital camera: how does that work? Light goes in, and when you press the button it saves an image file, but what is happening in between those steps?

Inside the camera, light hits a photosensitive semiconductor; charges are separated, accumulated, and, through electronics and circuitry too complicated for this post, converted into voltages. There’s another tricky bit, though: your camera usually has four types of photosensitive elements for each pixel. One is sensitive to red light, another green, another blue, and the fourth is panchromatic, sensitive to all colors of visible light.

Satellite imagery can be fascinating, and is often freely available if you can figure out where to find it (free registration may be required). However, unlike getting pictures from your digital camera, by going directly to the source some additional work may be required to turn the images into what you’re looking for. What needs to be done is determined by the instrument and the type of imagery you want.**

For my purposes, I’m generally interested in true-color imagery (or something reasonably close to true-color) of scenes on Earth. Terra MODIS takes images of most areas every day, but the resolution is only 250 m/px at its best. Other satellites, such as NASA’s EO-1, have instruments with better resolution, but they cover much less area—it may be a few days or weeks between images of a given spot.

Today I am interested in images from the Advanced Land Imager (ALI) on EO-1, which can be found through Earth Explorer. I’ve posted images from ALI in the past, which is how I know the images I want are ones I should be able to make.

Lava flow on Heard Island, April 20, 2013. Image credit: NASA Earth Observatory image by Jesse Allen and Robert Simmon, using EO-1 ALI data from the NASA EO-1 team.

Earth Explorer has an option to download the data as a GeoTIFF, which imports easily into QGIS. Using the layering features in QGIS, the 630–690 nm band (Band 5, red) can be made to grade from black to red, the 525–605 nm (Band 4, green) to grade from black to green additively on top of the red layer, and the 433–453 nm (Band 2, blue) to grade from black to blue on top of the other two layers. Now we have a composite RGB image.

There’s a problem (sort of) with this RGB image, and you’ll see it quickly if you do your processing on the Heard Island imagery from April 20, 2013 shown above: the resolution of your image isn’t as high as NASA’s.

In this case, the three layers used for the composite image have a resolution of 30 m/px. The NASA image, though, has a resolution of 10 m/pixel. Where does this higher resolution come from?

Remember how I mentioned that digital cameras have a fourth sensor, sensitive to panchromatic light? Well, ALI also has a panchromatic band (Band 1), with resolution of 10 m/pixel.

In order to merge the color layers and the panchromatic layer, the color image needs to be scaled up by a factor of 3 in each dimension, making 3×3 pixel areas of the same color. Then some not-that-complicated steps (which I have yet to fully figure out) are needed to adjust the lightness of those pixels—but not the hue—to match the higher-resolution panchromatic image.
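I haven’t worked out NASA’s exact procedure, but a common approach along those lines (sometimes called a Brovey-style transform) is to upscale each 30 m color pixel into a 3×3 block of 10 m pixels, then multiply each pixel’s R, G, and B by the ratio of the panchromatic brightness to the color pixel’s own mean brightness, which adjusts lightness while roughly preserving hue. A toy sketch with plain Python lists (the pixel values are made up):

```python
def pan_sharpen(rgb, pan):
    """rgb: low-res image as rows of (r, g, b) tuples.
    pan: grayscale image at 3x the resolution of rgb.
    Returns the pan-sharpened high-res RGB image."""
    out = []
    for i, pan_row in enumerate(pan):
        row = []
        for j, p in enumerate(pan_row):
            r, g, b = rgb[i // 3][j // 3]  # Nearest low-res color pixel
            mean = (r + g + b) / 3 or 1    # Mean brightness (avoid /0)
            scale = p / mean               # Match the pan brightness
            row.append((r * scale, g * scale, b * scale))
        out.append(row)
    return out

# One 30 m color pixel becomes a 3x3 block of 10 m pixels
rgb = [[(90, 60, 30)]]
pan = [[50, 60, 70],
       [55, 60, 65],
       [60, 60, 60]]
sharpened = pan_sharpen(rgb, pan)
```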

*****
* Also meteorology, the science of weather.
** The same holds true for imagery from other instruments or spacecraft, be it New Horizons, the Curiosity rover on Mars, or the Solar Dynamics Observatory.

Satellite Communications

Geostationary orbits, used by some communications satellites. Image credit: Lookang (CC-BY-SA).

One important aspect of field work in remote places is keeping lines of communication open. At a minimum, the ability to call for help is needed. Sending status updates, checking email, talking with loved ones, and a number of other uses are good to have. Even in this day and age, though, not every remote place has good cell phone coverage. These places are where satellite phone systems are extremely useful.

There are two main types of satellite systems: geostationary satellite systems, and low-Earth-orbit satellite systems.

Geostationary satellite systems have satellites over fixed locations above Earth’s equator, at an altitude of roughly 36,000 km (22,000 mi). Geostationary satellites are nice in that they are always in the same spot relative to a location on Earth, so there are no signal hand-offs where calls may drop, nor do the stations on the ground need to have any kind of tracking mechanism to keep the antenna pointed at the satellite. Unfortunately, because the geostationary satellites are located over the equator, they do not work well pole-ward of 70° latitude, because they are too close to the horizon for reliable, interference-free signals. Geostationary satellites also have a noticeable delay, because the round-trip light time is a minimum of ~0.25 seconds, and the time to receive a response back doubles that.
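That quarter-second figure falls straight out of the geometry; a quick sketch:

```python
c_km_s = 3.0e5    # Speed of light, km/s
altitude = 36000  # Geostationary altitude, km

one_way = 2 * altitude / c_km_s  # Ground -> satellite -> ground
print(round(one_way, 2))         # 0.24 s at minimum
print(round(2 * one_way, 2))     # 0.48 s before you hear a reply
```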

Low-Earth-orbit satellite systems require many more satellites, but the satellites are much closer to Earth, generally only 650–1100 km above the surface. Many of these satellites are in a polar or near-polar orbit, which gives them good coverage near the poles. Each satellite is only over any given area for 4–15 min, so hand-offs are necessary (and are not always reliable). One advantage of low-Earth-orbit systems is that the transmitter and antenna on the ground do not need to be especially powerful or carefully aimed. Low-Earth-orbit systems have substantially less data throughput than the geostationary systems (9.6 kbps for LEO vs. 60–512 kbps for geostationary). For reference, the LEO throughput is much less than a dial-up modem’s, and geostationary throughput is up to 10x higher than dial-up, though still far short of broadband internet access (4 Mbps down, 1 Mbps up).

I mentioned that the antennas (and power) for a geostationary satellite setup need to be better than ones for low-Earth-orbit satellites. This is because of the inverse-square law: as the distance increases, the power which reaches the receiver drops by the square of that increase. Think of standing outside at night with a friend (representing the ground station and satellite), each of you with a flashlight (the radio transmitters) and eyes (the radio receivers). When you are close, the light is very bright, and you probably have to look away. As you move away from each other, the lights appear dimmer and dimmer. Each time you double the distance between you, the brightness of the light dims by a factor of four. If you need a certain level of brightness at the receiver (your eye, or the satellite antenna), then there has to be either a sufficiently bright light shining (power level), or the light needs to be focused enough (and collected by a sufficiently large receiver) to achieve that level of signal.

Inverse-square law in action; as the distance increases (e.g. from r to 2r), the area the energy is directed over increases as the square of the distance (e.g. from 1 to 4 units). Image credit: Borb (CC-BY-SA).

With a difference in altitude of ~40x between low-Earth orbit and geostationary orbit, there is a difference of 1600x in the signal level, all else being equal. For that reason, satellite phones for low-Earth-orbit satellites can get away with less powerful radios and smaller antennas that are less sensitive to proper positioning. It’s handy to not need exact positioning for the low-Earth-orbit satellites, because their quick movement across the sky can be difficult to track without a motorized, computer-driven antenna. Mobile or ship-based satellite communication systems tend to rely more on the low-Earth-orbit satellites precisely because the aim of the antenna is much less important. Nobody wants to try to hold an antenna pointing in a certain direction while pitching about on a ship in 4 m seas in the wind and the cold.
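The ~1600x figure is just the inverse-square law applied to representative altitudes (900 km is my stand-in for a typical low-Earth orbit):

```python
leo_altitude = 900    # Representative low-Earth-orbit altitude, km
geo_altitude = 36000  # Geostationary altitude, km

distance_ratio = geo_altitude / leo_altitude  # ~40x farther away
power_penalty = distance_ratio ** 2           # Inverse-square law
print(power_penalty)  # 1600.0
```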

As an amateur radio operator, one thing I enjoy doing is going outside when the International Space Station is flying over, and listening to the radio signals it sends down. During the morning or evening passes on clear days where the space station is visible, it is quite easy to point in the right direction. Spot the station, then point your hand-held antenna toward it. During the day, in the depths of night, or when it’s cloudy, tracking the station can be more difficult (at least without computer assistance). Still, it’s pretty neat to hear astronauts answering questions from the local middle school students, all the while knowing that the signal coming from the space station is coming directly to your radio, no internet or commercial broadcast station required.