Category Archives: Science Miscellany

Mapping the Eclipse for a Citizen Science Project

Map of the continental United States showing the amateur radio grids and path of the eclipse. Image credit: Bill Mitchell (CC-BY).

During the solar eclipse next week, I will be at the Science Museum of Minnesota with a citizen science project studying the effects of the eclipse on radio propagation. While there are many radio-related projects going on—the most accessible being a study of AM radio reception—I will be using amateur radio to make contacts and provide reception reports during the eclipse. One of the important pieces of information that will be exchanged with other amateur stations is a “grid”, which is a shorthand for rough latitude and longitude.

Amateur radio grids are 2° longitude by 1° latitude, and represented with pairs of letters and numbers. For instance, the Science Museum of Minnesota is located in EN34. Fields (20°x10°) are designated with letters, and increase from -180 longitude and -90 latitude (AA) to 160 longitude and 80 latitude (RR). Fields are further subdivided into grids using numbers, which increase from 00 at the southwest corner to 99 at the northeast. Looking again at our example, the first character, E, indicates a location between 100° and 80° W longitude, and N indicates a location between 40° and 50° N latitude. The numbers provide further refinement on that range. The 3 means the longitude is between 6° and 8° east of the west edge of the field (i.e. 94°–92° W), and the 4 after it means the latitude is 4°–5° north of the south edge of the field (i.e. 44°–45° N). Further letters (A-X) and numbers can be used to specify locations more precisely in a similar fashion. Longitude is always indicated first, and increases west-to-east; latitude is indicated second, increasing south-to-north.
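To make the scheme concrete, here is a minimal R sketch going the other direction: converting a latitude and longitude into a four-character grid square. The function name and the museum's coordinates are my own approximate additions for illustration, not part of the mapping workflow below.

latlon_to_grid <- function(lat, lon){
	# Shift so the southwest corner of field AA sits at (0, 0)
	lon <- lon + 180
	lat <- lat + 90
	field <- paste0(LETTERS[floor(lon/20) + 1], LETTERS[floor(lat/10) + 1])  # 20 deg x 10 deg fields
	square <- paste0(floor((lon %% 20)/2), floor(lat %% 10))  # 2 deg x 1 deg grids
	return(paste0(field, square))
}

latlon_to_grid(44.94, -93.10)  # Science Museum of Minnesota (approx.) -> "EN34"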

For the event, I want to have a map of the continental US and southern Canada with the grids outlined on it. During the event as we hear which grid other stations are in, we can mark their location on the map. Unfortunately, I was not able to find a map that I wanted to use for this purpose, so I decided to make my own with QGIS.

For my eclipse map, I needed to gather a few datasets. First and foremost, I needed a US state map. Canadian provinces were also a high priority. Once I had those, I was still missing Mexico and other North American areas, so I found a world map as well. That covered the basics, but as long as I’m making a special map for the eclipse, I wanted to have the path of totality, which I found from NASA. I unzipped each of those files into a folder for my eclipse grid map project.

In QGIS, I loaded all the datasets (vectors). The Canadian provinces were in a different projection, so I saved (converted) that layer to the projection I wanted (EPSG:4269), a simple latitude-longitude projection. The Canadian provinces layer also included very detailed coastlines and islands, so I simplified it (Vector | Geometry Tools | Simplify Geometry) using a tolerance of roughly 0.01. The islands cleaned up a little, but the overall shapes didn’t change much.

With the datasets loaded, I needed to make my field and grid boundaries. Using the grid tool (Vector | Research Tools | Vector Grid) I created the field grid (xmin=-180, xmax=180, ymin=-90, ymax=90, parameter x=20, parameter y=10) and the fine grid (same except parameter x=2, parameter y=1).
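If you would rather script this step than click through the QGIS dialog, something like the sf package's st_make_grid should produce equivalent field and grid layers. This is a rough sketch of my own, not part of the original workflow; the output file names are placeholders.

library(sf)  # assumes the sf package is installed

# Bounding box of the whole globe in the project's CRS
world_bbox <- st_as_sfc(st_bbox(c(xmin = -180, ymin = -90, xmax = 180, ymax = 90),
                                crs = st_crs(4269)))

fields <- st_make_grid(world_bbox, cellsize = c(20, 10))  # 20 deg x 10 deg fields
grids  <- st_make_grid(world_bbox, cellsize = c(2, 1))    # 2 deg x 1 deg grids

st_write(st_sf(geometry = fields), "fields.gpkg")  # placeholder file names
st_write(st_sf(geometry = grids),  "grids.gpkg")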

I looked up the coordinates for the Science Museum of Minnesota, and put them into a CSV text file. By loading in that CSV file, I put a star on the map where I will be located.
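The file itself is tiny; the column names and approximate coordinates below are just an example of the sort of thing QGIS's delimited-text import expects.

name,lat,lon
Science Museum of Minnesota,44.943,-93.099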

From that point, it was just a matter of adjusting colors and display properties. I gave reasonable, light colors to the US and Canada, and thickened the borders for the US states. I used a dashed line for the field boundaries and a lighter grey dotted line for the smaller grids, and made the eclipse path a partially transparent grey.

That’s about all there was to it! In the print composer I added in some of the labels for a few grids to help demonstrate the letter/number scheme.

Results (PDF): 8.5″x11″, 11″x17″.


When Counting Gets Difficult, Part 2

Prion sp., March 22, 2016, seen just west of Heard Island. Image credit: Bill Mitchell.

Earlier I posed a question: suppose a group of 40 birds are identified to genus level (prion sp.). Four photographs of random birds are identified to species level, all of one species that was expected to be in the minority (fulmar prion) and likely would be present in mixed flocks. How many birds of the 40 should be upgraded from genus-level ID to species-level ID?

Clearly there is a minimum of one fulmar prion present, because it was identified in the photographs. With four photographs and 40 birds, the chance of randomly catching the same bird all four times is quite small, so the number of fulmar prions is probably much higher than 1. At the same time, it would not be reasonable from a sample of our photographs to say all 40 were fulmar prions.
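To put a number on “quite small”: if each photograph catches one of the 40 birds uniformly at random and independently of the others, the chance that all four photos show the same individual is 40 × (1/40)⁴ = (1/40)³ ≈ 1.6×10⁻⁵.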

If we have four photographs of fulmar prions (A), what is the minimum number of non-fulmar prions (B) needed in a 40-prion flock to have a 95% chance of photographing at least one non-fulmar prion?

To answer this question, I used a Monte Carlo simulation, which I wrote in R. I generated 40-element combinations of A and B ranging from all A to all B. Then for each of those populations, I ran 100,000 trials, sampling 4 random birds from each population (with replacement). By tracking the proportion of trials for each population that had at least one B, it becomes possible to find the 95% confidence limit.

pop_size <- 40  # Set the population size
sample_n <- 4  # Set the number of samples (photographs)
n_trials <- 100000  # Set the number of trials for each population

x <- 0:pop_size  # Vector of the numbers from 0 to pop_size (i.e. how many B in population)

sample_from_pop <- function(population, sample_size, n_trials){
	# Run Monte Carlo sampling, taking sample_size samples (with replacement)
	# from population (vector of TRUE/FALSE), repeating n_trials times
	# population: vector of TRUE/FALSE representing e.g. species A (TRUE) and B (FALSE)
	# sample_size: the number of members of the population to inspect
	# n_trials: the number of times to repeat the sampling
	my_count <- 0
	for(k in 1:n_trials){  # Repeat sampling n_trials times
		my_results <- sample(population, sample_size, replace=TRUE)  # Get the samples
		if(FALSE %in% my_results){  # Look for whether it had species B
			my_count <- my_count + 1  # Add one to the count if it did
		}
	}
	return(my_count/n_trials)  # Return the proportion of trials detecting species B
}

create_pop <- function(n,N){  # Make the populations
	return(append(rep(TRUE,N-n),rep(FALSE,n)))  # Populations have N-n repetitions of TRUE (sp. A), n reps of FALSE (sp. B)
}

mypops <- lapply(0:pop_size, create_pop, pop_size)  # Create populations for sampling

# Apply the sampling function to the populations, recording the proportion of trials sampling at least one of species B
my_percentages <- sapply(mypops, sample_from_pop, sample_size=sample_n, n_trials=n_trials)

My simulation results showed that with 22 or more birds of species B (non-fulmar prions), there was a >95% chance that at least one would be detected. In other words, from my photographic data, there is a 95% probability that the flock of 40 prions contained no fewer than 19 fulmar prions.
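One way to pull that threshold out of the simulation output is a one-liner (a sketch using the my_percentages and pop_size already defined above):

min((0:pop_size)[my_percentages >= 0.95])  # Smallest number of species B with >=95% detection probability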

Let’s take a look at it graphically.

library(ggplot2)

mydata <- data.frame(my_percentages, 0:pop_size)  # Make a data.frame with the results and the # of species B
names(mydata) <- c("DetProb", "B")  # Rename the columns to something friendly and vaguely descriptive

p <- ggplot(mydata, aes(x=B,y=DetProb)) + geom_point()  # Create the basic ggplot2 scatterplot
p <- p + geom_hline(yintercept=0.95)  # Add a horizontal line at 95%
p <- p + theme_bw() + labs(x="# of species B (pop. 40)", y="Detection probability of B")  # Tidy up the presentation and labeling
print(p)  # Display it!
Results of the Monte Carlo simulation. At left is all A, while at right is a population with all B. The horizontal line is the 95% probability line. Points above the line have a >95% chance of detecting species B.

With 22 or more non-fulmar prions, there’s a >95% chance one would be photographed. With 19 fulmar prions and 21 non-fulmar prions, there’s a >5% chance the non-fulmar prions would be missed. So our minimum number of fulmar prions is 19. I may have seen a flock of 40 fulmar prions, but there aren’t enough observations to say with statistical confidence that they were all fulmar prions.

When Counting is Difficult

A fulmar prion glides swiftly over the swell of the Southern Ocean. Image credit: Bill Mitchell (CC-BY)

During the Heard Island Expedition, including the nearly three weeks at sea on the Southern Ocean, I made a few observations for a citizen science project: eBird. It’s a pretty simple system: identify and count all the birds you see in a small area and/or time period, then submit your list to a centralized database. That database is used for research, and keeps track of your life/year/county lists. With so few observations in the southern Indian Ocean in March and April (and no penguins on my life list before the expedition), I figured I would make a few counts.

On its face, identifying and counting birds is straightforward. Get a good look, maybe a photograph, and count (or estimate) the number present of that species.

It gets more difficult when you go outside your usual spot, particularly when the biome is much different. Although I have some familiarity with the Sibley Guide for North American birds, I’ve never paid very close attention to the seabird section, and I had never birded at sea before. All the birds I expected to see on this expedition would be life birds, and that changes things a bit. I would have to observe very closely, and photograph where I could.

Before the expedition, I read up on the birds I would likely find on the island. In addition to four species of penguins, there were three species of albatross (wandering, black-browed, and light-mantled sooty) and two species of prions (Antarctic and fulmar). Albatrosses are large and the species near Heard are readily distinguished. Prions, however, can be quite difficult even with good observations. They’re not quite to the level of the Empidonax flycatchers, but close.

At sea, we usually had prions flying near the ship. I took pictures, knowing that I might be able to get help with ID if I needed it—and of course I needed it.

That’s where the problem started: I had a count of 40 prions flying around the ship, which I identified only to genus level. From my reading on Heard Island, I knew that the breeding prions there were predominantly Antarctic prions rather than fulmar prions, by an estimated 10:1 margin. I had four clear pictures of individual birds, which my helpful eBird reviewer was able to get to an expert for further identification. All four were fulmar prions.

With 40 birds identified to genus level, and four photos of random birds identified to species level as a species expected to be a minor proportion, how many of the original 40 birds can I reasonably assign as fulmar prions?

I have an answer to this question, which I will post next week.

Science on a Plane

Temperature profile flying in to MSP around 2120 UTC on April 25, 2016. Image credit: Bill Mitchell (CC-BY).

One of my favorite things to do on an airplane, when I can, is to take a temperature profile during the descent. Until recently, this could generally only be done on long international flights, when they had little screens which showed the altitude and temperature along with other flight data. However, I found on my latest trip that sometimes now even domestic flights have this information in a nice tabular form.

To take a temperature profile, when the captain makes the announcement that the descent is beginning, get out your notebook and set your screen to the flight information, where hopefully it tells you altitude (m) and temperature (°C). Record the altitude and temperature as frequently as they are updated on the way down, though you might set a minimum altitude change (20 m) to avoid lots of identical points if the plane levels off for a while. When you land, be sure to include the time, date, and location of arrival.

When you get a chance, transfer the data to a CSV (comma-separated value) file, including the column headers like in the example below.


Alt (m),Temp (C)
10325,-52
10138,-51
9992,-48
...
250,17

You can then use your favorite plotting program (I like R with ggplot) to plot up the data. I’ve included my R script for plotting at the bottom of the page. Just adjust the filename for infile, and it should do the rest for you.

At the top of the page is the profile I took on my way in to Minneapolis on the afternoon of April 25th. There were storms in the area, and we see a clear inversion layer (warmer air above than below) about 1 km up, with a smaller inversion at 1.6 km. From the linear regression (slope of -6.44 °C/km), temperature dropped by an average of 6.44 °C per kilometer of altitude, a lapse rate a bit lower than the typical value of about 7 °C/km.

On the way in to Los Angeles the morning of April 25th, no strong inversion layer was present and temperature increased to the ground.

Temperature profile descending into Los Angeles on the morning of April 25, 2016. Image credit: Bill Mitchell (CC-BY).

This is a pretty easy way to do a little bit of science while you’re on the plane, and to practice your plotting skills when you’re on the ground. For comparison, the University of Wyoming has records of weather balloon profiles from around the world. You can plot them yourself from the “Text: List” data, or use the “GIF: to 10mb” option to have it plotted for you.

Here is the code, although the long lines have been wrapped and will need to be rejoined before use.


# Script for plotting Alt/Temp profile
# File in format Alt (m),Temp (C)

infile <- "20160425_MSP_profile.csv" # Name of CSV file for plotting

library(ggplot2) # Needed for plotting
library(tools) # Needed for removing file extension to automate output filename

mydata <- read.csv(infile) # Import data
mydata[,1] <- mydata[,1]/1000 # convert m to km
mystats <- lm(mydata[,2]~mydata[,1]) # Run linear regression to get lapse rate
myslope <- mystats$coefficients[2] # Slope of regression
myint <- mystats$coefficients[1] # Intercept of regression

p <- ggplot(mydata, aes(x=mydata[,2], y=mydata[,1])) + stat_smooth(method="lm", color="blue") + geom_point() + labs(x="Temp (C)",y="Altitude (km)") + annotate("text", x=-30, y=1, label=sprintf("y=%.2fx + %.2f",myslope,myint)) + theme_classic() # Create plot

png(file=paste(file_path_sans_ext(infile),"png",sep="."), width=800, height=800) # Set output image info
print(p) # Plot it!
dev.off() # Done plotting

This Year in Uranium Decay

Pumice from the Bishop Tuff (~767 ka). Zircons in this pumice are rich (relatively) in uranium, with up to 0.5% U.[1,2] Image credit: Bill Mitchell (CC-BY).

With 2016 now upon us, I felt it would be appropriate to think about what a new year means for uranium geochronology. What can we expect from the year ahead? Without getting into any of the active research going on, I felt it would be useful to address simply what is physically happening.

On Earth, there is roughly 1×10¹⁷ kg of uranium.[3] The ratio of ²³⁸U:²³⁵U is about 137.8:1, and ²³⁸U has a mass of roughly 238 g/mol (= 0.238 kg/mol). Looking only at ²³⁸U, that gives us
1×10¹⁷ kg × (137.8/138.8) / 0.238 kg/mol = 4.17×10¹⁷ mol ²³⁸U

Radioactive decay is exponential, with the surviving proportion given by e^(−λt), where λ is the decay constant (in units of 1/time) and t is time, or alternatively, e^(−ln(2)·t/T₁/₂), where T₁/₂ is the half-life and t is time.

To find the proportion that decays, we subtract the surviving proportion from 1: (1 − e^(−λt))

Multiplying this proportion by the number of moles of ²³⁸U will give us the moles of decay, and multiplying by the molar mass will give the mass lost to decay:

(1 − e^(−λt)) × mol ²³⁸U

Plugging in numbers, with λ₂₃₈ = 1.54×10⁻¹⁰ y⁻¹, t = 1 y, and the moles of ²³⁸U from above, we get:

(1 − e^(−1.54×10⁻¹⁰)) × 4.17×10¹⁷ mol ²³⁸U = 6.4×10⁷ mol

That yields (with proper use of metric prefixes) roughly 64 Mmol U decay, or 15 Gg of U on Earth that will decay over the next year.
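For anyone who wants to check the arithmetic, here is a short R sketch of the same calculation. The variable names are mine; the input values are the ones quoted above.

u_earth   <- 1e17          # kg of uranium on Earth [3]
frac_238  <- 137.8/138.8   # fraction of that uranium which is 238U
molar_238 <- 0.238         # molar mass of 238U, kg/mol
lambda238 <- 1.54e-10      # decay constant of 238U, 1/yr

mol_238   <- u_earth * frac_238 / molar_238       # ~4.17e17 mol of 238U
mol_decay <- (1 - exp(-lambda238 * 1)) * mol_238  # ~6.4e7 mol decaying in one year
kg_decay  <- mol_decay * molar_238                # ~1.5e7 kg, i.e. ~15 Gg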

Although those numbers sound very large, they are much smaller than even the increase in US CO₂ emissions from 2013 to 2014 (50 Tg, or 50,000 Gg); total US CO₂ emissions in 2014 were estimated at 5.4 Pg (= 5.4 million Gg).[US EIA]

As for what’s in store for geochronology as a field, I think there will be a lot of discussion and consideration regarding yet another analysis of the Bishop Tuff.[4] Dating samples which are <1 Ma (refresher on geologic time and conventions) using U/Pb can be tricky, and Ickert et al. get into some of the issues when trying to get extremely high-precision dates from zircons. The paper is not open access, but the authors can be contacted for a copy (@cwmagee and @srmulcahy are active on Twitter, too!).

***
[1] J. L. Crowley, B. Schoene, S. A. Bowring. “U-Pb dating of zircon in the Bishop Tuff at the millennial scale” Geology 2007, 35, p. 1123-1126. DOI: 10.1130/G24017A.1
[2] K. J. Chamberlain, C. J. N. Wilson, J. L. Wooden, B. L. A. Charlier, T. R. Ireland. “New Perspectives on the Bishop Tuff from Zircon Textures, Ages, and Trace Elements” Journal of Petrology 2014, 55, p. 395-426. DOI: 10.1093/petrology/egt072
[3] G. Fiorentini, M. Lissia, F. Mantovani, R. Vannucci. “Geo-Neutrinos: a short review” Arxiv 2004. arXiv:hep-ph/0409152 and final DOI: 10.1016/j.nuclphysbps.2005.01.087
[4] R. B. Ickert, R. Mundil, C. W. Magee, Jr., S. R. Mulcahy. “The U-Th-Pb systematics of zircon from the Bishop Tuff: A case study in challenges to high-precision Pb/U geochronology at the millennial scale” Geochimica et Cosmochimica Acta 2015, 168, p. 88-110. DOI: 10.1016/j.gca.2015.07.018

Various Interesting Articles

Thin section photomicrograph of a gabbro, (crossed polarizing filters). Image credit: Siim Sepp (CC-BY-SA).

There have been a couple of interesting articles I’ve come across recently, which are worth mentioning.

First, Emily Lakdawalla has an excellent summary of the Pluto discoveries from both the American Geophysical Union’s Fall Meeting and the [NASA] Division of Planetary Science meeting. There’s a lot of new stuff there, and it’s pretty exciting.

Second, the Joides Resolution blog (the Joides Resolution is an ocean sediment coring vessel) has a series of posts (1, 2, 3) on geologic thin sections. Not surprisingly, the thin sections pictured are from rocks such as gabbros or sheeted dikes, which are expected in oceanic crust and in ophiolites (oceanic crust exposed on land). There’s a great exposure of the Coast Range Ophiolite just west of Patterson, CA, in Del Puerto Canyon, which is described in a recent blog post by Garry Hayes.

Third, Dave Petley has a great post on The Landslide Blog about the recent landslide in Shenzhen, China. I find landslides fascinating, and always learn something when I read The Landslide Blog.

Communicating Science Precisely and Accurately

Newton’s cradle pendulums swinging back and forth over a copy of Isaac Newton’s Principia Mathematica. Image credit: DemonDeLuxe (CC-BY-SA).

Recently when I was volunteering at my local science museum, I was leading activities on resonance. I had tuning forks, tuned plastic pipes, and a series of pendula of differing lengths hanging from an arm rotated by a much heavier pendulum. The main idea was that when the frequencies of two pendula, or of a tuning fork and a tuned pipe, match, energy from one can be transferred into the other, making it oscillate. When you hold a vibrating tuning fork up to a resonant cavity, the cavity will sound. Similarly with my pendula: if the pendulum driving the rotating arm is swinging at the same frequency as the natural frequency of one of the pendula coupled to it, that pendulum will swing too. Other pendula with faster or slower natural oscillations will be relatively unaffected.

In the course of talking with visitors, I was reminded of a constant challenge for science communication: being precise, accurate, and accessible. Scientific language is often used to convey precisely the conditions or idea in question, and yet sometimes a more colloquial meaning of a word is what gets understood. As I was talking about how quickly this pendulum oscillated and how slowly that one oscillated, it was difficult to maintain a clear, concise distinction between speed (distance/time) and frequency (1/time). It isn’t about the speed with which the pendulum moves, nor about how high it swings (the amplitude of the motion). But frequency isn’t necessarily a word that visitors distinguish from the colloquial sense of “speed”.

Next week will be a big week for science communication, and you should keep an eye on the science news. The American Geophysical Union (AGU) is having its 2015 Fall Meeting, which is a gathering of more than 20,000 scientists in San Francisco. There will be lots of new results presented, many of them esoteric or incremental, but others will be quite accessible and groundbreaking. Many science journalists will be on hand covering the proceedings, and most of them do an excellent job.

However, there are a few headlines to watch out for. “Water found on Mars!” is a fairly common one, although if you investigate a little more deeply, the announcements usually are new for the precise situation described. This past summer, the big announcement of water on Mars was in fact new: liquid water, at the surface, presently. Another headline to watch out for is “[volcano] ready to erupt!” Yes, many volcanoes have magma chambers under them, which may or may not be larger than previously thought. However, most of the time the magma chambers underneath the volcanoes are much more solid and mushy than reports make them out to be.

If you’re interested in following along, I’d recommend reading the AGU blogs, as well as Erik Klemetti’s Eruptions blog. Twitter will also be very busy using the hashtag #AGU15.