Satellite Communications

Geostationary orbits, used by some communications satellites. Image credit: Lookang (CC-BY-SA).

One important aspect of field work in remote places is keeping lines of communication open. At a minimum, you need the ability to call for help. Beyond that, sending status updates, checking email, and talking with loved ones are all good to have. Even in this day and age, though, not every remote place has good cell phone coverage, and these are the places where satellite phone systems are extremely useful.

There are two main types of satellite systems: geostationary satellite systems, and low-Earth-orbit satellite systems.

Geostationary satellite systems have satellites over fixed locations above Earth’s equator, at an altitude of roughly 36,000 km (22,000 mi). Geostationary satellites are nice in that they are always in the same spot relative to a location on Earth, so there are no signal hand-offs where calls may drop, nor do the stations on the ground need any kind of tracking mechanism to keep the antenna pointed at the satellite. Unfortunately, because geostationary satellites are located over the equator, they do not work well poleward of 70° latitude, where they sit too close to the horizon for reliable, interference-free signals. Geostationary satellites also have a noticeable delay: the round-trip light time is a minimum of ~0.25 seconds, and hearing a response back doubles that.
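That delay figure is easy to sanity-check with a quick calculation (a back-of-the-envelope sketch assuming the nominal geostationary altitude of 35,786 km and a ground station directly below the satellite; real paths are slightly longer):

```python
# Minimum light-travel delay over a geostationary satellite link.
ALTITUDE_KM = 35_786   # nominal geostationary altitude
C_KM_S = 299_792       # speed of light, km/s

# One hop is ground -> satellite -> ground.
one_hop_s = 2 * ALTITUDE_KM / C_KM_S
print(f"one hop: {one_hop_s:.2f} s")             # ~0.24 s
print(f"with a reply: {2 * one_hop_s:.2f} s")    # hearing a response doubles it
```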

Low-Earth-orbit satellite systems require many more satellites, but the satellites are much closer to Earth, generally only 650–1100 km above the surface. Many of these satellites are in a polar or near-polar orbit, which gives them good coverage near the poles. Each satellite is only over any given area for 4–15 min, so hand-offs are necessary (and are not always reliable). One advantage of low-Earth-orbit systems is that the transmitter and antenna on the ground do not need to be especially powerful or carefully aimed. Low-Earth-orbit systems have substantially less data throughput than the geostationary systems (9.6 kbps for LEO vs. 60–512 kbps for geostationary). For reference, the LEO throughput is much less than that of dial-up modems, and geostationary throughput is up to 10x higher than dial-up, though still far short of broadband internet access (4 Mbps down, 1 Mbps up).

I mentioned that the antennas (and power) for a geostationary satellite setup need to be better than those for low-Earth-orbit satellites. This is because of the inverse-square law: as the distance increases, the power which reaches the receiver drops by the square of that increase. Think of standing outside at night with a friend (representing the ground station and satellite), where each of you has a flashlight (representing the radio transmitters) and eyes (the radio receivers). When you are close, the light is very bright, and you probably have to look away. As you move away from each other, the lights appear dimmer and dimmer. Each time you double the distance between you, the brightness of the light dims by a factor of four. If you need a certain level of brightness at the receiver (your eye, or the satellite antenna), then there has to be either a sufficiently bright light shining (power level), or the light needs to be focused enough—and collected by a sufficiently large receiver—to achieve that level of signal.

Inverse-square law in action; as the distance increases (e.g. from r to 2r), the area the energy is directed over increases as the square of the distance (e.g. from 1 to 4 units). Image credit: Borb (CC-BY-SA).

With a difference in altitude of ~40x between low-Earth orbit and geostationary orbit, there is a difference of 1600x in the signal level, all else being equal. For that reason, satellite phones for low-Earth-orbit satellites can get away with less powerful radios and smaller antennas that are less sensitive to proper positioning. It’s handy to not need exact positioning for the low-Earth-orbit satellites, because their quick movement across the sky can be difficult to track without a motorized, computer-driven antenna. Mobile or ship-based satellite communication systems tend to rely more on the low-Earth-orbit satellites precisely because the aim of the antenna is much less important. Nobody wants to try to hold an antenna pointing in a certain direction while pitching about on a ship in 4 m seas in the wind and the cold.
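The 40x/1600x comparison falls straight out of the inverse-square law; here is a minimal sketch (using 900 km as a representative altitude from the low-Earth-orbit range given earlier):

```python
# Received power scales as 1/distance^2 (inverse-square law).
LEO_ALTITUDE_KM = 900     # representative low-Earth-orbit altitude
GEO_ALTITUDE_KM = 36_000  # rough geostationary altitude

distance_ratio = GEO_ALTITUDE_KM / LEO_ALTITUDE_KM
power_ratio = distance_ratio ** 2
print(f"distance ratio: {distance_ratio:.0f}x")  # 40x
print(f"signal ratio:   {power_ratio:.0f}x")     # 1600x weaker, all else equal
```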

As an amateur radio operator, one thing I enjoy doing is going outside when the International Space Station is flying over and listening to the radio signals it sends down. During morning or evening passes on clear days, when the space station is visible, it is quite easy to point in the right direction: spot the station, then point your hand-held antenna toward it. During the day, in the depths of night, or when it’s cloudy, tracking the station can be more difficult (at least without computer assistance). Still, it’s pretty neat to hear astronauts answering questions from the local middle school students, all the while knowing that the signal from the space station is coming directly to your radio, no internet or commercial broadcast station required.

The 1991 Heard Island Feasibility Test

MV Cory Chouest, the ship used for the experiments detailed below. Image credit: US Navy (public domain, via Wikipedia).

Batten the hatches and hang on to the hand rails, because this installment of science at/on/near Heard Island is going to be a wild ride! We’ll explore a paper entitled The Heard Island Feasibility Test,[1] and along the way we’ll make ports of call in climate science, oceanography, and physics. I encourage you to check out a copy of the paper, either at your local (research) library or online. It’s really well-written! There’s also a pre-experiment lecture given by the study’s lead author which is freely available online, and details the rationale behind the study and the expected results.

In 1991, scientists were concerned about global warming. They were very interested in measuring the ocean temperature—oceans can store much more heat than the atmosphere, so while the atmosphere may not warm quickly in a changing climate, the oceans are likely to capture most of the heat. Additionally, water has a high heat capacity (the amount of energy it takes to raise its temperature by a degree), which is why it takes so long to bring a pot of water to a boil on the stove.

Measuring the ocean temperature seems fairly straightforward: put a thermometer in the ocean, and log the temperature. Scatter a bunch of stations around the world and it’s done, right? Wrong.

The problem with using a thermometer (or many thermometers) to measure the ocean temperature is that there are many small-scale features which can influence the measured temperature. The variability of these measurements is likely to be quite high, and each one samples only a small place—extrapolating to the whole ocean isn’t necessarily justified.

How, then, can a measurement be made which yields an average temperature over a huge volume of ocean?

Sound. Ocean temperatures can be measured with sound. This is an amazing world in which we live!

In water, the speed of sound will vary depending on temperature, pressure, and (to a limited extent) salinity, and be in the ballpark of 1.48 km/s. With variations in speed of 4–5 m/s/°C, a +5 m°C (0.005 °C) change in temperature results in a -0.1 s change in travel time over a 10 Mm (10,000 km) path.[1] Have an acoustic source emit a signal, measure the signal at a distant receiver, and the time delay will yield an apparent average speed of sound. Shifts in these speeds due to warming of about 5 m°C/yr would theoretically produce measurably earlier arrival times.
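Those numbers can be reproduced with a little arithmetic (a sketch using the ballpark values from the text: 1.48 km/s sound speed and a sensitivity of 4.5 m/s/°C, the middle of the 4–5 range):

```python
# Change in acoustic travel time from a small ocean warming.
PATH_M = 10_000_000    # 10 Mm (10,000 km) path
C_M_S = 1480.0         # ballpark speed of sound in seawater, m/s
DC_PER_DEGC = 4.5      # speed sensitivity, m/s per degree C
WARMING_C = 0.005      # +5 millidegrees C

travel_s = PATH_M / C_M_S
dc = DC_PER_DEGC * WARMING_C      # speed increase from warming, m/s
dt = -travel_s * dc / C_M_S       # warmer water -> faster sound -> earlier arrival
print(f"travel time:   {travel_s:.0f} s")
print(f"arrival shift: {dt:.2f} s")  # ~ -0.10 s
```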

Speed of sound measured at various depths in the Pacific Ocean north of Hawaii. Image credit: Nicoguaro (CC-BY-SA); data from the 2005 World Ocean Atlas.

One potential problem with all this is the part about receiving a sound signal 10 Mm away from its source. However, the temperature and pressure profile of the ocean causes a minimum in sound velocity at a depth of 500–1,000 m (for mid/low-latitude oceans). This low-velocity region, termed a SOFAR channel, acts as a waveguide or duct: sounds launched within it tend to stay there rather than dispersing.[2] Low-frequency sounds (50–100 Hz) are not attenuated or absorbed much by the water, so long-distance reception of these sounds might be possible.

The feasibility test was designed as a proof-of-concept for ocean-wide acoustic reception. Using powerful low-frequency transducers on loan from the U.S. Navy, the scientists would be able to send the signals and have receivers around the world listening for them.[3] Unfortunately for the scientists, the transducers could only operate to a depth of 300 m. That meant that a high-latitude site needed to be found, where the SOFAR channel—that special place which enables long-distance reception—is much closer to the surface.

Heard Island was chosen as a transmission site, because the direct sound paths (mostly, but not entirely, great circles) would reach across both the Pacific and Atlantic oceans.

No major field work is complete without a little drama, though. Late in the planning and preparation phase, the US National Marine Fisheries Service notified the researchers that permits were required to mitigate threats to marine mammals from the powerful sounds. The Australians (Heard Island is an Australian territory) required the permits too. A second vessel was chartered and biologists were assembled to monitor marine mammal activity and fulfill the responsibilities associated with the permits.

The two ships sailed as originally scheduled on January 9, 1991, but neither the American nor Australian permits had been issued. With a scheduled transmission start of January 26th, there wasn’t much room for delay. Fortunately, the permits arrived just in time: January 18th and January 25th. I bet the scientists were very tense during the voyage from Perth/Fremantle (Australia) to Heard Island.

An unscheduled 5-minute equipment test the day before the first scheduled transmission was received in Bermuda, and shortly thereafter at Whidbey Island (near Seattle, and almost 18 Mm away). Basic feasibility was already shown!

Signals were sent in a 1-hour-on, 2-hours-off pattern. Some of the transmissions were a continuous-wave (CW) 57 Hz tone (to avoid 50 Hz and 60 Hz power noise), while others were a mixture of several different frequencies near 57 Hz. For details on these transmission modes I refer you to the paper.

Transmissions for the experiment were aborted on the 6th day—ahead of schedule—when a gale and 10-m swells caused one acoustic source to be lost from the string and fall to the ocean floor. The other sources were badly damaged. Conditions in the Southern Ocean can make field work there very difficult.

One thing I found surprising, but which makes sense upon consideration, was that rather than staying in one fixed location, the ship towed the sources along at 3 kt (5.5 km/h, 3.5 mph). Given the wind and waves of the Southern Ocean, the vessel must be underway to maintain control; being broadside to the swell in a high sea is extremely dangerous.

In this experiment, the receivers were sensitive enough to detect the Doppler shift from the ship’s movement. In fact, the Doppler shift combined with the known path of the ship (from GPS) allowed the azimuth of the signals to be determined. For many of the signals, it was on the expected heading (not quite a great circle, due to the non-spherical Earth and the inconsistent depth of the SOFAR channel). At the Whidbey Island receiver array, though, the signals arrived from a bearing of 215°, not the predicted 230°. In that case, the signal appears to have taken a longer path southeast of New Zealand, rather than through the Tasman Sea between Australia and New Zealand.
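To get a feel for just how small a shift the receivers had to pick out, here is a rough sketch (my own estimate using the simple along-path Doppler formula with the 57 Hz tone, the 3 kt tow speed, and the ballpark 1.48 km/s sound speed; the actual shift depends on the ship's heading relative to each receiver):

```python
# Maximum Doppler shift of the 57 Hz tone from a source towed at 3 kt.
F_HZ = 57.0                  # carrier frequency of the CW tone
TOW_SPEED_M_S = 3 * 0.5144   # 3 knots converted to m/s
C_M_S = 1480.0               # ballpark speed of sound in seawater

# Largest shift occurs when the ship moves directly along the path.
df_hz = F_HZ * TOW_SPEED_M_S / C_M_S
print(f"max Doppler shift: {df_hz * 1000:.0f} mHz")  # ~59 mHz
```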

Fortunately for all involved, there was little impact noticed on the marine mammals.[4] Despite the low observed impacts, the authors make recommendations for the Acoustic Thermometry of Ocean Climate project to reduce adverse effects to marine life. Using shorter-range transmit/receive pairs, the total power needed can be reduced significantly. Additionally, with temperate waters having a deeper SOFAR channel, the transmitters can be bottom-mounted at depths of around 0.5–1 km, which will help physically separate them from the near-surface-dwelling marine mammals.

In short, the Heard Island Feasibility Test was a resounding (pardon the pun) success. Ocean acoustic temperature measurement is possible, and measurements were made in the North Pacific for a decade, from 1996 to 2006.

This paper was a really interesting one, and fairly accessible (scientifically) to someone not in the field of signal processing or oceanography. I enjoyed reading it, and suggest you take a look at it if you’re at all interested. My summary here has skipped over large parts which detail the nature of the propagation and the signal processing aspects.

[1] Munk, W. H., Spindel, R. C., Baggeroer, A., Birdsall, T. G. (1994) “The Heard Island Feasibility Test” J. Acoust. Soc. Am. 96 (4), p. 2330–2342. DOI 10.1121/1.410105

[2] This phenomenon is analogous to atmospheric ducting of radio waves, which can cause TV and FM radio stations to be heard far beyond their normal range, and for weather radar to pick up ground clutter far from the station.

[3] This sounds almost analogous to the upcoming VK0EK ham radio expedition to Heard Island, where radio operators (including myself) will have stations around the world listening for their signal.

[4] Bowles, A. E., Smultea, M., Würsig, B., DeMaster, D. P., Palka, D. (1994) “Relative abundance and behavior of marine mammals exposed to transmissions from the Heard Island Feasibility Test” J. Acoust. Soc. Am. 96 (4), p. 2469–2484. DOI 10.1121/1.410120

Book Discussion: The End of Night

Moon over Berkeley, and a lot of stray light. Image credit: laikolosse (CC-BY-NC).

Recently I’ve been reading an interesting book by Paul Bogard, The End of Night. It’s non-fiction, and is based around the increasing amounts of night-time light in the developed world—and why that may not be a great thing.

Just 200 years ago, before electric lights, the night sky—complete with the swath of the Milky Way and other naked-eye-observable galaxies—was spectacular on any clear night from anywhere on Earth. Today, however, few in the developed world see such a sky even a few times a year, let alone on every clear night. Instead, our nights look like they do in the photo of Berkeley above, with a milky haze of yellow-orange light from the 589 nm sodium D lines (admittedly, there is some fog in the picture too).

Darkish summer skies in Canada; many stars are visible, and a hint of the Milky Way can be discerned. Pale orange light is from the Sun being relatively near the horizon even in the middle of the night. Image credit: laikolosse (CC-BY-NC).

There were four key messages I took from the book.

First, straightforwardly, is that a dark night sky is incredibly beautiful, and we should preserve that beauty. Seeing the Milky Way clearly with the naked eye can be a powerful experience, especially for those who rarely or never have. Unfortunately, very few places in western Europe (or the eastern half of the US, or populated areas in the western US) are near dark skies. Bogard cites a statistic that 80% of children born in the US today will never see a truly dark sky. From my experience as a teaching assistant at Berkeley on a geology field trip to Bishop, CA and the eastern Sierra, I can attest that a large proportion of the students were quite surprised by all the stars visible from such dark skies.

Second, adding light doesn’t necessarily make it easier to see. Sure, it helps initially when you first go from the light into the dark, but if the light isn’t in the right places, it actually makes it harder to see. I have biked at night on roads where the streetlights made it very difficult for me to see the road, because the lights were so bright and everything else so dark. Glare from bad lighting makes the lighting less effective. Light going places it isn’t needed or wanted is also wasted energy (and money, and CO2 where electricity comes from fossil fuels). The picture at top has a number of lights shining directly into the camera from miles away; all that light is wasted and unneeded. Bedrooms have light streaming in from outside even at night, which causes its own problems.

Third, people are evolutionarily adapted to sleep at night, and our bodies expect it to be dark at that time. Longer periods of light are decreasing the amount and quality of sleep we get (particularly for night-shift workers), and that loss has significant detrimental public-health consequences, including increased risk of cancer.[1 and references therein] Turning off or dimming the lights at night—especially blue light and light in the bedroom—could help us sleep better and be healthier for it. Minimizing the number of people working at all hours of the night, and thus exposed to the health risks of doing so, would also be good, from both a moral and an economic perspective.

Finally, it was made clear throughout that the point here isn’t to turn off all the lights and go back to the stone age, but rather to be thoughtful and deliberate about our outdoor lighting. Making light fixtures which put light where it should go (i.e. down on the ground, not out to the sides or up) and using them only when needed is fairly simple. How much light is really needed at a car dealership, ice rink, or empty parking lot in the middle of the night? Gas stations are often extremely brightly lit, yet most of that light isn’t needed to pump gas or wash the bugs off the windshield. In fact, pulling out of a bright gas station at night can be dangerous since your eyes will have adjusted to the brighter environment and it will take a few minutes for them to dilate again and bring your vision back to its optimum level.

Supposing I turn off my 60 W worth of porch lights (sadly not the dark-sky friendly type) for an average of 10 hours/day year round (3650 h), that reduces my electricity consumption by 219 kWh (780 MJ). At a residential electric rate of $0.08/kWh, that translates to a savings of $17.52 annually, as well as a CO2 savings of 150 kg (3400 moles). It also makes my bedroom darker.
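The arithmetic behind those figures is simple to verify (a sketch assuming the stated $0.08/kWh rate, plus a grid emissions factor of about 0.69 kg CO2 per kWh, which is my own assumption to match the ~150 kg figure):

```python
# Annual savings from turning off 60 W of porch lights 10 h/day.
POWER_W = 60
HOURS_PER_YEAR = 10 * 365       # 3650 h
RATE_USD_PER_KWH = 0.08
CO2_KG_PER_KWH = 0.69           # assumed grid emissions factor
CO2_G_PER_MOL = 44.01           # molar mass of CO2

energy_kwh = POWER_W * HOURS_PER_YEAR / 1000
cost_usd = energy_kwh * RATE_USD_PER_KWH
co2_kg = energy_kwh * CO2_KG_PER_KWH
print(f"energy: {energy_kwh:.0f} kWh")   # 219 kWh
print(f"cost:   ${cost_usd:.2f}")        # $17.52
print(f"CO2:    {co2_kg:.0f} kg "
      f"({co2_kg * 1000 / CO2_G_PER_MOL:.0f} mol)")  # ~150 kg, ~3400 mol
```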

I went outside one night recently when it was clear (albeit humid). I have an urban view, and can only see ~25% of the sky. From my deck, I counted twenty stars. That’s right, I could only see 20 stars. Extrapolating for the full-sky view gets me up to 80, and if we want to be generous we could round that to 100. Compare that to the dark(ish) sky pictured above, and you can see that there’s really a huge difference. In this neighborhood, too, there’s enough stray light running around (even in the summer when leaves block it) that turning off my outside lights isn’t making it hard to get around.

Night can be a pretty neat time. In college, I would occasionally go cross-country skiing at night. Sure, I would have a headlamp with me, but on a clear night I typically didn’t use it. Even cloudy nights, thanks to some local light pollution, were easy to ski without the aid of the headlamp. Being outside at night in the crisp, quiet solitude of a snowy winter was amazing. It’s part of what I missed while in graduate school in a bright, noisy city where it never snows.

After reading this book, I’m looking forward to the Heard Island expedition even more, because the Southern Ocean, like Antarctica, is home to pristine skies free from artificial lights. Of course, unlike Antarctica, the likely sky condition is mostly cloudy or cloudy, so getting a clear night may be very rare. That will only serve to make the moment more special, if and when it happens. I will bring a camera, and I will try to get a picture of such an event. But that picture will be but a still, lifeless version of the magic at Heard Island.

[1] Hansen, J. J Natl Cancer Inst (2001) 93, p. 1513–1515. DOI: 10.1093/jnci/93.20.1513