Saturday, August 25, 2012

More on effective ecological monitoring: how to do it right

Long-term monitoring is one of the major pillars of ecological science. Yet conducting long-term ecological research well is neither obvious nor simple. It isn't as easy as "just repeat."

I mentioned Dave Lindenmayer and Gene Likens' book on ecological monitoring earlier. I don't often pick up books again after reading them, but I did with this one. I thought it deserved to be distilled down a bit more in my head.

When I think of the keys to long-term research, much of what they wrote resonated with ideas I've had.

Their subsections were:


Good questions and evolving questions
The use of a conceptual model
Selection of appropriate entities to measure
Good design
Well-developed partnerships
Strong and dedicated leadership
Ongoing funding
Frequent use of data
Scientific productivity
Maintenance of data integrity and calibration of field techniques

Plus a section entitled "Little things matter a lot! Some tricks of the trade": field transport, field staff, access to field sites, and time in the field.


Out of all of those, I think there are three points where things most often go wrong or get ignored:

First, test hypotheses. Monitoring can generate lucky discoveries, but it's better to have a mental model of how ecosystems work and use the long-term data to test it. Do not just monitor a phenomenon; also monitor its potential underlying determinants. Stream NO3- might be your grand response, but have competing hypotheses about the factors that could be driving it.
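To show what that can look like in practice, here is a minimal sketch of my own (not from the book): fit each candidate driver against the long-term record and let a criterion like AIC arbitrate among the competing hypotheses. The drivers, data, and coefficients below are invented for illustration.

```python
import numpy as np

# Hypothetical 30-year stream record plus candidate drivers of NO3-.
# All variable names and values are invented for illustration.
rng = np.random.default_rng(0)
n_years = 30
precip = rng.normal(800, 150, n_years)        # mm/yr
temperature = rng.normal(7, 1, n_years)       # deg C
n_deposition = rng.normal(8, 2, n_years)      # kg N/ha/yr
no3 = 0.02 * n_deposition + rng.normal(0, 0.05, n_years)  # mg N/L

def aic_ols(y, X):
    """AIC (up to a constant) for an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                         # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

# Competing hypotheses for what drives stream NO3-
hypotheses = {
    "precipitation":  precip[:, None],
    "temperature":    temperature[:, None],
    "N deposition":   n_deposition[:, None],
    "precip + N dep": np.column_stack([precip, n_deposition]),
}
for name, X in hypotheses.items():
    print(f"{name:15s} AIC = {aic_ols(no3, X):.1f}")
```

The point is not the particular statistic; it's that the monitoring design has to include the candidate drivers, or there is nothing to compare.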

Second, be outside. Field stations are amazing places. Mostly because people are together looking at complex systems. Up time observing together and down time discussing observations are essential. No great program became great with people working in isolation from one another. Automated data collection is great only if it frees us up to spend the remainder of the time outside observing.

Third, analyze data annually. Don't let it accumulate. When I was collecting weekly soil CO2 flux data at Cedar Creek, I analyzed the data that night, just to see what the pattern was and make sure nothing had gone bonkers that day. It takes practice to generate new hypotheses and be ready for surprises. Groups need to come together to compare trends frequently. If you are not analyzing and discussing your data annually, you're not doing it right. Long-term data analysis is a process, not an event.

Some advances are data-driven. Others are wisdom-driven. To improve long-term research, look no further than these three points.

Friday, August 24, 2012

Summer Reading: Elixir--a history of humankind and water


Last summer reading for the year.

To start, the title is horrible. This is a book about how humankind, over the past 5,000 years, has harnessed water for civilization's purposes. It's not about sweetened, alcoholic medicines. Just call it Water. You don't write a book about water and title it "cough syrup."

After the cover, the book got better. It's a global survey of human waterworks: how people have harnessed water for irrigation, sanitation, flood control, even war. Africa, Australia, Peru, China, Greece, Britain, Mexico. It's an amazing cross-section of history, and it draws well on Fagan's knowledge of archaeology and climate.

Why read it?

The reviewers were impressed, but you could tell they weren't sure what to do with it. It's not prescriptive for future society, but it's a rich toolbox to draw from. To learn about qanats is to appreciate the constraints on Persian society and the great lengths people went to in order to irrigate. It's also clear how much work it was to maintain these systems, why they would have been a key to the organization of civilization, and how incredibly complex and well-organized societies were thousands of years ago.

You also learn a bit more about how droughts and floods could crush societies, and how societies buffered themselves against them.




Wednesday, August 22, 2012

From Fenton: On Jealousy

Fenton's essay, "A lesson from Michelangelo," came out in the New York Review of Books in 1995. I must have read it my first year of graduate school. The essay was an amazingly condensed survey of the "humanity of ambition" in the artistic world. Every scientist should read it.


Reading Fenton means knowing what it is to "pull a Giambologna," or the pain of "pasquinades," which apply to papers and grants as much as they did to critiques of art.


The personalities and psychology of Michelangelo and Leonardo, Wordsworth and Keats...are the best roadmap for understanding the myriad psychologies of how scientists interact and how they express their ambition.

We all have ambition, but it can be expressed in many ways.

Michelangelo set fire to most of his works as his death drew close. Weaver pulled up most of his plot markers.

Leonardo could barely be bothered to sign his works. The great ecologists might sign their papers, but make little effort to control the fate of their ideas, much less require attribution.

Fenton's discussion of Auden is one of his most important lessons:

"Auden wrote a wonderful thing to Stephen Spender in 1942--it is quoted in Auden's Juvenilia --when he said: 'You (at least I fancy so) can be jealous of someone else writing a good poem because it seems a rival strength. I'm not, because every good poem, of yours say, is a strength, which is put at my disposal.' And he said that this arose because Spender was strong and he, Auden, was weak, but this was a fertile weakness."

To view your rival's strength as being at your disposal is one of the greatest propellants in science.



Tuesday, August 21, 2012

Global soil 15N patterns

Figure: Soil δ15N of surface soils vs. soil C:N (n = 1000). Most of the points are from Africa and the US at this point.

Nitrogen isotopes have the potential to integrate important aspects of the N cycle. In the short term, vegetation δ15N (adjusted for mycorrhizal type) is probably the best index of N availability to plants. When N availability is high, gaseous N loss is more common, which enriches plants in 15N.

Soils are a better long-term integrator of the N cycle as plant material gets incorporated into soil organic matter. 

Global patterns for soil 15N have been worked through once or twice. Amundson et al. did the first synthesis of global patterns and showed that soils in drier, hotter climates were enriched in 15N. These ecosystems should have the greatest fraction of N lost via fractionating pathways like denitrification.

Houlton and Bai (2009) looked at soil 15N patterns and calculated that the terrestrial world has an average signature of +5‰. One of their advances was to show that the signature of NO3- being leached was similar to that of the soils, which implicates gaseous N loss as the primary pathway of enrichment.

Given all that, it is still an open question what the proximal controls are on whether soils lose a lot of N to denitrification. Are there other characteristics of soils associated with isotopic enrichment (i.e., high N availability) beyond being hot and dry?

I think I'll try to tackle this in a synthesis. Above are data from 1000 soils that I've pulled together so far. One thing that's clear is that soils with high C:N are rarely enriched much. These soils likely have low N availability and little denitrification. Low C:N soils often have high δ15N, which implies high rates of denitrification.

But there are also low C:N soils that aren't enriched in 15N. Why not? Are they cold? Wet? Do they have high pH? 

That alone is a pretty interesting question.
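If it helps to see the shape of that analysis, here is a minimal sketch of the kind of screen such a synthesis could run: remove the C:N trend, then ask whether the leftover enrichment tracks climate or pH. The file name and column names (CN, d15N, MAT, MAP, pH) are placeholders, not the actual dataset.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical screen: does d15N enrichment not explained by C:N track
# temperature, precipitation, or pH? File and column names are placeholders.
data = np.genfromtxt("soil_d15N_synthesis.csv", delimiter=",", names=True)
cn, d15n = data["CN"], data["d15N"]

slope, intercept = np.polyfit(cn, d15n, 1)      # overall C:N trend
residuals = d15n - (slope * cn + intercept)     # enrichment beyond the C:N trend

for covariate in ("MAT", "MAP", "pH"):          # cold? wet? high pH?
    r, p = pearsonr(data[covariate], residuals)
    print(f"{covariate}: r = {r:.2f}, p = {p:.3g}")
```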

Saturday, August 18, 2012

Guest TNC post: How Do Grasslands Survive Drought?



TNC asked me to write up a post for the blog, Cool Green Science. The post covers some results from our recent Nature Climate Change paper on drought tolerance of grasses. Post is here.

Cool Green Science is cool as far as blogs are concerned. It's interesting to see short pieces from Mark Tercek as well as from on-the-ground staff.


Thursday, August 16, 2012

How to design traits experiments

Figure: Relationship between the number of species sampled and the number of replicates per species across all the trait screening experiments and surveys I have done over the past 15 years.

One of the fundamental tradeoffs in plant trait science (and ecology in general) is how to array replicates among subjects vs. within them. In trait screening research, this usually takes the form of deciding whether to measure a lot of species or a lot of replicates for species.

If one measures a large number of species with little replication, then there is little certainty in any one point.

If one measures few species with a lot of replication, generality in relationships across species is compromised, potentially to the point where an existing relationship among species is not detected.

The tension is an old one and is often addressed by "balancing" designs: measuring an intermediate number of species with an intermediate level of replication.

That would be wrong.

Different scenarios require different approaches. If the goal of a project is to test for relationships among species, then replicate first among species and then within them. This often means just one replicate per species, which seems flawed, but the species is actually the replicate. For a fixed amount of effort, replicating within species reduces the statistical power to detect the overall relationship, even though one replicate per species leaves no way to assess the confidence in any single point.*

*This approach was something that was in the milk at Cedar Creek. In 2001, I published a trait study from Cedar Creek that stated explicitly what was commonly understood: "Only one plant or clone was sampled per species. Although this minimizes our confidence in the value for a parameter of any one species, for a given amount of sampling effort, this approach maximizes the confidence in the overall relationship among all species." That experiment had 76 species. Seems paltry in a way.
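To make that power argument concrete, here is a minimal simulation sketch of my own (not from the 2001 paper): fix a budget of 80 measurements and compare 80 species at one replicate each against 10 species at eight replicates each, when the goal is detecting an across-species relationship between two traits. The effect size and noise levels are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

# Fixed total effort: many species x 1 rep vs. few species x many reps.
# All effect sizes and noise levels below are assumptions for illustration.
rng = np.random.default_rng(1)
designs = {"80 species x 1 rep": (80, 1), "10 species x 8 reps": (10, 8)}
n_sims, alpha = 2000, 0.05

for label, (n_species, n_reps) in designs.items():
    detected = 0
    for _ in range(n_sims):
        mean_x = rng.normal(0, 1, n_species)                    # species-level trait X
        mean_y = 0.4 * mean_x + rng.normal(0, 1, n_species)     # true across-species relationship
        # observations add within-species (individual + measurement) noise
        obs_x = mean_x[:, None] + rng.normal(0, 0.5, (n_species, n_reps))
        obs_y = mean_y[:, None] + rng.normal(0, 0.5, (n_species, n_reps))
        # analyze at the species level, as in a cross-species trait screen
        r, p = pearsonr(obs_x.mean(axis=1), obs_y.mean(axis=1))
        detected += p < alpha
    print(f"{label}: power ~ {detected / n_sims:.2f}")
```

With these assumptions, the 80-species design detects the relationship most of the time and the 10-species design rarely does. The exact numbers shift with the assumed effect and noise, but the direction of the tradeoff does not.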

Other scenarios generate other approaches. For example, if traits are measured to better understand the performance of species in an experiment, then first array replicates across those species and then replicate within the species.

If the objective is to generate community-weighted means, then replicates should be allocated to species in proportion to their abundance. I'm pretty sure no one has ever done this design.
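A minimal sketch of what that allocation could look like, with hypothetical species, abundances, and trait values:

```python
import numpy as np

# Allocate a fixed measurement budget to species in proportion to their
# relative abundance, then compute a community-weighted mean (CWM) trait.
# Species, abundances, and trait values are hypothetical.
rng = np.random.default_rng(2)
abundance = {"Andropogon": 0.50, "Sorghastrum": 0.30, "Bouteloua": 0.15, "Koeleria": 0.05}
total_reps = 40

# proportional allocation, with at least one replicate per species
reps = {sp: max(1, round(w * total_reps)) for sp, w in abundance.items()}
print(reps)   # {'Andropogon': 20, 'Sorghastrum': 12, 'Bouteloua': 6, 'Koeleria': 2}

# simulated SLA measurements per species, then the abundance-weighted mean
species_means = {sp: rng.normal(150, 15, n).mean() for sp, n in reps.items()}
cwm = sum(abundance[sp] * species_means[sp] for sp in abundance) / sum(abundance.values())
print(f"community-weighted mean SLA ~ {cwm:.1f}")
```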

There are other important questions about trait screening design (that I'm trying to work out in a paper). How to incorporate phylogeny and when and how to array replicates across growth environments all influence the design. With any luck, the manuscript I'm leading will document best practices and can be referred to by researchers and reviewers.

The key point to emphasize here is that the best design will often have no replication at the species level. You measure as little as you can on each species for as many species as possible. One measurement of one individual for a lot of species is the best experimental design in many cases and should not be compromised by adding within-species replicates.


Tuesday, August 14, 2012

Summer Reading: Effective Ecological Monitoring



I saw Gene Likens at ESA in Portland. He mentioned a new book he had written with Dave Lindenmayer on ecological monitoring. That was Wednesday. I requested the book Saturday from our library and got it today--apparently it had to be sent up from Wichita State (why would Wichita State have it but not Kansas State?).

Read it tonight.

There was an ethos at Hubbard Brook that struck both Kendra and me as unique. It was a combination of a long-term perspective with a constant vigilance over data sets. At the annual investigators meeting, we remember a number of talks that did little more than add a single data point. One data point. Whatever the value was from last year. Fern nutrient concentrations. Streamwater nitrate. Bird abundance. Salamander counts. Analyzing and reporting data every year seemed like the highest expression of dedication to long-term research.

It made sense. Analyzing long-term data would require annual vigilance. Data should probably be checked each year. Errors can be caught. Responses to unique events can be seen. Explanations and hypotheses can develop over time.

That unique window into some of what Gene Likens has brought to the science of long-term monitoring is now distilled in this new book. It captures the two authors' approaches and thoughts about long-term monitoring, and in some ways it reads like a 50-year summary of intellectual battles. Why every LTER was not issued five copies of this book, I don't know.

Some highlights:

Chapter 4: The problematic, the effective, and the ugly -- some case studies. One whole section is devoted to NEON and TERN. Box 4.1 is entitled "Trepidation": "Writing this chapter was nerve-racking and certainly far from 'career-enhancing' as we have been critical of a major province-level program...and a number of national-level programs...We had concerns for several reasons..."

Chapter 2: Why monitoring fails. Excessive bureaucracy. "Another less obvious, but no less real impediment to long-term monitoring is what might be called the loss of the cultural infrastructure. A field site that might appear to administrators or bureaucrats to have 'spartan' or even unsafe living quarters and/or laboratory facilities, in fact may be the 'heart' of innovative and productive science for the project, allowing scientists to work and live together while doing research, adjacent to the research site. This certainly was the case in the early days of the Hubbard Brook Ecosystem Study..."

Chapter 3: What makes effective long-term monitoring. Frequent use of data. "Another key ingredient for maintaining long records of high quality is the frequent examination and use of these data. Such examinations result in important discoveries and stimulate new research and management questions."

Books like this are rare. We don't tell stories. We don't distill experiences down to wisdom.

This one does.