When we think of the future of scientific data collection, we may envision automated sensor networks, robots, or probes diligently gathering information and instantly sending it to scientists for analysis in the comfort of an office or laboratory. That future is both here and still quite a ways off. Despite the potential savings promised by new technology, cost and reliability remain chronic drawbacks. Researchers are instead turning to increasingly popular citizen scientist models to generate gobs of inexpensive, crowd-sourced data.
The oldest example is Audubon’s Christmas Bird Count (CBC) data set (birds!), which is about to take place for the 115th consecutive year. The CBC, along with the North American Breeding Bird Survey, has been instrumental in understanding the dynamics of bird populations over the last several decades and how they may shift in the future due to climate change. Audubon’s Birds and Climate Change report synthesizes a massive amount of this data and is freely viewable by the public online.
An aside: I have to admit I’m a bit concerned that Audubon published this popular version of the report before their methods and conclusions were vetted by peer review, but I’ll withhold my critical perspective until a full technical report is out, which should be sometime in the next few months. But that’s another story; stay tuned.
While Audubon laid the foundation of citizen science, its future lies in user-friendly, crowd-sourced online databases. The explosion of phone-computers and ubiquitous wireless connectivity has generated massive amounts of inexpensive data over the last few years, and scientists are taking advantage.
The example closest to my heart is the Cornell Lab of Ornithology’s eBird.org, which has become a mecca for birders worldwide. It has been so successful because the benefits are shared by both sides: the scientists who use the database and the birders who report to it. Data is publicly available for all to see and search in the sexiest way possible: with interactive maps and easily generated species lists for all imaginable scales of time and geography. When birders want to learn something about a bird species’ distribution and abundance, eBird.org is increasingly one of the key places they look. And the database has already been mined to produce hundreds of scientific publications.
Following a similar crowd-sourced model is inaturalist.org, launched by the California Academy of Sciences six years ago. Anybody with a camera-phone can snap photos of any plant or animal and upload them to the database. While it probably won’t ever match eBird’s popularity among bird people, it does fill one important bird-related niche: window collision victims. Birders do not typically report dead birds, as these do not “count” on lists under the traditional birding rules set out by the American Birding Association. But birds that have crashed into windows are easy to photograph and important to document for obvious conservation reasons.
Nicholas School PhD candidate Natalia Ocampo-Penuela is using iNaturalist to document bird window collisions not only on Duke’s campus but also worldwide. Her project has already generated data from several US states as well as from other countries such as Colombia and Ecuador. For more on the bird window collisions project, see the website and/or my previous blog post on the topic.
Even if you don’t know what kind of deceased bird you have found, you can still upload data to iNaturalist as long as you snap a photo. The system relies on physical evidence (a photo or sound recording), and identities are confirmed or generated through a peer review process. You don’t have to know anything about insects or snakes in order to send in a photo of whatever happens to scurry or slither by; a community of volunteer experts will identify the taxon for you. And nothing enters the ‘research grade’ database until the identity has been confirmed by an independent citizen scientist reviewer.
eBird, in contrast, relies on regional reviewers for quality control, but only unusual observations get flagged for review. Expected species pass into the database unreviewed, whether identified correctly or not. Many birds are easy to identify, even for a novice, and in the grand scheme of things the sheer quantity of data trumps the noise produced by a small percentage of errors. But problems can arise from confusing species nomenclature or cryptic species pairs. For example, I have caught myself several times reporting Red-headed Woodpecker when I meant to report Red-bellied Woodpecker (which has a blazing bright red head); and how many novice birders know how to differentiate House Finch from Purple Finch, or Sharp-shinned from Cooper’s Hawk? I know the folks at Cornell and their dedicated team of volunteers are working hard to address these issues.
I highly recommend that Nicholas School students contribute observations to these databases. Not only does it provide a service to the scientific community (fulfilling in its own right), but it can also turn any natural foray into a fun, casual science mission. And there is a lot to be learned about species identification and distribution in the process of participating, thanks to the generosity of volunteer reviewers.
An army of robotic, automated microphones and cameras may eventually render all of us birders and field naturalists irrelevant, but in the meantime, scientists are tapping into our collective expertise to advance knowledge of biodiversity patterns across spatial scales all over the world.