Latest On The Conservation Gateway

A well-managed and operational Conservation Gateway is in our future! Marketing, Conservation, and Science have partnered on a plan to rebuild the Gateway in the organization’s enterprise content management system (AEM), with a planned launch of a minimum viable product in early FY26. If you’re interested in learning more about the project, contact megan.sheehan@tnc.org!

Spatial Data Accuracy: Local vs Non-Local

Photo: Jim Smith

FIFTH IN A SIX-PART SERIES

This is the fifth in a series of six short blogs that delve into the enigma that is map accuracy.

What has happened so far

The series takeaways leading up to this chapter are (in brief): 1) spatial data usability involves more than a traditional quantitative estimate of map “accuracy”; 2) overall percent accuracy is an incomplete, and often misleading, measure of map accuracy; 3) what one needs the map to represent, and what doesn’t matter, are the drivers; and 4) category accuracy is where to focus if all you have are the numbers.
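To make takeaways 2 and 4 concrete, here is a minimal sketch in Python, built on an entirely hypothetical confusion matrix (the class names and counts are illustrative, not LANDFIRE data), showing how a strong overall percent accuracy can hide a poorly mapped category:

```python
import numpy as np

# Hypothetical confusion matrix: rows are reference ("ground") classes,
# columns are mapped classes. Class names and counts are illustrative only.
classes = ["Conifer", "Hardwood", "Shrub", "Riparian"]
cm = np.array([
    [450,  20,  10,  5],   # abundant types dominate the sample...
    [ 15, 300,  10,  5],
    [ 10,  15, 120,  5],
    [ 12,   8,  10, 15],   # ...while the rare type is mostly missed
])

# Overall percent accuracy: correctly mapped samples / all samples.
overall = np.trace(cm) / cm.sum()
print(f"Overall accuracy: {overall:.1%}")  # ~87.6%

# Category accuracy tells a different story.
# Producer's accuracy: of the reference samples in a class, the share
# mapped correctly (row-wise). User's accuracy: of the pixels mapped as
# a class, the share that really are that class (column-wise).
for i, name in enumerate(classes):
    producers = cm[i, i] / cm[i, :].sum()
    users = cm[i, i] / cm[:, i].sum()
    print(f"{name:>8}: producer's {producers:.1%}, user's {users:.1%}")
```

Overall accuracy comes out near 88%, yet the Riparian row shows a producer’s accuracy of roughly 33%. If riparian vegetation is what your application hinges on, the headline number is actively misleading.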

Picking up where I left off

Lest you think that I’m prevaricating, hedging my bets, or making excuses, consider that there is no definitive, absolute answer to the “How accurate is this map?” question.

If this were an algebraic equation, each of those “whats” (what you need the map to represent, what doesn’t matter) would be an “X,” because each is an unknown to the data producer.

We’ve heard that “all politics is local.” In the same vein, all map accuracy is local as well. Whether a dataset is intended to be or not, virtually every quantitative map accuracy assessment is generated with very local data and interpreted locally for local applications. That is, how does the map compare to what I see out my window, to what I see as I drive down the road, or to what a set of geo-referenced vegetation plots in my location indicates? While this may not seem like a “fair” evaluation, in that it doesn’t indicate whether non-local datasets (like LANDFIRE) achieve their program goals, it is certainly valid procedurally. I think it is true that if the “local” accuracy is high, it is more likely that the “regional” accuracy is high as well, IF the assessment dataset and the assessment process encompass the entire region. Ditto for low accuracy: low local accuracy is more likely indicative of low regional accuracy than of high regional accuracy, BUT that should never, never be assumed outright.
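To illustrate just how “local” a plot-based check is, here is a minimal sketch, again in Python with synthetic stand-ins (a fabricated map grid and a handful of made-up plots already expressed in pixel coordinates), not a real workflow:

```python
import numpy as np

# Synthetic vegetation map: a grid of class codes standing in for a real
# raster. A real workflow would read the raster and project the plot
# coordinates into its pixel grid first.
rng = np.random.default_rng(1)
veg_map = rng.integers(0, 4, size=(100, 100))

# Geo-referenced field plots as (row, col, observed_class) tuples.
plots = [(10, 12, 2), (40, 55, 0), (71, 3, 3), (88, 90, 1), (25, 60, 2)]

# Agreement between the mapped class and the class observed on the ground.
hits = sum(veg_map[row, col] == observed for row, col, observed in plots)
print(f"Local agreement at {len(plots)} plots: {hits / len(plots):.0%}")
```

Five plots are, so to speak, five views out the window: they say something about those five pixels, and whether the figure generalizes to a region depends entirely on how the plots sample that region.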

When LANDFIRE states that the spatial data, as delivered, is appropriate for large landscape analyses, it means the data should correctly display the basic pattern of vegetation when viewed AT the large landscape, regional, or national scale, not zoomed in to a watershed, small area, or pixel. Unfortunately, zoomed in is usually how we assess its accuracy in our hearts and minds. Here’s an analogy: think of a map as an “impressionist” painting. If you look at a tiny area of the painting you can see the quality of the brush work, the way the paint is layered, and how the colors are juxtaposed, but it may not look like anything in particular. You have to step back and view the painting as a whole (the large landscape) to really assess its beauty.

Soooooo....

How could LANDFIRE (or other non-local data sets) be assessed for practical application to real mapping tasks?

  • One way is to ask for qualitative review by regional experts. Key questions would include: Are the mapped categories distributed relatively well across the large landscape or region? Are vegetation types mapped that should not be present? Are vegetation types missing that should be in the final data set? Do the types appear in about the right patch size? Do vegetation types occur near or next to the correct vegetation types? Is there about the right relative amount of the primary vegetation type? Getting this type of review can be challenging.
  • Another approach is to compare the spatial data product against another regional data product, assuming the two map nearly the same vegetation types and represent the same time period; a minimal sketch of such a cross-tabulation follows this list. However, consider that if a comparable “alternative” regional product already exists, creating another doesn’t make much sense.
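For the second bullet, here is a minimal sketch of a product-versus-product cross-tabulation, using synthetic arrays in place of two co-registered regional products that share a legend (real products would first need alignment and a category crosswalk):

```python
import numpy as np

# Two hypothetical, co-registered categorical rasters with one shared
# legend (classes 0..3). Product B mostly agrees with product A, with
# random disagreement sprinkled in.
rng = np.random.default_rng(0)
n_classes = 4
product_a = rng.integers(0, n_classes, size=(200, 200))
disagree = rng.random(product_a.shape) < 0.15
product_b = np.where(disagree,
                     rng.integers(0, n_classes, size=product_a.shape),
                     product_a)

# Cross-tabulation: cell [i, j] counts pixels mapped as class i in A and
# class j in B. The diagonal measures agreement, not accuracy; neither
# product is assumed to be the truth.
crosstab = np.zeros((n_classes, n_classes), dtype=int)
np.add.at(crosstab, (product_a.ravel(), product_b.ravel()), 1)

agreement = np.trace(crosstab) / crosstab.sum()
print(f"Pixel-wise agreement: {agreement:.1%}")
print(crosstab)
```

An off-diagonal hotspot in the cross-tabulation flags a pair of vegetation types the two products systematically disagree about, which is often more informative than the single agreement percentage.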

The truth is that “non-local” accuracy assessments are difficult to pull off, so we nearly always revert to local accuracy assessments, which may or may not indicate the usefulness of a dataset for the applications it was designed to support.

At the core, data users alone are in a position to determine whether a dataset works for their specific purpose. LANDFIRE’s responsibility as a data producer is to provide whatever information we can: qualitative, quantitative, internal, or external. We have assessed, and will continue to assess, our spatial data product accuracy where possible using local information.

My next blog, LANDFIRE Agreement Results, concludes the series: it describes LANDFIRE’s approach, and I jump into the heart of the accuracy fire. Care to jump in with me?

Contact Jim