By Jim Smith -- LANDFIRE Project Lead
(aka "Codger")
Recently I saw a bumper sticker that said, “Just because you can doesn’t mean you should.” I couldn’t have said it better, especially regarding zooming in on spatial data.
Nowadays (alert: grumble approaching), people zoom in tightly on their chosen landscape, region, and even pixel, whether the data support that kind of close-up view or not. Predictably, that means a LOT of misapplication of perfectly good science, followed by head scratching and complaining.
To set a context, I want to look at the “good ole days,” when people used less-precise spatial data but their sense of proportion was better. By “ole,” I mean before the mid-1980s or so, when almost all spatial data and spatial analyses were “analog,” i.e., Mylar map layers, hard-copy remote sensing images, and light tables (Ian McHarg’s revelation?). In 1978, pixels on satellite images were at least an acre in size. Digital aerial cameras and terrain-corrected imagery barely existed. The output from an image processing system was a line printer “map” that used symbols for mapped categories, like “&” for Pine and “$” for Hardwood (yes, smarty pants, that was about all we could map from satellite imagery at that time). The power and true elegance we have at our fingertips today was unfathomable when I started working in this field barely 30 years ago.
Let me wax nostalgic a bit more -- indulge me, because I am an old GIS coot (relatively, anyway). I remember command-line ArcInfo, and when “INFO” was the actual relational database used by ESRI software (did you ever wonder where the name ArcInfo came from?). I remember when ArcInfo came in modules like ArcEdit and ArcPlot, each with its own manual, which meant a total of about three feet of shelf space for the set. I remember when ArcInfo required a so-called “minicomputer” such as a DEC VAX or Data General, and when an IBM mainframe computer had only 512K [not MB or GB] of RAM available. I know I sound like the clichéd dad telling the kids how bad it was when he was growing up -- carrying his brother on his back to school through knee-deep snow with no shoes and all that -- but pay attention anyway, ‘cause dad knows a thing or two.
While I have no desire to go back to those days, there is one concept I really wish we could resurrect. In the days of paper maps, Mylar overlays, and photographic film, spatial data had an inherent scale that was almost always known and really could not be effectively ignored. Paper maps had printed scales -- USGS 7.5-minute quads were 1:24,000, so one tiny millimeter on one of those maps (a slip of a pretty sharp pencil) represented 24 meters on the ground, almost as large as a pixel on a mid-scale satellite image today. Aerial photographs had scales, and the products derived from them inherited that scale. You knew it -- there was not much you could do about it.
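If you have never had to do that arithmetic, it is simple enough to sketch in a few lines of Python. The helper below is hypothetical -- not from any GIS package -- and just shows the conversion from map distance to ground distance for a given scale denominator:

```python
def ground_distance_m(map_distance_mm: float, scale_denominator: float) -> float:
    """Ground distance, in meters, represented by a distance measured on a map."""
    # At 1:24,000, one map millimeter covers 24,000 mm = 24 m on the ground.
    return map_distance_mm * scale_denominator / 1000.0

print(ground_distance_m(1.0, 24_000))   # 24.0 -- a pencil slip on a 7.5-minute quad
print(ground_distance_m(1.0, 100_000))  # 100.0 -- the same slip at 1:100,000
```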
Today, if you care about scale, you have to investigate for hours, or read nearly unintelligible metadata (if available), to understand where the digital spatial data came from -- that stuff you are zooming in on 10 or 100 times -- and what their inherent scale is. I think that most, or at least many, data users have no idea they should even be asking the question about appropriate use of scale -- after all, the results look beautiful, don’t they? Skipping this pesky question means that users worry about how accurately categories were mapped without thinking for a New York minute about the data’s inherent scale, or about the implied scale of the analysis. I am especially frustrated with “My Favorite Pixel Syndrome,” in which a user dismisses an entire dataset because it mis-maps the user’s favorite 30-meter location, even though the data were designed to be used at the watershed level or even larger geographies.
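If you do want to ask the question, most modern raster formats at least record their grid spacing, and free tools will read it. Here is a minimal sketch, assuming you have the open-source rasterio library installed; “landcover.tif” is a hypothetical file name, so point it at your own data:

```python
import rasterio  # open-source Python raster library built on GDAL

# "landcover.tif" is a placeholder; substitute the raster you are zooming in on.
with rasterio.open("landcover.tif") as src:
    xres, yres = src.res                  # pixel size in the units of the CRS
    print("CRS:       ", src.crs)
    print("Pixel size:", xres, "x", yres)
    print("Extent:    ", src.bounds)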
So, listen up: all that fancy-schmancy-looking data in your GIS actually has a scale. Remember that, kids, every time you nonchalantly zoom in, create a map product, or run any kind of spatial analysis. Believe an old codger.
Contact Jim Smith