OpenStreetMap in Africa (2013-2015), beautifully visualized

The Ito World crew is back at it with a new OpenStreetMap visualization, this time for Africa. Results are shown at the continental scale and for selected cities over the last couple years. The final product is stunning, as usual.

Growth in West Africa as part of the Ebola response and the Nigeria eHealth Import are the most distinctive. Other growth areas include a broad swath of East Africa and the incredible density of the Map Lesotho project. Most impressive, however, is that the growth is not constrained to these areas; it is distributed across the continent. The missing areas are gradually filling in; it is only a matter of time…

Diapers from the Crowd


While this is a departure from the typical content on this blog, I hope you’ll spare a moment to read about a fundraising effort my wife and I put together for a cause that has grown near and dear to our hearts: the difficulty many mothers face in diapering their children.

It is stunning that in this country up to 30% of mothers struggle to meet the diaper requirements of their children…30%. This stress is the leading cause of mental health problems in new mothers. And to make it worse, there is a hole in the social safety net. Traditional support programs for low-income mothers and families (WIC and SNAP) do not cover diapers and wipes. Recent stories in The Atlantic and the Baltimore Sun cover the “diaper dilemma” problem in detail. This problem has led to the creation of “diaper banks” around the country, including the DC Diaper Bank near us in Washington, DC.

Hilary and I have been humbled and amazed by having a child. So, as our first daughter, Flynn, has just turned a year old, we wanted to do something to help out those struggling to diaper their children. Our goal is to diaper 2 kids for a year, which costs about $2,000. This comes out to about $20 per week per kid.

If anyone is interested in helping out, please visit Flynn’s 1st Birthday Diaper Drive page on GoFundMe to learn more about the issue and contribute to the cause. All funds will go to the DC Diaper Bank.

Thanks from Flynn, Hilary, and me.


The Flatness of U.S. States



It all started with delicious pancakes and a glorified misconception. In a 2003 article published in the Annals of Improbable Research (AIR), researchers claimed to scientifically prove that “Kansas is Flatter Than a Pancake”. The experiment compared the variation in surface elevation obtained from a laser scan of an IHOP pancake with an elevation transect across the State of Kansas. And while the researchers’ conclusion is technically correct, it is based on two logical fallacies. First, the scale of the analysis shrunk the 400-mile-long Kansas elevation transect down to the 18 cm width of the pancake, thereby significantly reducing the variability of the elevation data. Second, pancakes have edges, which create significant relief relative to the size of the pancake, approximately 70 miles (!) of elevation if applied at Kansas scale (Lee Allison, Geotimes 2003). Using this approach, there is no place on earth that is not flatter than a pancake.

Now, I can take a joke, and at the time I thought the article was clever and funny. And while I still think it was clever, it began to bother me that the erroneous and persistent view that Kansas is flat, and therefore boring, could have negative economic consequences for the state. I grew up on the High Plains of southwestern Kansas, where there are broad stretches of very flat uplands. But even within the High Plains region there are areas with enough relief that they certainly cannot be considered flat as a pancake…and that doesn’t include the other two-thirds of the state.


Official Physiographic Regions for the State of Kansas. Note the large number of regions denoted by “hills.”

The joke of it is that the official Physiographic Regions of Kansas Map describes the majority of the state in terms of hills: Flint Hills, Red Hills, Smoky Hills, Chautauqua Hills, Osage Cuestas (Spanish for “hills”). Not to mention the very hilly Glaciated Region of northeastern Kansas; anyone who attended classes on Mount Oread can confirm that for you. And after travelling through other areas of the country, I realized that Kansas isn’t even close to the flattest state.

As luck would have it, a few years after the AIR article I found an opportunity to work on this question of flatness and how to measure it. As part of my PhD coursework I was investigating the utility of open source geospatial software as a replacement for proprietary GIS and needed a topic that could actually test the processing power of the software. Combining my background in geomorphology and soil science with a large terrain modeling exercise using the open source stack offered the perfect opportunity to address the question of flatness. What emerged from that work was published last year (2014) in the Geographical Review as a paper coauthored with Dr. Jerry Dobson entitled “The Flatness of U.S. States”.

The article is posted below, so I won’t rewrite it here, but the central goals were twofold. First, create a measure of flatness that reflects the human perception of flat: one that is quantitatively based, repeatable, and globally applicable. Second, understand how the general population of the U.S. thinks about flat landscapes, and whether there was a bias towards assuming Kansas was the flattest state. This blog post focuses more on the details associated with the first goal, while the article posted below describes The American Geographical Society’s Geographic Knowledge and Values Survey that provided data for the second.


There were many measures of flat that had been developed in the geomorphological literature, but they tended to be localized measures, meant for hydrological and landscape modeling. I wanted something that could capture the sense of expanse that you feel in a very flat place. Beginning with that thought, I tried to imagine a perfect model of flatness. It had to expand in all directions and be vast. The mental model was that of being on a boat in flat seas and looking out at nothing but horizon in all directions. With a little research, I discovered there is an equation for determining how far you can see at sea. It is height dependent, both for the observer and the object of observation, and it calculates that a 6 foot / 1.83 m tall person, looking at nothing on the landscape (object of observation = 0 ft), can see 5,310 meters before the curvature of the earth takes over and obscures the view. This was a critical variable to determine: the distance measure for capturing the sense of “flat as a pancake” is 5,310 meters (at a minimum).
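
For reference, the relationship itself is simple. A common navigational form of the horizon-distance equation (with standard atmospheric refraction baked in; the exact formula and eye-height assumptions used in the paper may differ slightly) is:

d ≈ 1.17 × √h, with d in nautical miles and h (observer eye height) in feet

For a 6 ft observer looking at an object of height zero, d ≈ 1.17 × √6 ≈ 2.87 nautical miles, or roughly 5,310 meters, which matches the viewing distance used in the analysis.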


A conceptual model of flatness.

With the perception model and distance measure in hand, I needed to determine the appropriate digital elevation model to use. Even though the study area for this paper is the Lower 48 of the United States, a global dataset was needed so that the methodology could be applied globally. The NASA Shuttle Radar Topography Mission (SRTM) data that had been processed by the Consortium of International Agricultural Research Centers (CGIAR) Consortium for Spatial Information (CSI) was the best choice. Specifically, the 90 meter resolution SRTM Version 4.1 was used, and it is available here:

In terms of software, the underlying goal of this project was to use only open source software to conduct the analysis. This meant I had to become familiar with both Linux and the QGIS and GRASS workflows. I built an Ubuntu virtual machine in VirtualBox (eventually switching to VMware Workstation) with QGIS 1.2 and GRASS 6.3 with the QGIS plugin; by the time I finished the project I was using Ubuntu 10.04, QGIS 1.8, and GRASS 6.4 (and sometimes GRASS 7.0 RC). You don’t realize how much “button-ology” becomes ingrained until you have to switch toolkits, and the combined Windows-to-Linux and ESRI-to-QGIS/GRASS transition was rough at times. There were times I knew I could complete a task in seconds in ArcGIS, but spent hours figuring out how to do it in QGIS and GRASS. However, it is worthwhile to become facile in another software package, as it reinforces that you have to think about what you are doing before you start pushing buttons.

The open source stack has come a long way since I started this project back in 2009, with usability being the greatest improvement. It is a lot easier now for a mere mortal to get up and running with open source than it was then, and the community continues to make big strides on that front. From a functionality standpoint, I did some comparisons between GRASS (Windows install) and ArcGIS 9.2 GRID functions and found that they were roughly equivalent in terms of processing speeds. It seems there are only so many ways to perform map algebra (I discuss the new game-changing approaches to distributed raster processing at the end).

The first attempts to model flatness used a nested approach of slope and relief calculations run at different focal window sizes that were then combined into an index score. However, they just didn’t seem to work that well. To start I was only working on a Kansas subset and compared various model outputs to places I knew well. In researching other analysis functions I came across the r.horizon algorithm. Originally designed for modeling solar radiation, it has an option that traces a ray from a fixed point at a set azimuth, out to a set distance, and measures the angle of intersection of the ray and the terrain. Discovering this function changed my whole approach; it automatically incorporated the distance measure and was only concerned with “up” terrain. To model flat, r.horizon needed to be run for 16 different azimuths, each 22.5 degrees apart, to complete the full 360 degree perspective. Additionally it needed to be run for every raster cell. The output was then 16 different layers, one for each azimuth, with the intersection angle of the ray and the terrain.
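
As a rough illustration of that step, a single r.horizon call can produce all 16 azimuth layers. This is only a sketch: it uses current GRASS 7 parameter names (the project itself ran on GRASS 6.3/6.4, where the module’s options are named differently) and a hypothetical DEM named srtm_lower48.

# Sketch only: GRASS 7 syntax, hypothetical map names
# 16 azimuths, 22.5 degrees apart, rays truncated at the 5,310 m viewing distance
# -d writes horizon angles in degrees rather than radians
r.horizon -d elevation=srtm_lower48 step=22.5 maxdistance=5310 output=horangle

The result is 16 rasters, one per azimuth (each named with the azimuth appended to the output basename), holding the angle between a level ray and the terrain in that direction for every cell.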


Graphic displays how the Flat Index is calculated for every 90 meter cell, using independent measures collected across 16 different directions.

Next I had to determine at what angular measurement flat stopped being flat. This is a subjective decision and one based on my experience growing up on the High Plains. On a return trip to my hometown I surveyed a number of places to get a feel for what was truly flat and what wasn’t. Upon reviewing the topographic maps of those areas, I determined that an upward rise of 100 ft / 30 meters over a distance of 3.3 miles was enough to stop the feeling of “flat as a pancake.” This correlated to an angular measure of 0.32 degrees. Now this measure is admittedly arbitrary, and it would be interesting to see how others would classify it. I did review it with a few other western Kansas natives, who agreed with me. Note, we were not concerned with down elevation at all. This is because canyons and valleys do not impact the perception of flatness until you’re standing near the edge; anyone who’s been a mile away from the South Rim of the Grand Canyon can confirm that you don’t know it’s there.
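
For anyone who wants to check the cut point, it follows directly from the rise and the viewing distance:

θ = arctan(30 m / 5,310 m) ≈ arctan(0.00565) ≈ 0.32 degrees

In other words, if the terrain along a 5,310 m ray climbs 30 meters or more, that direction no longer reads as “flat as a pancake.”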


Graphic displays the angular measure criteria (0.32 degree) used to make the binary flat/not flat classification.

The data processing for this project was massive. It required downloading all the individual SRTM tiles for the Lower 48 (55 tiles, over 4 GB in total size), importing and mosaicking them (r.patch), setting regions (g.region), subsetting the result into four sections because of a bug in r.horizon (r.mapcalc conditional statements), running r.horizon for 16 azimuths on every raster cell in the Lower 48 (1,164,081,047 cells), running the cut point reclassification (r.recode), and then compiling the final index score (r.mapcalc). Each segment of the DEM took about 36 hours to process in r.horizon, meaning the entire Lower 48 took about 6 days total.
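
For anyone who wants to follow along, here is a hedged sketch of the preprocessing steps. It uses GRASS 7 syntax and hypothetical file and map names, and it assumes the import step used, the standard GDAL-based raster importer (the original work ran the GRASS 6.x equivalents of these commands).

# Import each downloaded SRTM GeoTIFF tile into the GRASS location
for tif in srtm_*.tif; do input="$tif" output="$(basename "$tif" .tif)"
done

# Mosaic the tiles into a single DEM and set the computational region to match it
r.patch input=$(g.list type=raster pattern="srtm_*" separator=comma) output=srtm_lower48
g.region raster=srtm_lower48 -p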

In the final step, the 16 individual azimuth scores were added together (r.mapcalc) to create a single index score ranging from 0-16 (0 being non-flat in all directions, 16 being flat in all directions). This index score was divided into four groupings: Not Flat (0-4), Flat (5-8), Flatter (9-12), and Flattest (13-16). Zonal statistics (r.statistics) for each state were extracted from the final flat index, also known as the “Flat Map”, to calculate the rankings for flattest state. A water bodies data layer was used as a mask in the zonal statistics (r.mask) to eliminate the impact of flat surface water elevations (reservoirs and lakes) from the final calculation. A second mask was also used to eliminate the influence of two areas of bad data located in the southeastern U.S., mainly in Florida and South Carolina. Both the total number of flat pixels and the percent area of flat pixels were calculated and ranked for the flat, flatter, and flattest categories. See the article below for a table of results.
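
To make those final steps concrete, here is a hedged sketch of the reclassification, index compilation, masking, and zonal summary. Again, this uses GRASS 7 syntax and hypothetical map names (horangle_*, water_bodies, state_bounds), and for brevity it leans on r.mapcalc and r.series for the reclassification and summation and on r.stats for the cross-tabulation, rather than the exact r.recode, r.mapcalc, and r.statistics calls used in the paper.

# Reclassify each azimuth's horizon-angle raster into a binary flat (1) / not flat (0)
# layer, using the 0.32 degree cut point
for map in $(g.list type=raster pattern="horangle_*"); do
    r.mapcalc expression="flat_${map} = if(${map} < 0.32, 1, 0)"
done

# Sum the 16 binary layers into the 0-16 Flat Index
r.series input=$(g.list type=raster pattern="flat_horangle_*" separator=comma) output=flat_index method=sum

# Mask out water bodies so flat lake and reservoir surfaces do not inflate the results
r.mask -i raster=water_bodies

# Cross-tabulate flat index area and percentage by state
r.stats -a -p input=state_bounds,flat_index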


Below are a series of maps that display the final Flat Index. The spatial distribution of flat areas is intriguing, confirming some of our initial hypotheses and challenging others. Interesting areas include the Piedmont and coastal plains of the eastern coastal states, Florida and the coastal areas of the Gulf States, the Red River Valley in Minnesota and North Dakota, the glacial outwash in Illinois and Indiana, the Lower Mississippi River valley, the High Plains region of the Great Plains, the Basin and Range country of the Intermountain West, and the Central Valley of California. A complete table of the state rankings is available in the article, and there are several more zoomed-in maps available below. Each image is clickable and will open a much larger version.


Map shows the Flat Index, a.k.a the “Flat Map”, for the Lower 48 of the United States.


Map displays the Flatter and Flattest Categories of the Flat Index. Useful for visualizing the patterns of flat lands within the continental United States.


Map displays the Flattest Category of the Flat Index. These are the areas that can be called “flat as a pancake.”


Rank order of States by the percentage of their area in the Flattest class. As initially thought, Kansas isn’t even close to the flattest.


The media response to what Jerry Dobson, my coauthor and PhD advisor, and I refer to as the “Flat Map” took me by surprise. Jerry was always confident it would be well received, but the range of international, national, and regional coverage it received was beyond anything I imagined. Articles about the Flat Map were written in The Atlantic, The Guardian, National Geographic, Mental Floss, Smithsonian Magazine, Chicago Tribune, Williston Herald, Des Moines Register, Lincoln Journal Star, Hays Daily News, Dodge City Daily Globe, and one of my favorites (just for the headline of the forum post) the WavingTheWheat Forums.

And just recently, Jerry sent along this little gem from the 2015 Kansas Official Travel Guide…that’s right, the Flat Map made the Tourism Guide. In the very chippy AIR response to the Flat Map, the AIR editors indicate they got a call from the Kansas Director of Tourism. I’ll take this.


Does the perception of flatness impact tourism? Seems the State of Kansas is interested in projecting that the Kansas landscape has hills.

More Maps


Map shows the Flat Index for the Lower 48 of the United States overlaid with the boundaries of the USGS Physiographic regions. Interesting correlation between flat areas and physiographic boundaries.


Map displays the Flat Index over Florida, note large areas of flat land in southern half of the state and along the panhandle coast.


Map shows the Flat Index over Louisiana and the Lower Mississippi Valley. Large areas of flat lands occur within the river valley and along the coastal areas.


Map shows the Flat Index over Illinois and Indiana. Note the huge area of Illinois within the Flattest class, the result of glacial outwash geomorphic processes.


Map shows the Flat Index over northern Texas and Oklahoma. Note large tracts of flat land in the western High Plains region.


Map shows the Flat Index over Kansas. Note large areas of flat land in the western High Plains region and in the central area of the state corresponding to the Arkansas River valley and McPherson-Wellington Lowlands physiographic province.


I would like to thank Dr. Jerry Dobson for his efforts on this paper. We worked together conceptualizing “flat” and how to build a novel, terrain-based, and repeatable method for measuring it. It was a long road to get the Flat Map out to the world, and Jerry was a constant source of inspiration and determination to get it published. When I was swamped with work at the State Department, Jerry pushed forward on the write-up and on talking with the media.


In terms of the future, there is much more that can be done here. New distributed raster processing tools (MrGeo and GeoTrellis) could rapidly increase processing speeds and provide an opportunity for a more refined, multi-scalar approach to flatness. New global elevation datasets are also becoming available and could reduce error in the analysis, particularly in forested areas. If I were to do it again, the USGS National Elevation Datasets, particularly at 30 meter and even 10 meter resolution, would be a great option for the United States. On the perception front, the terrain analysis results could be compared with landcover data to determine how landcover affects perception. Social media polling could also gather a huge amount of place-based data on “Is your location flat?” and “Is your location boring?”. I would also like to get the data hosted on a web mapping server somewhere, so people could interact with it directly. A tiled map service and the new Cesium viewer would be a great tool for exploring the data. If anyone is interested in working together, let me know.


Below is a pre-publication version of the article submitted to Geographical Review. Please cite the published version for any academic works.

Download the PDF file.

Moving forward…

Just wanted to let everyone know that I am moving on from Boundless. It was a fascinating ride, and I learned a lot about startup life. Launching a new product is a true challenge, requiring a deft hand to manage all the constituent elements of the business. We got close, but strategic priorities required a narrower product focus. I wish everyone there well, the company has a bounty of talented people and the sky’s the limit.

Moving forward, I am going to take the next couple of months to finish my dissertation in Geography (stay tuned, lots coming on MapGive, Imagery to the Crowd, and disruptive innovation). Completing the PhD is my focus, but I’m looking forward to exploring new options. I’m open to continuing down the product management and private sector paths, but I also miss the analytical and complex emergency focus of my previous work. I’ll be reaching out to friends around town and beyond, and if anyone has any suggestions, please let me know.

With the growth in open source software, cloud computing, open data, imagery, point clouds, and the Internet of Things, there are going to be an amazing array of new opportunities for geographers. So regardless of what comes next, I will continue to use geographic data, tools, and analysis to disrupt existing workflows and business models, and strive to make the world a better place.



Dream Team and the Rise of Geographers


Image links to original article, which seems to work best in Firefox.

Back in September, the team at Government Executive did a fantasy football-inspired take on government leadership. Entitled the Dream Team, the short piece highlights eight bureaucrats found throughout the government (or formerly in the government, in my case). To use the words of the GovExec authors, “here are some of the folks we’d love to see on any leadership team tackling the kinds of big problems only government can address.”

Collectively, this group has a range of skills: finance, acquisitions, program and project management, legal, IT security, HR, software platform development, disaster recovery…and somewhat curiously, two geographers.

I’m humbled to have been included on this list, particularly given the accomplishments of the other geographer, Mike Byrne.  That said, what I find more intriguing is how two geographers made this list to begin with. What is it about geography that is gaining the attention of management and leadership journalists? Why would they think these are the skills needed on government teams to solve big problems? And why now?

Starting with the question of awareness, it’s clear that digital geography and maps have captured the public’s attention. The combination of freely available Global Positioning System (GPS) signals, ubiquitous data networks, high resolution imagery, increasingly powerful mobile devices, and interoperable web services has fundamentally changed how the average person interacts with maps and geographic data. In the ten years since the introduction of Google Earth, the expectation of the average person is now to have complex, updated, descriptive, interactive maps at their disposal anytime, anywhere, and on any device. This shift is nothing short of revolutionary.

And while the glitz of slippy maps and spinning globes has brought the public back to maps, there is more to the story of why geographers are critical elements of multidisciplinary leadership teams. I believe there are two key characteristics that set geographers apart: Information Integration and Problem Solving. The first is the capacity of the spatial dimension to integrate information across disciplines, and the second is how spatial logic combined with digital tools can predict, analyze, and visualize the impact of a policy decision.

Begin TL;DR

Let’s begin with the idea of information integration. At its core, Geography is a spatial science that utilizes a range of qualitative, descriptive, quantitative, technical, and analytical approaches in applications that cross the physical sciences, social sciences, and humanities.  It may seem trite to say, but everything happens somewhere, so anything that involves location can be studied from a geographic perspective.  Where most disciplines have fairly defined domains of knowledge, Geography, and its focus on the spatial dimension, cuts laterally across these domains.

The cross-cutting nature of location is the fundamental reason why Geography is in a resurgence. Geographic location provides the mechanism to integrate disparate streams of information: data that relates to one discipline can be linked to other data simply by its location. As a conceptual framework, Geography is an integrative lens (what are the forces that interact to define the characteristics of this location?), and when combined with Geographic Information Systems (GIS) technology, spatial location acts as the relational key used to link tables of information together in a database. It is this union of mental framework and technology that gives Geography a unique capacity for information integration.

However, information is typically aggregated for a purpose: the goal is to solve a problem, find an answer, or understand a situation, and Geography offers unique tools for that as well. All problem solving is about breaking a complex problem into divisible, solvable units. For a geographer this starts with the application of spatial logic to the problem; when solving a multivariate problem, a geographer will accept the spatial distribution and spatial associations of a phenomenon as primary evidence, and then seek to discover the processes that led to that distribution. This contrasts with most disciplines, where process-based knowledge of individual characteristics is combined, then tested against the spatial distribution. The elevation of spatial logic over process logic is the key differentiator between a geographer and a domain-specific analyst.

Bringing spatial logic into the technological domain relies upon Geographic Information Science (GISc) to provide the conceptual framework, algorithms, and specialized tools needed to analyze data encoded with location information. It is the analytical power of these functions, integrated with the data collection, storage, retrieval, and dissemination tools of GIS, that forms the toolkit for problem solving used by geographers. Additionally, cartographic visualization provides a mechanism to encode these data and analyses so that complex spatio-temporal relationships can be displayed and quickly understood. Whether it is data exploration, sense making, or communicating results, displaying geographic data in map form is a tremendous advantage over text. And now with the web, cartography can be interactive, cross multiple scales, and be dynamic through time.

So, why is now the time for geographers? The answer is threefold: first, the scale and complexity of the problems we are facing; second, the amount and variety of information that can be applied to those problems; and third, the maturity of the digital geographic tools. Finding solutions to deal with climate change, energy, sustainable development, disaster risk reduction, and national security will require interdisciplinary approaches that are firmly grounded in the spatial dimension. With the transition to a digital world, society (governments included) finds itself in a state of information overload. In a world where data is plentiful, value shifts from acquiring data to understanding it. Geography as a discipline, and geographers equipped with a new generation of spatial information technology, are well adapted to this new paradigm.

Done correctly, the modern geographer has broad academic training across a range of disciplines, uses the spatial perspective as a means of information integration and analysis, and is facile with the digital tools needed to collect, store, analyze, visualize, and disseminate geographic information and analysis. Combine these skills with management and leadership training, and the geographer becomes a potent fusion, one that I think this article correctly identifies as critical to the leadership teams needed to solve big problems.


Here they come to save the day…

A decade of OpenStreetMap, beautifully visualized

Adding to the collection of amazing OpenStreetMap animations, the folks at Scout worked with the Ito World team to create a new addition to the “Year in Edits” series. Celebrating the 10 year anniversary of OpenStreetMap, the new video looks at the growth in OSM from 2004 to 2014. As I’ve blogged before, I love these videos. The production quality is high, the music is great, and they provide an easy way to communicate how amazing the OSM database has become. Kudos to all the volunteer mappers out there.


2013 Bold Award and Thoughts on Government Innovation


Bold Award 2013, awarded for the Imagery to the Crowd initiative

In 2013, the NextGov media organization launched the Bold Award, given for innovation in Federal technology. It is an interesting award, as the idea of innovation in the government is usually the punchline of a bad joke. But the folks at NextGov are on to something, as innovation in the government is actually really difficult. Besides the usual set of restrictions (a stodgy, risk-averse bureaucracy, broken acquisition processes, and woefully dated information systems), one of the challenges is measuring the impact of an innovation.

In the private sector, a financial metric like Return on Investment (ROI) is a visceral and quantifiable measure of success. Usually a product is built to fill a gap identified in the market. In order to manage the risk of launching a new product, a prototype is built, beta tested, feedback from the market is incorporated, and a revised prototype is built. Wash-Rinse-Repeat until the product makes money, or the underlying assumptions are proven wrong and development stops.

For several reasons, the notion that an innovation could be proposed, implemented, measured, iterated, and the team rewarded for success does not translate to government. First, success is difficult to quantify, let alone tie back to specific actions. In the context of the HIU, what is the value of informing a policy maker better? How do you measure a good decision? How do you know it’s a good decision when you can’t know the alternative? When trying to build something like Imagery to the Crowd or the CyberGIS, how can I measure the impact on foreign policy? When a decision is ultimately made, it is made on the basis of multiple streams of information; how do you determine the value of a single product? This situation is not unique to government, but the government does introduce some unique dynamics.

Second, there is no incentive to reduce cost. A culture of “we have money at the end of the year” means dollars get spent by year’s end, often regardless of utility. So if you actually save money, the bureaucracy figures you can do your mission for less and cuts your budget. This dynamic is counter-intuitive at best, criminal at worst.

Third, the broken acquisition system means that there is no way to fund an agile approach to product development in the government. Implementations of “minimum viable product” and rapid prototypes are a rare occurrence in the government. Instead, innovation must follow a procurement process where the innovator has to determine “requirements” (a mind-numbing process), put it out to bid (a mind-numbing process), the idea is awarded to the low-ball estimator, it gets built (maybe correctly), and two fiscal years after you started, you have some implementation of your innovative idea (that you have to pay extraordinary costs to the original contractor to change). Not a recipe for success.

We know government bureaucrats work for the citizens (something I was proud to do), and that they have a duty to reduce costs and increase the quality of services delivered to and for citizens. However, the system is broken when the momentum behind keeping the status quo in place massively overwhelms the need for change. So what exactly is the motivation for a government employee to be innovative? We know it’s not money, as innovators are worth significantly more in the private sector. From my experience it comes down to the fact that people care. Yeah, not usually a thought that people use to describe government bureaucrats, but it’s true. There is a tremendous amount of talent and willingness to work hard in the government workforce. The problem is they are shackled, and the cost of being innovative is a personal willingness to put themselves at risk and continually run through bureaucratic walls. As the Washington Post has documented lately, the government is losing the next generation of leaders because of this nonsense.

So back to the idea of awarding innovation in Federal technology. As part of the NextGov inaugural class, I was nominated and awarded a 2013 Bold Award for the Imagery to the Crowd initiative. This was an honor to win, but also disingenuous in that it would not have happened without a crew of people. Those folks at the HIU and elsewhere know who they are and the key role they played.  Gratitude. #oMC.

The 2014 Bold Award winners are listed here.

Why Maps Matter

Back in March 2014, FCW published two articles written by Frank Konkel that mentioned the HIU’s work with digital mapping. The first, entitled “Why Maps Matter”, is a good summary piece that reviews how geographic technology is used in various U.S. government agencies. The key points are the growing recognition in the government that visualization is a powerful tool for policy making, and the way new companies are making it significantly easier for new users to leverage geographic technology. Beyond the HIU, case studies from the Federal Communications Commission (FCC), Capitol Hill, the National Park Service (NPS), and the National Geospatial-Intelligence Agency (NGA) are mentioned.

The second article, State Department: Mapping the humanitarian crisis in Syria, is a shorter piece that focuses solely on the HIU and its work in mapping the Syria humanitarian crisis. Having worked closely on Syria for two years, I can say we put a tremendous amount of effort into building comprehensive refugee datasets, verifying data from news reports, NGO reports, and commercial satellite imagery. Additionally, we built an inter-agency-compatible data schema that leveraged geographic locations and P-Codes for information integration (P-Code dataset, P-Code Viewer). And to visualize it all, we built custom web mapping applications with tools to interactively explore all of the data across time and space. A significant portion of the HIU work on Syria (and now Iraq) is available on the HIU Middle East Products page; additionally, the data used for the Refugee and Internally Displaced Peoples layers are available for download on the HIU Data page.

It is clear the appreciation and value for geographical data, analysis, and visualization are on the rise; FCW lists the Why Maps Matter article as the 3rd most popular of 2014. Fully recognizing the value of geography requires that the notion of maps as “pieces of paper” be replaced with an appreciation and use of geographic data and spatial analysis as a tool of policy formation. This change is happening, albeit slower than I’d like, but its adoption will result in better, more agile policy, and benefit the government and citizens alike.


Clip of Syria map produced by the HIU; full map available here.

QGIS 2.0 Update // Install on Windows

In response to my previous post on the challenges of installing the new QGIS 2.0 version, I wanted to highlight a new post by the folks over at Digital Geography. Not only did they write a good post on the Ubuntu install of QGIS 2.0 (now updated with an install video), but they have followed up with a summary of the Windows install of QGIS 2.0.

I posted a comment about my concerns regarding the automatic installation of the SAGA and OTB dependencies, and Riccardo answered that the Windows install does include both. I haven’t tested it yet, but automatically including them would be great. I would appreciate it if anyone could confirm this in the Windows install, and whether the Ubuntu install has been updated.


QGIS 2.0 and the Open Source Learning Curve

As likely everyone in the geo world knows by now, the widely anticipated release of QGIS 2.0 was announced at FOSS4G this week. Having closely watched the development of QGIS 2.0, I was eager to get it installed and take a look at the new features. I am for the most part a Windows user, but I like to take the opportunity to work with the open source geo stack in a Linux environment. My Linux skills are better than a beginner’s, but because I don’t work with it on a daily basis, those skills are rusty. So I decided this was a good excuse to build out a new Ubuntu virtual machine, as I hadn’t upgraded since 10.04.

Build Out The Virtual Machine

Let’s just say I was quickly faced with the challenges of getting Ubuntu 12.04 installed in VMware Workstation 7.1. Without getting into details, the new “Easy Install” feature and its effect on installing VMware Tools took a while to figure out. In case anyone hits the Easy Install problem, this YouTube video has the answer, also posted in code below:

sudo mv /etc/rc.local.backup /etc/rc.local
sudo mv /opt/vmware-tools-installer/lightdm.conf /etc/init
sudo reboot    # reboot the virtual machine

The second problem, about configuring VMware Tools with the correct headers, is tougher. I’m still not sure I fixed it, but given its rating on Ask Ubuntu, it’s obviously a problem people struggle with. Suffice it to say there is no way that I could have fixed this on my own, nor do I really have a good understanding of what was wrong, or what was done to fix it.

The Open Source Learning Curve

This leads me to the point of this post: open source is still really tricky to work with. The challenges of getting over the learning curve lead to two simultaneous feelings: one, absolute confusion at the problems that occur, and two, absolute amazement at the number of people who are smart enough to have figured them out and posted about it on forums to help the rest of us. While it’s clear the usability of open source is getting markedly better (even from my novice perspective), it still feels as if there is a “hazing” process that I hope can be overcome.

QGIS 2.0 Installation

With that in mind, here is some of what I discovered in my attempt to get QGIS 2.0 installed on the new Ubuntu 12.04 virtual machine. First, I followed the QGIS installation directions, but unfortunately I was getting errors in the Terminal related to the QGIS public key, and a second set of errors on load. Then I discovered the Digital Geography blog post that had the same directions, plus a little magic command at the end (see below) that got rid of the on-load errors.

sudo chown -R "your username" ~/.qgis2

After viewing a shapefile and briefly reviewing the new cartography tools, I wanted to look at the new analytical tools. But there was a problem, and they didn’t load correctly. So back to the Google, and after searching for a few minutes I found the Linfinty Blog post on making Ubuntu 12.04 work with QGIS Sextante. After a few minutes downloading tools I check again; this time the image processing tools are working, but I keep getting a SAGA configuration error. After figuring out what SAGA GIS is, and reading the SAGA help page from the error dialog box, it was back to searching. The key this time was a GIS Stack Exchange post on configuring SAGA and GRASS in Sextante, and this time the code snippet of glory is:

sudo ln -s /usr/lib64/saga /usr/lib/saga

Next, back to QGIS to fire up a SAGA-powered processing routine, and the same error strikes again. More searching and reading; now it’s about differences in SAGA versions and what’s in the Ubuntu repos. With no clear answer, I check the Processing options, which have an “Options and Configuration” dialog, which has a “Providers” section, which has an “Enable SAGA 2.0.8 Compatibility” option. Upon selecting the option, boom, all the Processing functions open. Mind you, I haven’t checked whether they actually work; after several hours they now at least open.

"Processing" configuration options in QGIS 2.0



It is clear that the evolution of open source geo took several steps forward in this last week, and the pieces are converging into what will be a true end-to-end, enterprise-grade geographic toolkit. And while usability is improving, in my opinion, it is still the biggest weakness. Given the high switching costs for organizations and individuals to adopt and learn open source geo, the community must continue to drive down those costs. It is unfair to ask the individual developers who commit to these projects to fix the usability issues, as they are smart enough to work around them. Seems to me it is now up to the commercial users to dedicate resources to making the stack more usable.

Key Resources: