2013 Bold Award and Thoughts on Government Innovation

NextGov Bold Award 2013
Bold Award 2013, awarded for the Imagery to the Crowd initiative

In 2013, the NextGov media organization launched the Bold Award, given for innovation in Federal technology. It is an interesting award, as the idea of innovation in government is usually the punchline of a bad joke. But the folks at NextGov are on to something, because innovation in government is genuinely difficult. Beyond the usual set of restrictions (a stodgy, risk-averse bureaucracy, broken acquisition processes, and woefully dated information systems), one of the core challenges is measuring the impact of an innovation.

In the private sector, a financial metric like Return on Investment (ROI) is a visceral and quantifiable measure of success. Usually a product is built to fill a gap identified in the market. To manage the risk of launching a new product, a prototype is built, beta tested, and revised based on market feedback. Wash, rinse, repeat until the product makes money, or the underlying assumptions are proven wrong and development stops.

For several reasons, the notion that an innovation could be proposed, implemented, measured, iterated, and the team rewarded for success does not translate to government. First, success is difficult to quantify, let alone tie back to specific actions. In the context of the HIU, what is the value of better informing a policy maker? How do you measure a good decision? How do you know it's a good decision when you can't know the alternative? When trying to build something like Imagery to the Crowd or the CyberGIS, how can I measure the impact on foreign policy? When a decision is ultimately made, it rests on multiple streams of information; how do you determine the value of any single product? This situation is not unique to government, but the government does introduce some unique dynamics.

Second, there is no incentive to reduce cost. A culture of “we have money at the end of the year” means dollars get spent by year's end, often regardless of utility. And if you actually save money, the bureaucracy figures you can do your mission for less and cuts your budget. This incentive structure is counter-intuitive at best, criminal at worst.

Third, the broken acquisition system means there is no way to fund an agile approach to product development in the government. Implementations of a “minimum viable product” and rapid prototyping are rare occurrences. Instead, innovation must follow a procurement process: the innovator has to determine “requirements” (a mind-numbing process), put the work out to bid (another mind-numbing process), the contract is awarded to the lowest bidder, the thing gets built (maybe correctly), and two fiscal years after you started, you have some implementation of your innovative idea (which you have to pay the original contractor extraordinary costs to change). Not a recipe for success.

We know government bureaucrats work for the citizens (something I was proud to do), and that they have a duty to reduce costs and increase the quality of services delivered to citizens. However, the system is broken when the momentum behind keeping the status quo massively overwhelms the need for change. So what exactly is the motivation for a government employee to be innovative? We know it's not money, as innovators are worth significantly more in the private sector. From my experience, it comes down to the fact that people care. Yeah, not usually a word people use to describe government bureaucrats, but it's true. There is a tremendous amount of talent and willingness to work hard in the government workforce. The problem is they are shackled, and the cost of being innovative is a personal willingness to put yourself at risk and continually run through bureaucratic walls. As the Washington Post has documented lately, the government is losing the next generation of leaders because of this nonsense.

So back to the idea of awarding innovation in Federal technology. As part of NextGov's inaugural class, I was nominated for and awarded a 2013 Bold Award for the Imagery to the Crowd initiative. It was an honor to win, but an individual award is a bit misleading: it would not have happened without a crew of people. Those folks at the HIU and elsewhere know who they are and the key role they played. Gratitude. #oMC.

The 2014 Bold Award winners are listed here.

Why Maps Matter

Back in March 2014, FCW published two articles by Frank Konkel that mentioned the HIU's work with digital mapping. The first, entitled “Why Maps Matter”, is a good summary piece reviewing how geographic technology is used across U.S. government agencies. The key points are the growing recognition in government that visualization is a powerful tool for policy making, and how new companies are making it significantly easier for new users to leverage geographic technology. Beyond the HIU, it mentions case studies from the Federal Communications Commission (FCC), Capitol Hill, the National Park Service (NPS), and the National Geospatial-Intelligence Agency (NGA).

The second article, State Department: Mapping the humanitarian crisis in Syria, is a shorter piece that focuses solely on the HIU and its work mapping the Syria humanitarian crisis. Having worked closely on Syria for two years, I can say we put a tremendous amount of effort into building comprehensive refugee datasets, verifying data from news reports, NGO reports, and commercial satellite imagery. Additionally, we built an inter-agency-compatible data schema that leveraged geographic locations and P-Codes for information integration (P-Code dataset, P-Code Viewer). And to visualize it all, we built custom web mapping applications with tools to interactively explore all of the data across time and space. A significant portion of the HIU's work on Syria (and now Iraq) is available on the HIU Middle East Products page; additionally, the data used for the Refugee and Internally Displaced Persons layers are available for download on the HIU Data page.
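To make the P-Code idea concrete, here is a minimal Python sketch, with invented P-Codes, names, and figures (not the HIU's actual schema), of how a stable place code lets records from different agencies be merged without fuzzy place-name matching:

```python
# Sketch: joining two hypothetical tables on a P-Code key.
# P-Codes (place codes) are stable geographic identifiers, so records
# from different sources can be merged without matching on place names.
# All P-Codes and figures below are invented for illustration.

refugee_counts = [
    {"pcode": "SY0101", "idps": 12000},
    {"pcode": "SY0202", "idps": 4500},
]

admin_names = {
    "SY0101": {"name": "District A", "governorate": "Gov 1"},
    "SY0202": {"name": "District B", "governorate": "Gov 2"},
}

def join_on_pcode(counts, names):
    """Attach admin attributes to each count record via its P-Code."""
    joined = []
    for rec in counts:
        attrs = names.get(rec["pcode"], {})
        joined.append({**rec, **attrs})
    return joined

merged = join_on_pcode(refugee_counts, admin_names)
```

The same key then drives the map join: a web map keyed on P-Codes can pull in any table that carries the code.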

It is clear that appreciation for geographic data, analysis, and visualization is on the rise; FCW lists the Why Maps Matter article as the 3rd most popular of 2014. Fully recognizing the value of geography requires that the notion of maps as “pieces of paper” be replaced with an appreciation for geographic data and spatial analysis as tools of policy formation. This change is happening, albeit more slowly than I'd like, but its adoption will result in better, more agile policy, benefiting government and citizens alike.

Clip of Syria map produced by the HIU, full map available here - https://hiu.state.gov/Products/Syria_DisplacementRefugees_2014Oct23_HIU_U1109.pdf

World Country Polygon Datasets

The Humanitarian Information Unit (HIU) has released several new datasets that leverage the Office of the Geographer's work on mapping international boundaries. The Large Scale International Boundaries (LSIB) dataset, maintained by the Geographic Information Unit (GIU), is a vector line file believed to be the most accurate worldwide (non-Europe, non-US) international boundary vector line file available. The lines reflect U.S. government (USG) policy and thus not necessarily de facto control (cited from metadata attached to the files). In September 2011, the HIU first released the boundaries publicly for download. After that release, colleagues at DevelopmentSeed made substantial improvements to the underlying data structure that helped lead to this work.

The LSIB dataset is designed for cartographic representation and map production. However, this poses a problem for GIS analysis: the dataset is composed only of vector lines for terrestrial boundaries between countries. It contains no coastlines, so it cannot be converted into polygons for GIS analysis. To address this, the HIU combined the LSIB dataset with the World Vector Shorelines (1:250,000) dataset. The combination of these two datasets is one of the highest-resolution country polygon datasets available. Additionally, the LSIB-WVS polygon file is believed to be the most accurate available dataset for determining island sovereignty, correcting the numerous island sovereignty mistakes in the original WVS data (cited from metadata attached to the files).

Two other modifications were made. First, the large cartographic scale of the data introduces a problem of its own: the data are too detailed for global-scale mapping. The HIU therefore also created “generalized” versions of the original LSIB-WVS polygons that are suitable for smaller-scale mapping. Second, to make it easy to “join” data to the polygons in a GIS, several attributes were added to the database, including Country Name and several ISO 3166-1 country codes (ISO Alpha-2, ISO Alpha-3, and ISO Number). After a year of work, the data have been released into the public domain.
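Generalized boundaries like these are the product of line simplification. As a conceptual illustration (the HIU's actual processing chain is not documented here), below is a minimal Python sketch of the classic Douglas-Peucker algorithm, the standard technique for this kind of generalization:

```python
# Minimal Douglas-Peucker line simplification: points closer than a
# tolerance to the chord between segment endpoints are dropped, yielding
# a coarser line suitable for small-scale mapping. Planar approximation;
# illustrative only, not the HIU's actual tooling.

def perp_dist(pt, a, b):
    """Perpendicular distance of pt from the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = pt, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == dy == 0:
        return ((x - x1) ** 2 + (y - y1) ** 2) ** 0.5
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / (dx * dx + dy * dy) ** 0.5

def simplify(points, tol):
    """Recursively keep only points farther than tol from the chord."""
    if len(points) < 3:
        return points
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tol:
        return [points[0], points[-1]]
    left = simplify(points[: idx + 1], tol)
    right = simplify(points[idx:], tol)
    return left[:-1] + right  # drop duplicated split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
coarse = simplify(line, 1.0)
```

The tolerance controls the trade-off: a larger value gives fewer vertices and faster global-scale rendering, at the cost of shoreline detail.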

All datasets can be downloaded from the HIU Data page or the links below:

LSIB – WVS Country Polygons

High Resolution LSIB-WVS Country Polygons (Americas) :: https://hiu.state.gov/data/Americas_LSIBPolygons_2013March08_HIU_USDoS.zip

High Resolution LSIB-WVS Country Polygons (Africa/Eurasia) :: https://hiu.state.gov/data/EurasiaAfrica_LSIBPolygons_2013March08_HIU_USDoS.zip

Simplified Versions

Simplified Global World Vector Shorelines :: https://hiu.state.gov/data/Global_SimplifiedShoreline_2013March08_HIU_USDoS.zip

Simplified Global Country Polygons :: https://hiu.state.gov/data/Global_LSIBSimplifiedPolygons_2013March08_HIU_USDoS.zip

LSIB Lines

Large Scale International Boundaries (LSIB), AFRICA and the AMERICAS :: https://hiu.state.gov/data/AFRICAandAMERICAS_LSIB4b_2012Sep04_USDoS_HIU.zip

Large Scale International Boundaries (LSIB), EURASIA :: https://hiu.state.gov/data/EURASIA_LSIB4b_2012Sep04_USDoS_HIU.zip

Cartographic Guidance

Note that both the polygon and line datasets are useful for cartographic representation, because the LSIB contains a variety of boundary classifications. Below is a subset of the metadata attached to the datasets that describes USG cartographic representation of the boundary lines.

From the LSIB lines metadata:
The “Label” attribute field provides a name for any line requiring non-standard depiction, such as “1949 Armistice Line” or “DMZ”.

The “Rank” attribute categorizes lines into one of three categories:
a) A rank of “1” (includes most of the 320 international boundaries) for those which the USG considers “full international boundaries.”
b) A rank of “3” for other lines of international separation. Most are considered by the US government to be in dispute.
c) A rank of “7” for other lines of separation such as DMZs, No-Man's Land (Israel), UNDOF zone lines (Golan Heights), Sudan's Abyei, and the US Naval Base Guantanamo Bay in Cuba.

Any line with a rank of “3” or “7” is to be dotted or dashed differently and in a manner visually subordinate to the normal rank “1” lines.
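As a sketch of how a GIS or web map might honor this guidance, the following Python fragment maps the Rank attribute to line styles. The style keys mimic common plotting conventions (e.g. matplotlib) and are my own assumptions, not part of the LSIB metadata:

```python
# Sketch: translating the LSIB "Rank" attribute into draw styles that
# follow the guidance above: solid for rank 1, visually subordinate
# dashed/dotted patterns for ranks 3 and 7. Style values are illustrative.

LSIB_STYLES = {
    "1": {"linestyle": "solid",  "linewidth": 1.0},  # full international boundary
    "3": {"linestyle": "dashed", "linewidth": 0.7},  # disputed / other separation
    "7": {"linestyle": "dotted", "linewidth": 0.7},  # DMZs, armistice lines, etc.
}

def style_for(rank):
    """Return a draw style for a boundary line, defaulting to rank-1 solid."""
    return LSIB_STYLES.get(str(rank), LSIB_STYLES["1"])
```

Keeping the rank-to-style mapping in one table means a renderer can restyle every boundary class consistently in one place.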

Additional information about how the LSIB dataset is produced, and about the processes behind the new datasets, is included in the metadata.

And for more information about the Office of the Geographer, see the article from State Magazine below:

State Magazine (March 2009) Office of the Geographer
Article about the Office of the Geographer from State Magazine in March 2009

Imagery to the Crowd, ICCM 2012

Here is my Ignite talk on the “Imagery to the Crowd” project from the International Conference on Crisis Mapping (ICCM 2012). I've attended each of the four ICCM conferences (Cleveland, Boston, Geneva, Washington DC). They have been a great way to understand the organizations that comprise the humanitarian community and, more importantly, to meet the individuals who power those organizations. It was exciting to present our work at the HIU and contribute back to the Crisis Mapping community.

All of the Ignite talk videos are available at the Crisis Mappers website (lineup .pdf), and collectively they represent a solid cross-section of the field. At the macro level, I believe the story continues to be the integration of these new tools and methodologies into established humanitarian practices. The toolkits are stabilizing (crowdsourcing, structured data collection using SMS, volunteer networks, open geographic data and mapping, social media data mining) and are being adopted by the major humanitarian organizations. While I am partial to crowdsourced mapping, the Digital Humanitarian Network and the UN OCHA Humanitarian eXchange Language (HXL) are two other exciting projects.

Imagery to the Crowd…early results

We have been busy reviewing the results of the Camp Roberts / RELIEF 12-3 mapping experiment for the Horn of Africa. In this phase of the project, the OpenStreetMap (OSM) community was provided short-term access to high-resolution commercial satellite imagery over two large collections of refugee camps in Ethiopia (Dollo Ado) and Kenya (Dadaab). The goal was to map, within 48 hours, the roads and footpaths in 10 refugee camps that together hold a population of over 600,000 people. A more detailed numerical analysis of the data will follow, but from a qualitative perspective the results are amazing. Below are examples taken from one specific camp, the Bokolmanyo camp in Ethiopia, and links to each of the 10 camps mapped in the experiment.

Bokolmanyo before the mapping experiment
Bokolmanyo refugee camp in the OSM database on 20 May 2012
Bokolmanyo after the mapping experiment
Bokolmanyo refugee camp in the OSM database on 28 May 2012

The ‘Dollo Ado’ refugee camp in Ethiopia is actually composed of 5 individual camps. These camps literally did not exist in OSM before the experiment began. The latest population estimates for the camps report that in total there are 151,972 individuals / 36,721 households living in the Dollo Ado camps (from the UNHCR data portal for the Horn of Africa, and specifically the 22 May 2012 Dollo Ado population statistical report).

Dollo Ado Refugee Camps


Bokolmanyo: 4.549560, 41.539478
Melkadida: 4.522779, 41.720324
Kobe: 4.481878, 41.742554
Helawein: 4.368492, 41.861429
Buramino: 4.303960, 41.915073


Similarly, the ‘Dadaab’ camp in Kenya is also composed of 5 individual camps, with a total of 465,334 individuals living there (UNHCR 20 May 2012 Dadaab population statistical report). These camps have been in operation longer than Dollo Ado and contain three times as many people. At the beginning of the experiment, 3 of these camps had some map data in OSM; however, the newer Ifo 2 and Kambioos camps were non-existent. All camps saw significant improvements.

Dadaab Refugee Camps


Dagahaley: 0.193290, 40.286608
Ifo 2: 0.148573, 40.318623
Ifo: 0.119047, 40.315189
Hagadera: 0.009999, 40.370765
Kambioos: -0.043087, 40.370121
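For readers who want to work with these locations directly, here is a short Python sketch that builds a GeoJSON FeatureCollection from the Dadaab coordinates listed above, mirroring the "Export as GeoJSON" option of the original embedded map (note that GeoJSON orders coordinates as longitude, latitude):

```python
import json

# Build a GeoJSON FeatureCollection from the Dadaab camp coordinates
# listed above. GeoJSON coordinates are [longitude, latitude].

dadaab_camps = {
    "Dagahaley": (0.193290, 40.286608),
    "Ifo 2":     (0.148573, 40.318623),
    "Ifo":       (0.119047, 40.315189),
    "Hagadera":  (0.009999, 40.370765),
    "Kambioos":  (-0.043087, 40.370121),
}

def to_feature_collection(camps):
    """Turn a {name: (lat, lon)} mapping into a GeoJSON FeatureCollection."""
    features = [
        {
            "type": "Feature",
            "properties": {"name": name},
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
        }
        for name, (lat, lon) in camps.items()
    ]
    return {"type": "FeatureCollection", "features": features}

geojson = json.dumps(to_feature_collection(dadaab_camps), indent=2)
```

The resulting file drops straight into OSM editors, web map libraries, or a desktop GIS.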


These impressive results are due to the hard work of a wide range of people, and I would like to thank several of them: first, the OSM volunteers who donated their time and energy to mapping these camps – you literally helped put 600,000 people on the map; the HIU technology team, who went above and beyond in getting the tech stack running; the State Department Office of the Geographer (Lee Schwartz and Benson Wilder), the USAID Office of Foreign Disaster Assistance (Chad Blevins), and USG partners (Katie Baucom and Nat Woolpert), who were key to keeping the process moving; John Crowley, for providing constant energy and opening the Camp Roberts venue as a place to work; Kate Chapman and Schuyler Erle from the Humanitarian OpenStreetMap Team, for advising on the process and modifying the tasking server to accommodate NextView; and the UN's Operational Satellite Applications Programme (UNOSAT), for its early help with image processing and serving.

Let’s hope this is just the beginning. I’ll be posting the results of the numerical analysis here, as well as details on the actual request workflow and technological implementation.

Imagery to the crowd…phase 1

Over the past year, the Humanitarian Information Unit (HIU) at the U.S. State Department has been working with the Humanitarian OpenStreetMap Team (HOT) to publish current high-resolution commercial satellite imagery during humanitarian emergencies. The imagery is used to map the affected areas and provide a common framework for governments and aid agencies to work from. All of the map data is stored in the OpenStreetMap database (http://osm.org), under a license that ensures the data is freely available and open for a range of uses.

This work began as part of the RELIEF Exercises 11-4 at Camp Roberts in August 2011, and focused primarily on the legal and policy issues associated with sharing imagery. Now with RELIEF Exercise 12-3 happening in DC this week, the project is moving into its first technical implementation. As a proof of concept, the HIU is publishing imagery for the refugee camps in the Horn of Africa, and making the imagery available to the volunteer mapping community. The goal is to produce detailed vector data for the refugee camps, including roads and footpaths in and around the camps. There are tens of thousands of refugees living in these camps who are victims of famine and conflict, and these data can be used to improve planning for humanitarian assistance.

How to help: We are going to open access to the imagery on Monday 21 May 2012. We would like to spend two 24-hour periods tracing the areas of interest, which will include 11 refugee sites. All work will be done through the HOT Tasking Manager (http://tasks.hotosm.org), a microtasking platform that will split up the image tracing into ‘tiles’ that will require approximately 30-45 minutes to map.
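To give a feel for how an area of interest gets split into bite-sized tasks, here is a Python sketch using the standard OSM "slippy map" tile scheme. The HOT Tasking Manager's real task grid may be computed differently, so treat this as illustrative; the bounding box below loosely covers the Dadaab camps:

```python
import math

# Sketch: splitting an area of interest into standard web-map tiles,
# the basic idea behind microtasking a large tracing job. Uses the
# common OSM "slippy map" tiling formulas.

def deg_to_tile(lat, lon, zoom):
    """Convert a WGS84 coordinate to slippy-map tile indices at a zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Tiles covering a small bounding box around the Dadaab camps at zoom 14
x0, y0 = deg_to_tile(0.20, 40.28, 14)    # north-west corner
x1, y1 = deg_to_tile(-0.05, 40.38, 14)   # south-east corner
tasks = [(x, y) for x in range(min(x0, x1), max(x0, x1) + 1)
                for y in range(min(y0, y1), max(y0, y1) + 1)]
```

Each (x, y) pair becomes one claimable task, which is what keeps an individual contribution down to a 30-45 minute chunk.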

Accomplishing this task will require that volunteers become familiar with OpenStreetMap and the basic concepts of mapping. But don't worry, there are plenty of resources out there to help. For more information on the OpenStreetMap (OSM) process, see the “Beginning OpenStreetMap Tutorial” available from the LearnOSM website (http://learnOSM.org), specifically Chapters 1, 2, 3, and 6. For more information on HOT's work in Somalia, see the HOT Somalia project page and other HOT-related materials on the HOT wiki.

GIS 2.0 and Humanitarian Information Management Lecture

Today I gave a guest lecture to Prof. Stephen Egbert and Prof. Shannon O'Lear's ‘Geography and Genocide’ class (Geography 571) at the University of Kansas. Students in this class come from a range of backgrounds, so the content was designed as an introduction to GIScience and its potential applications. This included a brief review of GIS 2.0 concepts, then moved on to show how these tools are being utilized in humanitarian applications.

It is always interesting to introduce people to these technologies. The OpenStreetMap – Project Haiti video just blows people away. I like to show it first as an indication of what GIS 2.0 is all about: open source GIS software combined with inexpensive, powerful hardware is allowing people to interact with, produce, and consume geographic data in amazing ways. Back this up with a review of Ushahidi-Haiti, the role of Twitter in Iran, and the utility of virtual globes loaded with high-resolution satellite imagery in Darfur, and you see the lights go on.

OpenStreetMap – Project Haiti from ItoWorld on Vimeo.

This lecture also gave me the opportunity to review some of the KML datasets I have been working on for the Humanitarian Information Unit regarding Darfur and the DRC. While there is nothing earth-shattering about mapping point data in KML, utilizing the time component and animation capability in Google Earth does begin to translate a dataset into a story (or geo-narrative, as Madden and Ross call it). I'll make these available as soon as possible.
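As a concrete illustration of that time component, here is a minimal Python sketch that generates a KML Placemark carrying a TimeStamp, the element Google Earth's time slider uses to animate point data. The names, dates, and coordinates are invented, and this is not the HIU dataset itself:

```python
from xml.sax.saxutils import escape

# Sketch: a time-enabled KML Placemark. The <TimeStamp><when> element is
# what lets Google Earth animate points along its time slider.
# All names, dates, and coordinates below are invented for illustration.

def placemark(name, lat, lon, when):
    """Return a KML Placemark string with a TimeStamp for animation."""
    return (
        "<Placemark>"
        f"<name>{escape(name)}</name>"
        f"<TimeStamp><when>{when}</when></TimeStamp>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        "</Placemark>"
    )

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
    + placemark("Reported incident", 13.45, 22.45, "2009-03-01")
    + "</Document></kml>"
)
```

Generating one Placemark per dated record, then opening the file in Google Earth with the time slider enabled, is all it takes to turn a point table into an animated narrative.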

The PowerPoint presentation can be downloaded here (23MB).

GIS 2.0 lecture at the University of Kansas

On Tuesday (23 Feb 2010) I presented a lecture to Prof. Terry Slocum's Geography seminar on Neogeography. The class is a good group and has plans for an interactive web map of the KU campus.

Beyond the lecture summary, a Google LatLong blog post describes a new method of using Google Fusion Tables to store geographic data and create custom maps. I believe this may be a real benefit to the group, taking some of the programming difficulty out of creating their application.

Link to blog post: http://google-latlong.blogspot.com/2010/02/mapping-your-data-with-google-fusion.html

Lecture Presentation:

Neogeography Lecture for KU Geography 911 (JSC)

First International Conference on Crisis Mapping (ICCM 2009)

Crisis Mapping is an evolving set of technologies and approaches, spanning a variety of disciplines, that is focused on information management, analysis, and visualization for crisis events. Fundamentally, Crisis Mapping is focused on responding to humanitarian disasters; these range from natural disasters like floods and earthquakes to ‘complex’ disasters caused by human conflict. Currently there is no defined field of Crisis Mapping, but the taxonomy proposed by Meier and Ziemke is a good start at outlining the scope and sub-domains of the field. The First International Conference on Crisis Mapping (ICCM 2009) was directed by Patrick Meier and Jen Ziemke and therefore used the taxonomy as a guide for the conference. The conference began with a series of Ignite talks; while sitting through 24 consecutive 5-minute talks is a bit of an overload, it effectively gives everyone a common base for understanding the topics and talents in the group. The roundtable portion of ICCM 2009 was structured to address each of the taxonomy components. These roundtables started with good panels to get the ideas flowing and then opened up for everyone to participate, creating a very fluid and stimulating environment.

Crisis Mapping sits at the intersection of several different topics: humanitarian disasters, sustainable development, information technology, geographic information systems, mobile technology, Web 2.0, etc. Utilizing Crisis Mapping tools during a humanitarian disaster admits many potential configurations of technologies and organizational structures. The question for me is how this problem can be optimized. The power of combining geographic information systems and mobile communications is undeniable, but that fact alone does not solve the implementation problem. This is the value of ICCM: put the lead users from the various communities in the same place and see what shakes loose. As a newcomer to the humanitarian field, my focus at ICCM was to sit back and listen. This was my first direct interaction with folks who do this for a living, and I wanted to hear about the issues, opportunities, challenges, rewards, and predictions of how the convergence of technologies now described as ‘crisis mapping’ would impact their domains. The range of people in attendance, who have subsequently become the core of the Crisis Mappers Network, included the UN, NGOs, foundations, the USG, software developers/neogeographers, and academics. Each group brought a valuable perspective to the discussions. The only group I felt was missing was the cellular telecommunications industry. Ultimately, many of the technologies that will be developed for crisis mapping are based on ubiquitous access to mobile voice and data, and having the cellular companies on board during the product development phase would likely prove useful.

Ultimately it was the collection of people and ideas at the ICCM conference that made it a unique experience. In some ways it was similar to the energy of the FOSS4G conferences, but two factors made it stand out. First, the humanitarian focus means the cost of failure is inherently measured in human lives, something that cannot be overstated. Second, this felt like getting in at the ground floor of something great. I'm left with the distinct feeling that in twenty years I'll look back and say “I was in the room at the first one”.