I posted a comment about my concerns regarding the automatic installation of the SAGA and OTB dependencies, and Riccardo answered that the Windows install does include both. I haven’t tested it yet, but automatically including them would be great. I would appreciate it if anyone could confirm this in the Windows install, and whether the Ubuntu install has been updated.
As likely everyone in the geo world knows by now, the widely awaited release of QGIS 2.0 was announced at FOSS4G this week. Having closely watched the development of QGIS 2.0, I was eager to get it installed and take a look at the new features. I am for the most part a Windows user, but I like to take the opportunity to work with the open source geo stack in a Linux environment. My Linux skills are better than a beginner’s, but because I don’t work with Linux on a daily basis, those skills are rusty. So I decided this was a good excuse to build out a new Ubuntu virtual machine, as I haven’t upgraded since 10.04.
Build Out The Virtual Machine
Let’s just say I was quickly faced with the challenges of getting Ubuntu 12.04 installed in VMware Workstation 7.1. Without getting into details, the new “Easy Install” feature and its effect on installing VMware Tools took a while to figure out. In case anyone hits the Easy Install problem, this YouTube video has the answer, also posted in code below:
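For reference, here is a sketch of the typical manual VMware Tools install on Ubuntu once the Easy Install media has been disconnected; the exact tarball version number will differ on your system:

```shell
# In the VM settings, disconnect the Easy Install media first, then choose
# "Install VMware Tools" so the Tools ISO mounts as the virtual CD-ROM.
sudo mkdir -p /mnt/cdrom
sudo mount /dev/cdrom /mnt/cdrom

# Extract the installer (the version number in the filename varies) and run it
tar -xzf /mnt/cdrom/VMwareTools-*.tar.gz -C /tmp
cd /tmp/vmware-tools-distrib
sudo ./vmware-install.pl
```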
The second problem, about configuring VMware Tools with the correct headers, is tougher. I’m still not sure I fixed it, but given its rating on Ask Ubuntu, it’s obviously a problem people struggle with. Suffice it to say there is no way that I could have fixed this on my own, nor do I really have a good understanding of what was wrong, or what was done to fix it.
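For anyone facing the same headers problem, the usual fix (a sketch, assuming a stock Ubuntu 12.04 kernel) is to install the build tools and the headers matching the running kernel, then re-run the VMware Tools installer:

```shell
# Install a compiler toolchain and the headers for the currently running kernel
sudo apt-get update
sudo apt-get install build-essential linux-headers-$(uname -r)

# When vmware-install.pl asks for the kernel header location, it is typically:
#   /usr/src/linux-headers-$(uname -r)/include
```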
The Open Source Learning Curve
This leads me to the point of this post: open source is still really tricky to work with. The challenges of getting over the learning curve lead to two simultaneous feelings: one, absolute confusion at the problems that occur, and two, absolute amazement at the number of people who are smart enough to have figured them out and posted about it on forums to help the rest of us. While it’s clear the usability of open source is getting markedly better (even from my novice perspective), it still feels as if there is a “hazing” process that I hope can be overcome.
QGIS 2.0 Installation
With that in mind, here is some of what I discovered in my attempt to get QGIS 2.0 installed on the new Ubuntu 12.04 virtual machine. First, I followed the official QGIS installation directions, but unfortunately I was getting errors in the Terminal related to the QGIS public key, and a second set on load. Then I discovered the Digital Geography blog post that had the same directions, plus a little magic command at the end (see below) that got rid of the on-load errors.
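For context, the repository setup at the time looked roughly like the following; the key ID is the one published for the QGIS 2.0 release but should be verified against the current QGIS install docs, and the extra command from the Digital Geography post is not reproduced here:

```shell
# Add the QGIS repository for Ubuntu 12.04 (precise)
echo "deb http://qgis.org/debian precise main" | sudo tee -a /etc/apt/sources.list

# Import the QGIS public signing key to resolve the apt key errors
gpg --keyserver keyserver.ubuntu.com --recv-key 47765B75
gpg --export --armor 47765B75 | sudo apt-key add -

sudo apt-get update
sudo apt-get install qgis python-qgis
```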
Next, back to QGIS to fire up a SAGA-powered Processing routine, and the same error strikes again. More searching and reading, now it’s about differences between SAGA versions and what’s in the Ubuntu repos. With no clear answer, I checked the Processing “Options and Configuration” dialog, which has a “Providers” section, which has an “Enable SAGA 2.0.8 Compatibility” option. Upon selecting the option, boom, all the Processing functions open. Mind you, I haven’t checked whether they actually work; after several hours, they now at least open.
It is clear that the evolution of open source geo took several steps forward this last week, and the pieces are converging into what will be a true end-to-end, enterprise-grade geographic toolkit. And while usability is improving, in my opinion it is still the biggest weakness. Given the high switching costs for organizations and individuals to adopt and learn open source geo, the community must continue to drive down those costs. It is unfair to ask the individual developers who commit to these projects to fix the usability issues, as they are smart enough to work around them. Seems to me it is now up to the commercial users to dedicate resources to making the stack more usable.
The Humanitarian Information Unit (HIU) has released several new datasets that leverage the Office of the Geographer’s work on mapping international boundaries. The Large Scale International Boundaries (LSIB) dataset, maintained by the Geographic Information Unit (GIU), is a vector line file that is believed to be the most accurate worldwide (non-Europe, non-US) international boundary vector line file available. The lines reflect U.S. government (USG) policy and thus not necessarily de facto control (cited from metadata attached to files). In September 2011, the HIU first released the boundaries publicly for download. After that release, colleagues at DevelopmentSeed made some substantial improvements to the underlying data structure that helped lead to this work.
The LSIB dataset is designed for cartographic representation and map production. However, this poses a problem for GIS analysis, because the dataset is composed only of vector lines of terrestrial boundaries between countries. This means the lines do not contain coastlines and cannot be converted into polygons for GIS analysis. To address this issue, the HIU combined the LSIB dataset with the World Vector Shorelines (1:250,000) dataset. The combination of these two datasets is one of the highest resolution country polygon datasets available. Additionally, the LSIB-WVS polygon file is believed to be the most accurate available dataset for determining island sovereignty. It corrects the numerous island sovereignty mistakes in the original WVS data (cited from metadata attached to files).
Two other modifications were made to the datasets. First, the large cartographic scale of the data also introduces a problem in that the data are too detailed for global scale mapping. Therefore, the HIU also created “generalized” versions of the original LSIB-WVS polygons that are suitable for smaller scale mapping. Second, in order to facilitate the ability to “join” data to the polygons in a GIS, several attributes were added to the database, including Country Name and several ISO 3166-1 Country Codes (ISO Alpha 2, ISO Alpha 3, and ISO Number). After a year of work, the data have been released into the public domain.
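To illustrate why the added ISO codes matter, here is a minimal sketch of a table join keyed on the ISO Alpha 3 attribute; the sample rows and population figures are hypothetical placeholders, not values from the released datasets:

```python
# Hypothetical sample rows; "iso_alpha3" is one of the ISO 3166-1 codes
# added to the LSIB-WVS polygon attributes, and is what makes joins reliable.
polygons = [
    {"country_name": "Kenya", "iso_alpha3": "KEN"},
    {"country_name": "Uganda", "iso_alpha3": "UGA"},
]
indicators = {"KEN": 44.0, "UGA": 37.6}  # e.g. population in millions

# Join the external table to the polygon attributes on the shared ISO code.
joined = [
    {**p, "population_millions": indicators.get(p["iso_alpha3"])}
    for p in polygons
]
print(joined)
```

Joins on country names alone are fragile ("Ivory Coast" vs. "Côte d’Ivoire"), which is exactly the problem a standardized code column avoids.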
All datasets can be downloaded from the HIU Data page or the links below:
Note, both the polygon and line datasets are useful for cartographic representation. This is due to the variety of different boundary classifications that are in the LSIB. Below is a subset from the metadata attached to the datasets that describes USG cartographic representation of the boundary lines.
From the LSIB lines metadata: The “Label” attribute field provides a name for any line requiring non-standard depiction, such as “1949 Armistice Line” or “DMZ”
The “Rank” attribute categorizes lines into one of three categories:
a) A rank of “1” (includes most of the 320 international boundaries) for those which the USG considers “full international boundaries.”
b) A rank of “3” for other lines of international separation. Most are considered by the US government to be in dispute.
c) A rank of “7” for other lines of separation such as DMZ’s, No-Mans Land (Israel), UNDOF zone lines (Golan Hts.), Sudan’s Abyei, and for the US Naval Base Guantanamo Bay on Cuba.
Any line with a rank of “3” or “7” is to be dotted or dashed differently and in a manner visually subordinate to the normal rank “1” lines.
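A minimal sketch of what that guidance could look like in code; the rank values come from the LSIB metadata above, but the specific stroke styles are illustrative choices, not an official USG specification:

```python
def line_style(rank: str) -> dict:
    """Return a drawing style that keeps rank 3/7 lines visually
    subordinate to the solid rank 1 international boundaries."""
    if rank == "1":
        return {"stroke": "solid", "width": 1.0}    # full international boundary
    elif rank == "3":
        return {"stroke": "dashed", "width": 0.6}   # other lines of separation
    elif rank == "7":
        return {"stroke": "dotted", "width": 0.6}   # DMZs, armistice lines, etc.
    raise ValueError(f"unexpected LSIB rank: {rank!r}")

print(line_style("3"))
```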
Additional information about how the LSIB dataset is produced, and about the processes that went into producing the new datasets, is included in the metadata.
And for more information about the Office of the Geographer, see the article from State Magazine below:
‘Disruption is a theory: a conceptual model of cause and effect that makes it possible to better predict the outcomes of competitive battles in different circumstances’ — The Innovator’s Solution
My PhD dissertation at the University of Kansas is entitled “The Disruptive Potential of GIS 2.0: An application in the humanitarian domain”. The research involves several interrelated philosophical, technological, and methodological components, but at its core, it is about building a new way to harness the power of geographic analysis. In short, the idea is to show how Geographic Information Systems (GIS) has evolved into something different than it was before, explore the dynamics of that evolution, then build new tools and methods that capitalize on those dynamics.
The foundation of the argument is that a new generation of digital geographic tools, defined here as GIS 2.0, have completely changed how core GIS processes are implemented. While the core functions of a GIS remain the same — the creation, storage, analysis, visualization, and dissemination of geographic data — the number of software packages capable of implementing spatial functions and the distribution capacity of the Internet have fundamentally changed the desktop GIS paradigm. Driving GIS 2.0 is a converging set of technology trends including open source software, decreasing computation costs, ubiquitous data networks, mobile phones, location-based services, spatial databases, and cloud computing. The most significant, open source software, has dramatically expanded access to geographic data and spatial analysis by lowering the barrier to entry into geographic computing. This expansion is leading to a new set of business models and organizations built around geographic data and analysis. Understanding how and why these trends converged, and what it means for the future, requires a conceptual framework that embeds the ideas of the Open Source Paradigm Shift and Commons-based Peer Production within the larger context of Disruptive Innovation Theory.
While there is a philosophical element to this argument, the goal of the dissertation is to utilize the insights provided by disruptive innovation theory to build geographic systems and processes that can actually make a difference in how the humanitarian community responds to a complex emergency. It has long been recognized that geographic analysis can benefit the coordination and response to complex emergencies, yet the deployment of GIS has been hampered by a set of issues related to cost, training, data quality, and data collection standards. Using GIS 2.0 concepts there is an opportunity to overcome these issues, but doing so requires new technological and methodological approaches. With utility as a goal, the research is structured around three general sections:
GIS 2.0 Philosophy: Exploring the fundamental reorganization of GIS processes, and building a conceptual model, based on disruptive innovation theory, for explaining that evolution and predicting future changes
GIS 2.0 Technology: Utilizing GIS 2.0 concepts to build the “CyberGIS”, a geographic computing infrastructure constructed entirely from free and open source software
GIS 2.0 Methodology: Leveraging the CyberGIS and GIS 2.0 concepts to build the “Imagery to the Crowd” process, a new methodology for crowdsourcing geographic data that can be deployed in a range of humanitarian applications
In the next series of posts I will explore each of the points above. My goal is to complete the dissertation in the coming months and I want to use this blog as a staging ground for drafts, chapters, and articles that can be submitted to my committee. As such they will likely be a bit rough. I am a perfectionist in my writing, which only serves to completely slow down my productivity, so hopefully this will force me to “release early and often.”
The core arguments of GIS 2.0 were originally conceived during 2006-2008, so they are a bit dated now. At the time there was really only anecdotal evidence to support the argument that the same Web 2.0 forces that built Wikipedia and disrupted the encyclopedia market were going to impact GIS. However, with the continued rise of FOSS4G, OpenStreetMap, and now the Humanitarian OpenStreetMap Team (HOT), it feels almost redundant to be making this argument now. Additionally, from the technology perspective there are lots of individuals and groups out there doing more cutting edge work than I ever will, but I hope the combination of philosophical approach and actual implementation can be a contribution to the discipline of geography — and more importantly, help the humanitarian community be more effective.
One of the interesting things about the “Imagery to the Crowd” projects has been the positive feedback we have received from a range of different communities. Ultimately we built the process from a belief that free and open geographic data could support the effective provision of humanitarian assistance, and that the power of open source software and organizations were the key to doing this efficiently.
Our goal with Imagery to the Crowd is to provide a catalyst, in the form of commercial high-resolution satellite imagery, to enable the volunteer mapping community to produce data in areas experiencing (or in risk of) a complex emergency. In many ways I thought of this process as trying to link the “cognitive surplus” of the crowd with the purchasing power of the United States Government, to help humanitarian and development organizations harness the power of geography to do what they already do better.
Somewhat surprisingly, a community outside of the humanitarian sector recognized the potential impact of this process, and the HIU was awarded the United States Geospatial Intelligence Foundation (USGIF) Government Achievement Award 2012 (Press Release, Symposium Daily pdf, Video page). The award was presented at the GeoInt Symposium in Orlando, FL (Oct 7-14 2012). Below is a video of the awards presentation, and includes the Academic and Industry Division winners from this year. The section on the HIU begins around the 7:25 mark.
At the conference I was also on a panel in a “GeoInt Forward” session focused on open source software. This panel was actually the best part of the conference. Typically the first day of the GeoInt Symposium is reserved for the golf event, but this year the organizers included an additional day of panel sessions. In general these sessions were very well attended, and with a full house of approximately 250 people, the session on open source software exceeded my expectations. The session description and other panelists are listed below; the defense and intelligence perspective of GeoInt is clear, but it was an interesting group doing work across a range of different applications. I tried to provide a bit of balance and discussed the philosophical approach to open source, and its potential as an organizing principle for organizations. The Imagery to the Crowd project is built on a cloud-hosted open source geographic computing infrastructure, so I could speak to the reality of this system. It seems that the coming budget austerity has generated significant interest in open source, and now could be a golden opportunity.
From the conference proceedings:
“Open Source Software (OSS) has moved from being a backroom, developers-only domain to a frontline component inside key military capabilities. OSS isn’t doing everything—yet—but it is slowly commoditizing key strategic parts of geospatial infrastructure, from operating systems to databases to applications. In this session, key government program managers will discuss where and how they see OSS moving to solve warfighter needs, as well as assess the gaps in OSS investment and capabilities.”
Moderator – John Scott, Senior Systems Engineer & Open Tech Lead, RadiantBlue
• John Snevely, DCGS Enterprise Steering Group Chair
• Col Stephen Hoogasian, U.S. Air Force, Program Manager, NRO
• Keith Barber, Senior Advisor, Agile Acquisition Strategic Initiative, NGA
• John Marshall, Chief Technology Officer, J2, Joint Staff
• Dan Risacher, Developer Advocate, Office of the Chief Information Officer, DoD
• Josh Campbell, GIS Architect, Office of the Geographer & Global Issues, State Department
Here is my ignite talk on the “Imagery to the Crowd” project from the International Conference on Crisis Mapping (ICCM 2012). I’ve attended each of the four ICCM conferences (Cleveland, Boston, Geneva, Washington DC). They have been a great way to understand the organizations that comprise the humanitarian community, and more importantly, meet the individuals who power those organizations. It was exciting to present on our work at the HIU, and contribute back to the Crisis Mapping community.
All of the Ignite talk videos are available at the Crisis Mappers website (lineup .pdf) and collectively they represent a solid cross-section of the field. At the macro level, I believe the story continues to be about the integration of these new tools and methodologies into established humanitarian practices. The toolkits are stabilizing (crowdsourcing, structured data collection using SMS, volunteer networks, open geographic data and mapping, social media data mining) and are being adopted by the major humanitarian organizations. While I am partial towards crowdsourced mapping, the Digital Humanitarian Network and the UN OCHA Humanitarian eXchange Language (HXL) are two other exciting projects.
Available immediately, the U.S. Department of State is looking to fill two positions related to geographic analysis and geographic programming. The Office of eDiplomacy, the State Department’s knowledge management gurus, wants to build a first-rate geographic application development team. The two-person team will work with DoS bureaus to understand their workflows, leverage the geographic components of their data, and build custom geographic applications to help them. The team will be composed of one GIS applications developer and one GIS analyst. Each position requires a substantial overlap in skills, meaning the developer must understand GIS analysis and the analyst must have some programming experience.
In that capacity, I started a project to build a completely open source geographic computing infrastructure focused on humanitarian applications. Called the CyberGIS, this project is built exclusively from open source geographic technology, including PostGIS/PostgreSQL, GeoServer, TileCache, OpenLayers, and TileMill, along with the standard Ubuntu, Apache, Tomcat, and jQuery components, and we host our production environment in Amazon Web Services. Using the term CyberGIS was intentional, intended to place the project in line with ongoing efforts in the academic community to unite the worlds of geographic information science and cyberinfrastructure.
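As a rough sketch of how the pieces in a stack like this fit together: PostGIS stores the data, GeoServer publishes it over OGC standards such as WMS, and a browser client (OpenLayers) requests rendered map images. The snippet below builds a standard WMS GetMap request URL; the host and layer names are hypothetical placeholders, not actual HIU endpoints:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url: str, layer: str, bbox: tuple, size=(256, 256)) -> str:
    """Build a WMS 1.1.1 GetMap request URL for a GeoServer-style endpoint."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(c) for c in bbox),  # minx,miny,maxx,maxy
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical endpoint and layer name, for illustration only
url = wms_getmap_url("http://example.org/geoserver/wms", "hiu:boundaries",
                     (32.0, 2.0, 33.0, 3.0))
print(url)
```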
We have used this infrastructure to build several HIU geographic web applications, including the Imagery to the Crowd projects. These award-winning projects are just the beginning for the CyberGIS at the HIU; we have several applications under development that we hope to unveil publicly in the coming months. The success of the HIU CyberGIS has raised the profile of geography in the Department, and the fact that eDiplomacy is building this development team is a huge step in expanding the power of geography to the entire State Department. Two years ago I could not have expected that we would move this far this fast, and now we have an opportunity to fundamentally influence how the Department operates.
If you have serious GIS analysis and open source geographic development skills and want to be part of a geographic revolution, then I encourage you to apply. We need forward-leaning, capable folks who fundamentally understand spatial analysis and geographic technology. You must be willing to work hard and to lead in showing how geographic data and analysis can improve American diplomacy. This is a unique moment and we need the right team.
The main Careers page on the ActioNet website is here
However, I have a problem. I work in an organization whose primary product is a hard-copy map. As we evolve our product line, our challenge is to produce digital, interactive, web-enabled versions of our hard-copy map content while maintaining a high level of cartographic goodness. Our cartographic team works in Adobe Illustrator and is quite proficient in it.
The problem we face is having to do cartographic work twice in order to switch between Illustrator and the Web. There has to be a better way. We are currently using TileMill for a significant amount of our web rendering, but also intend to transition to GeoServer. This means we have .AI, CartoCSS and .SLD in the mix.
We need to keep Illustrator as our foundation for creating content, so what is the best option to extend to the web? First is the problem of converting from .AI to CartoCSS, specifically the conversion of graphic styles for each Illustrator layer to its CartoCSS equivalent. I’ve never heard of a converter tool for this and am interested to know if anyone (@opengeo @ortelius @mapbox @ericg @tmcw @springmeyer @kelso @mattpriour…anyone at Adobe) has an idea if it is feasible or the amount of effort it would take.
Second is the problem of having to convert .AI to both CartoCSS for TileMill and SLD for GeoServer. The best option would be to have GeoServer consume CartoCSS natively, that way offline tiles and web services could maintain the same cartographic styling.
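To show the shape of the conversion problem, here is a toy sketch that maps a single CartoCSS line rule to its rough SLD equivalent. A real converter would need a full CartoCSS parser; this handles only the simplest line-color/line-width case, and the input rule is a made-up example:

```python
import re

def carto_line_to_sld(carto: str) -> str:
    """Toy conversion of one CartoCSS line rule to an SLD LineSymbolizer."""
    color = re.search(r"line-color:\s*(#[0-9a-fA-F]{6})", carto)
    width = re.search(r"line-width:\s*([\d.]+)", carto)
    return (
        "<LineSymbolizer><Stroke>"
        f'<CssParameter name="stroke">{color.group(1)}</CssParameter>'
        f'<CssParameter name="stroke-width">{width.group(1)}</CssParameter>'
        "</Stroke></LineSymbolizer>"
    )

sld = carto_line_to_sld("#roads { line-color: #ff0000; line-width: 2; }")
print(sld)
```

Even this trivial case hints at why a full .AI to CartoCSS converter is hard: each format models strokes, fills, and rule scoping differently, so the mapping is rarely one-to-one.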
I posed similar questions on Twitter to @cageyjames and @spara, and @spara’s reply got me thinking whether I was looking at this question the wrong way. Is it too old school to be considering tools like this? I wanted to know if anyone else had been thinking about it.
Thankfully @mattpriour replied that work was already being planned at OpenGeo to implement CartoCSS for GeoServer. And @godwinsgo also replied that GeoServer already has some form of CSS style rendering. So it looks like CartoCSS in GeoServer has a chance of being completed; that just leaves the .AI to CartoCSS conversion.
The purpose of this post is simply to collect in one place some of the amazing animations ITO World has produced from the OpenStreetMap database. I am often searching around on Vimeo to find them, so I thought it might be useful to put them here, especially as several new ones have been recently released. These visualizations come across as very professional, with high production values and a good soundtrack. I don’t personally know any of the folks at ITO World, but would love to know what software they use to produce the animations.
The one I still find the most amazing is the animation depicting the Haiti earthquake response. I often use this animation to help explain the value of OpenStreetMap and the volunteer mapping community in a disaster response situation. The “Imagery to the Crowd” concept is a direct result of the Haiti response.
The Humanitarian Information Unit has for the second time worked with the Humanitarian OpenStreetMap Team to deliver high resolution commercial satellite imagery to the crowd. For this project we helped support the American Red Cross with a disaster risk reduction project focused on the cities of Gulu and Lira in northern Uganda. Details of the project can be found on the Red Cross blog, “We Start With A Good Map,” and the recent Red Cross news article “New Mapping Technologies for the Developing World.” One exciting element of this project is that ARC staff are working directly with locals in-country on the project and helping to provide additional local knowledge to the map.
The HIU tasked, processed, and served the imagery using its CyberGIS computing infrastructure (more on this coming). The imagery services have been running for a couple weeks and the mapping results are quite stunning. The amount of detail in Gulu surprises me every time I look at it, especially the trees, huts, and buildings. The maps below are interactive and can be used to zoom and pan around the OpenStreetMap data. Details on how to help with the mapping task, or any other mapping task, can be found at the OSM Tasking Server.