Githubbing the GeoPackage specification

In the past few months one of the main things I’ve spent time on is the new GeoPackage specification being worked on within the OGC. I was involved in the very early days of its conception, before it was picked up for the OWS-9 testbed, as I feel it has the potential to fill a big hole in the geospatial world (James Fee sums up my feelings nicely). We discussed keeping it super lean, accessible to all, and implementable by non-GIS experts.

While I got wrapped up in other things, a spec for public comment popped out in January, which I felt had strayed from what I believed were the original core design goals. I sent a huge missive on all the changes I felt could get it back to that lean core. I try to never be one to just criticize though, as it’s easy to cast stones and much harder to build something that is strong enough to hold up to others pelting rocks. And my open source roots teach me that criticism without an offer to put effort into the alternative is close to worthless. So I decided to join the GeoPackage ‘Standards Working Group’, participating in weekly (and then twice-weekly) calls, and trying to work with the OGC workflow of wikis and massive Word documents.

One of my main goals was to learn about how the OGC process actually works, and hopefully from a place of knowledge be able to offer some suggestions for improvement from my open source software experience. That’s worth a whole post on its own, so I’ll hold off on much of that for now.

OGC staff has also been great about being open to new ways of working. It’s had to be balanced against actually getting the specification out, as many agencies need it ‘yesterday’. But we achieved an OGC first, putting this version of the specification out on GitHub. My hope is to attract developers who are comfortable with GitHub, who won’t need to learn the whole OGC feedback process just to suggest changes and fixes.

I do believe the specification is ‘pretty good’ – it has improved in almost all the ways I was hoping it would (Pepijn and Keith drove the tech push to ‘simpler’, with Paul doing a great job as Editor and Raj from OGC helping out a bunch). I do believe GeoPackage can improve even more, primarily through feedback from real implementations. My hope is it does not go to version 1.0 without three full implementations – ones that don’t just handle one part of the document, but implement all the core options (particularly Tiles and Features).

For the OGC this is an experiment, so if you do think OGC specifications could benefit from a more open process you can help encourage them by spending some of your technical brainpower on improving GeoPackage using GitHub (big shout out to Even, who already has a bunch of suggestions in). Fork the spec and make Pull Requests, on core tech decisions or on editorial tweaks that make it easier to understand. For those without GitHub experience we wrote some tips on how to contribute without needing to figure out git. We also tried to add a mechanism for ‘extensions’ to encourage innovations on the GeoPackage base, so I’d also love to see attempts to use that and extend it for things like Styles (SLD or CSS), Rasters, UTFGrids and Photo Observations.

And if you can’t fully implement it but are interested in integrating, I do encourage you to check out the Apache-licensed libgpkg implementation from Luciad. Pepijn has been the technical core of GeoPackage, researching what’s come before and continually building code to test our assumptions. Libgpkg is the result of his work, and the most complete implementation. Justin also has a GeoTools implementation and a GeoServer module built on it for the Java crowd. And hopefully someone will implement it for GDAL/OGR and PostGIS soon.
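For a feel of how approachable the format is: a GeoPackage is just a SQLite file whose gpkg_contents table advertises the tile and feature tables it holds, so even a few lines of Python can poke at one. A minimal sketch, assuming a file that follows the draft spec’s gpkg_contents columns (the file name is a placeholder):

```python
import sqlite3

def list_gpkg_layers(path):
    """List the layers a GeoPackage advertises via its gpkg_contents table."""
    conn = sqlite3.connect(path)
    try:
        # table_name, data_type ('features' or 'tiles') and identifier are
        # columns defined by the draft spec's gpkg_contents table.
        return conn.execute(
            "SELECT table_name, data_type, identifier FROM gpkg_contents"
        ).fetchall()
    finally:
        conn.close()

# 'example.gpkg' is a placeholder file, e.g. one produced by libgpkg.
for name, data_type, title in list_gpkg_layers("example.gpkg"):
    print(f"{name:20s} {data_type:10s} {title}")
```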

As always I’ve written too much before I’ve even said all I wanted to. But I will try to do at least one follow-up blog post to dig into some of the details of GeoPackage. I and the SWG welcome an open dialog on the work done and where to take it next. And I’m happy to explain the thinking that got the spec to where it is, since we wanted to keep the spec itself slim and not include all that detail. We are hoping to eventually publish some ‘best practice’ papers that can dig into ideas we considered but discarded as not quite ready.

OpenGeo.org

So within the GeoServer community we’re debating whether it’s kosher to do a fairly blatant ‘commercial’ announcement on the blog about OpenGeo.org. But in the meantime I figured I’d just announce here, since I can do whatever I want on my personal blog 🙂

I’m really excited to present OpenGeo, the newly minted geospatial division of The Open Planning Project. Nothing much is changing internally, but we’re getting serious about our image in the world. We’ve been supporting open source geospatial projects for years, and in the past couple of years we’ve offered great consulting services around the projects we work on. But it’s always been confusing for people who don’t already know our work. They might see an openplans.org email address on the lists, follow it to web tools for community organizers, click from there on The Open Planning Project logo, which links to a high-tech non-profit, then maybe click ‘projects’ and see GeoServer, which they’ve been using, and from there click on ‘services’ to realize that yes, you can in fact pay us money to work on this. I’m not convinced that anyone made it that far.

So OpenGeo.org is about giving a more visible face to our services and products, so we can bring the geospatial work in TOPP to economic sustainability with full cost recovery. It also marks the launch of ‘GeoServer Enterprise‘ packages, which bundle web and telephone support, priority bug fixes, discount consulting rates, and a number of implementation hours by the experts. This covers the full OpenGeo stack – OpenLayers, GeoWebCache and GeoServer. We’re hoping this makes it easier for more conservative organizations to embrace open source by establishing something much closer to a traditional vendor relationship. It should provide a clear answer to the classic open source question, ‘Who do I call when something goes wrong?’ – with GeoServer Enterprise you call us, the experts who wrote the software. We believe the total package is incredibly competitive with any offering, proprietary or open source, for the level of service and expert access provided – you’re not paying for the intellectual property of something we built in the past, you’re paying directly for our time now. We already have two clients on board, which is very exciting. The website will be improved with demos and more content, but we’re quite pleased with what we’ve got. Recently many contracts have been coming in, and the OpenGeo launch should help us grow even more. So if you’re looking for a job at a geo dot-org drop me a line; we’re definitely hiring for all types of roles, and soon should update the website with specifics.

Letting everyone remix web maps

I’ve been meaning to lay down my little vision of how web mapping configuration should work for a while. It’s not a big idea, but a nice little way to perhaps bring more participation into the creation of web maps, making it possible for anyone to make mash-ups. I believe Google Mapplets gets at a lot of this, but I’d like to add on standards-based services that can bring in real GIS layers and restyle the base maps.

I should start with a view, either of a decent base map (like a nice blue marble layer) or of a view that someone else composed. I can browse around, or I can go to ‘edit mode’ to configure more of what I want to see. From there I can specify a WMS, a WFS/WCS, a GeoRSS feed or a KML. If there are multiple layers on the service I should be able to specify which one I want to add. I can add in the various services, and maybe even add some annotations. Once I compose a view that I like I can hit ‘save’, and it’ll save the map to my user account, or else export it out as an OWS Context document. I should also be able to say ‘embed’ and have it auto-create the OpenLayers javascript (pointing at the Context document created) that I can put on my blog or webpage. My annotations will be done as inline KML in the context document.

Next, if the services are WFS/WCS, or WMS that allows remote SLD, I can change the styles of the layers I brought in. I might want to emphasize one layer more than others, or even just a particular feature of the layer. I am able to easily do thematic mapping as well, specifying which attribute to do quantiles or equal intervals on, with the colors I choose. This remote SLD can be saved to the Context document directly, so I don’t even need permissions on a server. If I have permissions on the WMS server I can then upload the new SLD as an option for others.
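To make the remote SLD idea concrete, here is a rough sketch of the request the map client could issue: a standard WMS GetMap with the user’s style passed inline through the SLD_BODY parameter, so nothing has to be saved on the server. The endpoint and layer name are placeholders; the SLD_BODY mechanism itself comes from the OGC SLD profile of WMS.

```python
from urllib.parse import urlencode

# Hypothetical endpoint and layer; any SLD-enabled WMS should accept SLD_BODY.
WMS = "http://example.org/geoserver/wms"

sld = """<StyledLayerDescriptor version="1.0.0"
  xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc">
  <NamedLayer><Name>topp:states</Name>
    <UserStyle><FeatureTypeStyle><Rule>
      <PolygonSymbolizer>
        <Fill><CssParameter name="fill">#cc3300</CssParameter></Fill>
      </PolygonSymbolizer>
    </Rule></FeatureTypeStyle></UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>"""

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "topp:states", "STYLES": "",
    "SRS": "EPSG:4326", "BBOX": "-125,24,-66,50",
    "WIDTH": 512, "HEIGHT": 256, "FORMAT": "image/png",
    "SLD_BODY": sld,  # inline restyle, no server permissions needed
}
print(WMS + "?" + urlencode(params))
```

Saving the restyle into the Context document is then just a matter of storing that same SLD alongside the layer entry.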

Past that, I can get better performance on the map I configured by pointing the configuration at a GeoWebCache or TileCache instance that I have set up or have appropriate rights on. I can completely drive configuration of GeoWebCache through the web UI, to set which layers to turn on and off. I can even start it seeding the layers, or have it expire part of the cache, based on bounding box and number of zoom levels.
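The REST admin piece doesn’t exist yet, so this is purely a hypothetical sketch of what driving a seed job over HTTP could look like; the endpoint and XML body are invented for illustration, not GeoWebCache’s actual interface.

```python
import urllib.request

# Entirely hypothetical endpoint and payload: the point is just that a web UI
# could drive seeding with a small HTTP call scoped by bbox and zoom levels.
seed_request = """<seedRequest>
  <layer>osm:roads</layer>
  <bounds>-74.05,40.68,-73.85,40.82</bounds>
  <zoomStart>0</zoomStart>
  <zoomStop>14</zoomStop>
</seedRequest>"""

req = urllib.request.Request(
    "http://example.org/geowebcache/rest/seed/osm:roads.xml",
    data=seed_request.encode("utf-8"),
    headers={"Content-Type": "text/xml"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```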

Then, if I have a GeoServer set up, or at least layer-creation rights on a server, I can start creating new data through WFS-T. This is beyond annotations: I can create a new layer that persists on the server, centralizing its editing. This is important because I can also set the edit permissions so other users can start editing the same layer. I can also choose to turn on versioning, to keep a history of which users make which edits. I can control the other user permissions, opening it up to everyone or just a select few. All data I create is available as WMS/WFS/WCS, KML, GeoRSS, GeoJSON, etc.
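The transaction behind that ‘create new data’ step is just XML posted to the server. A minimal sketch inserting one point feature over WFS-T (the server URL, namespace and attribute names are placeholders):

```python
import urllib.request

# Placeholder endpoint and feature type; the Transaction/Insert structure is
# standard WFS 1.0 with a GML point geometry.
WFS = "http://example.org/geoserver/wfs"

transaction = """<wfs:Transaction service="WFS" version="1.0.0"
    xmlns:wfs="http://www.opengis.net/wfs"
    xmlns:gml="http://www.opengis.net/gml"
    xmlns:myns="http://example.org/myns">
  <wfs:Insert>
    <myns:sightings>
      <myns:species>brown spotted pigeon</myns:species>
      <myns:location>
        <gml:Point srsName="EPSG:4326">
          <gml:coordinates>-73.97,40.68</gml:coordinates>
        </gml:Point>
      </myns:location>
    </myns:sightings>
  </wfs:Insert>
</wfs:Transaction>"""

req = urllib.request.Request(
    WFS, data=transaction.encode("utf-8"),
    headers={"Content-Type": "text/xml"}, method="POST")
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode()[:200])  # transaction result / inserted fids
```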

I can also upload a shapefile or GeoTIFF right through the web interface. Using the same styling utilities I can configure it to look as I like and make that the default style. I can choose to turn caching on there too. I can invite others to start editing the shapefile I created. I can also always export it as a shapefile, or any other format. All these more advanced layers are also saveable as a ‘view’, as an OWS Context document.
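Behind that web interface, the upload can be a single HTTP call against GeoServer’s REST interface. The sketch below shows roughly what that looks like; the workspace, store name and credentials are placeholders, and the exact path may differ between versions.

```python
import base64
import urllib.request

# Placeholders: server URL, workspace 'topp', store 'sightings', credentials.
url = "http://example.org/geoserver/rest/workspaces/topp/datastores/sightings/file.shp"

with open("sightings.zip", "rb") as f:   # zip containing .shp/.dbf/.shx/.prj
    data = f.read()

auth = base64.b64encode(b"admin:geoserver").decode()
req = urllib.request.Request(
    url, data=data, method="PUT",
    headers={"Content-Type": "application/zip",
             "Authorization": "Basic " + auth})
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 201-ish response once the store and layer are configured
```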

Whenever I put the javascript map anywhere it will always have a ‘configure the map’ button, for others to start to remix.  Once they hit the ‘configure the map’ button they get a nice listing of the layers and data that make up the map.  They can restyle and change layers to their own liking with no permissions.  And if they want to start caching or adding data they can get rights on a server, or just set up their own server.  Through this I hope we can make it incredibly easy for anyone to create their own map, and add a sort of ‘view source’ to maps that anyone can use, not just javascript programmers.  I feel this gets beyond mere ‘wizards’, which take you through a set path to add the portions you want, to a more flexible remixing environment that can hopefully encourage more lightweight maps that aren’t hard to create.  I believe this should also be the way the majority of people configure their GeoServers, though we should also have a more advanced admin interface to change all the little settings.

In terms of technology, we’re actually quite close to this vision. It really leverages a lot of the standards architecture, utilizing OWS Context, remote SLD and WMS/WFS. We need to add the REST admin part of GeoWebCache, but the ability to upload a shapefile or GeoTIFF is now in a REST interface for GeoServer, thanks to the work of David Winslow. We just need a front end in OpenLayers to edit it. For OWS-5 we did inline KML in Context documents. Versioning is also in place, and though we need a bit more work on granular security permissions, the framework is there. And we have funding for an AJAX SLD editor, which should be the final major piece of this. Then we just need to iterate through making the UI incredibly intuitive and easy to use. But I’m excited for this vision, to bring the power of ‘real’ GIS – large datasets and styling of basemaps – to the easy-to-use style of the GeoWeb.

That’s all for me for now. Two blog posts in the span of less than a week – I guess this means I don’t have to post again for months 😉

Slides from GSDI-10

Just finished up a great week at GSDI-10 in Trinidad. I came down with Justin Deoliveira; he put on a great workshop on Monday, and we both had talks on Thursday. Sometime soon we should post the workshop modules on the GeoServer blog, but I wanted to get my talk online as several people asked about it. It was a fun talk, bringing together a lot of what I spent my time thinking about in Zambia around SDIs, open source and Google Maps/Earth. The conference is great because it gathers many of the heads of national mapping agencies and other important players who work with lots of data and really want to figure out the best ways to share it. I think my talk was really well received; the timing felt right for the message: that we need to figure out how to combine what’s going on in the Google Maps/Virtual Earth world with the broader GSDI initiative into a single GeoWeb. I hope to turn the thoughts I presented into a paper at some point, but in the meantime my too-literal slides do get across much of my argument (sometime I’ll learn how to make visually compelling presentations…)

You can also download the PowerPoint. Reading the slides you may notice the very soft launch of OpenGeo.org, which is going to be our rebranding of the geospatial division of The Open Planning Project, so it’s more clear that we offer services and support packages around GeoServer and OpenLayers. Still lots of work to be done, but we were excited to at least get a start for this conference. Once we get it ready for launch we’ll announce for real.

Oh, and the license for the slide show is Creative Commons Attribution 3.0.

Great post on GeoData licensing

I have some major posts to write, but thankfully there are others out in the world thinking and writing about the same issues that I care about. Today I found ‘The license: where we are, where we’re going‘ on the OpenGeoData blog. It is exactly the post I wanted to write after attending the Science Commons meeting last year, and it goes on to add even more information that I wasn’t even aware of. It’s really great news to hear about progress being made on the Open Data Commons Database Licence. I really hope they continue to work on it, since I had the same reaction as many OSM community members to the news of Creative Commons’ recently published “protocol” on open access data – it’s great for scientists, but doesn’t help much with what we’re doing in the geospatial community. So thanks, Richard, for the post and the great work you are all doing over on the OSM legal list.

Producing, not consuming, makes the GeoWeb go ’round.

Aaron on Streetsblog just passed me a link to a post by Steven Johnson on what he calls ‘the pothole paradox‘. It’s a very nice introduction to GeoWeb ideas, and clearly explains to laypeople why geospatial metadata on everything could help. But for those of us working on building an open, interoperable GeoWeb it offers very little, except that their stovepipe system is going to attempt to solve it.

Why do I call outside.in a stovepipe? Because as it stands now it only sucks information in. He asserts that to be successful their system must be structured with two spatial properties:

1. Every single piece of data in the system has to be associated with some kind of machine-readable location, as precisely as possible.

2. Where appropriate, data should be associated with a human-readable place (a bar, school, playground, park, or even neighborhood.)

I would maintain that these are true for the broader GeoWeb, a system of systems. And I’m sure outside.in would love it if the entire web was structured that way, with them as the one who aggregates the results. Unfortunately, while they are in a position to actually do something about it, as far as I can find they just want it all in their own system.

Why do I draw that conclusion? Mostly because there’s no way to access the geo information that their community is encoding. They have a number of easy options to tag location on blog posts, including the GeoRSS standard that many of us are promoting. And they let people submit non-located stories and easily add location. They are actively adding that geospatial metadata layer, and then, as far as I can tell, keeping it all for themselves. Why do I say that? Because there’s no machine-readable output of the geospatial locations that they’re adding. If I subscribe to a feed of stories about Brooklyn there are no GeoRSS tags. Which means that if I want to make a service that mashes up the geotagged stories from outside.in I have to rewrite all their mechanisms for getting that spatial information.
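And producing that output is genuinely cheap. A sketch of tagging an Atom entry with a georss:point using nothing but Python’s standard library (the story itself is made up):

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
GEORSS = "http://www.georss.org/georss"
ET.register_namespace("", ATOM)
ET.register_namespace("georss", GEORSS)

entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Pothole on Smith St"
ET.SubElement(entry, f"{{{ATOM}}}id").text = "tag:example.org,2008:story/123"
# One extra element is all it takes to put the story on the GeoWeb:
ET.SubElement(entry, f"{{{GEORSS}}}point").text = "40.6862 -73.9903"

print(ET.tostring(entry, encoding="unicode"))
```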

Yes, this is arguably their core ‘value add’, since they have the community that adds all the geospatial metadata. Outside.in thus would love it if the rest of the web had GeoRSS on everything, and their users were the ones who added the information that the original authors did not tag. But if everyone thinks this way then we’re basically stuck in a prisoner’s dilemma.

Thankfully the big players in the GeoWeb have actually been getting out of this in the last couple of years. Almost two years ago at a Location Intelligence conference GeoRSS was all the buzz. Microsoft and Yahoo! both announced they would support it, and there was a lot of pressure on Google to do so as well. But all seemed utterly unaware that the key is not to be able to read GeoRSS, it’s to output it by default whenever someone uses their API. Google wasn’t yet doing anything with GeoRSS, still trying to make KML the one true format, but they suffered from the same thing – if you used the Google Maps API it would not output any machine-readable format, KML or otherwise.

At the time I stood up and asked the question, and it was weird how something so obvious to me was so hard to explain to them. They thought that just being able to read a standard was the most open thing they could do. But basically none of them were offering their information up for others to read. They all had users who were creating new geospatial information through their APIs, but it wasn’t contributing to ‘the’ GeoWeb, it was just making their own little web better. Each seemed to want to ‘own’ the web, which would constrain it to a bunch of competing projects, instead of a new public infrastructure like the internet. The latter web is the one they all wanted, and why they’re investing so hugely, but none wanted to blink, to be the one to cooperate since the other might defect.

Thankfully the past couple of years have seen some great progress on this. I wrote about Google’s progress after Where 2.0, which is impressive as they are the ones in the lead. Microsoft has arguably done even more: they were the first to take up my call to produce GeoRSS from their native ‘Collections’ API. They’ve also launched the ability to search KML (and GeoRSS I think/hope?) and easily import KML, now that it is becoming an open standard. This might be surprising to some, but one should remember that those who are behind always rush to standards. Microsoft’s IE3 was actually the first commercial browser to ship CSS support. We’ve seen a parallel in the geospatial world, with ESRI talking much more about standards, getting their services to implement OGC standards for real instead of just paying lip service. It looks like they are also embracing KML.

On Google’s side, their GeoSearch is now crawling GeoRSS, which is great (and would be able to crawl outside.in’s aggregated feeds if they published them). Unfortunately, looking just now, it appears that their push to get people to make their Google Maps using KML or GeoRSS is no longer a priority. For a while they were pushing this in their talks, like at the last Google Dev Day. It makes a ton of sense that it’s in their interest to do so, since more data produced that way means more data for their GeoSearch. But the documents appear to barely mention it. On the other hand MyMaps makes everything that users create available as KML, and that is arguably more important, because presumably people who just need to throw up some points and lines will use that. The more complicated uses of the Maps API would need more than KML or GeoRSS anyway. MyMaps could of course be more open, if it were to produce GeoRSS as well. Yahoo!’s APIs seem to have made no progress towards producing GeoRSS, which is unfortunate as they were the first to support reading it.

Ok, this is longer than I had intended (but that’s why I have a blog, so I don’t have to feel bad about writing too much, dammit!). But I hope that outside.in starts producing machine-readable feeds of the awesome geospatial tags their community is gathering, to help lead the growth of the geospatial web instead of just waiting to capture the hyperlocal part of it for their own sandbox. For the GeoWeb to truly succeed we’re going to need a huge number of organizations taking that leap towards cooperation over ownership. Thankfully the big guys seem to be providing decent leadership; I’d give them a solid ‘B’ on their progress the last few months (though Microsoft might even deserve an A).

Collaborative Mapping: Tools, cont.

The next major area of tool improvement I see is expanding the wiki notion of editing to more of a merging revision control model, with branches, versions, patches and eventually distributed repositories. The ‘patch‘ is a small piece of code that can be applied to a computer program to fix something. Patches are widely used in the open source software world, both to get the latest improvements and to allow those who have commit rights to a source repository to review outside improvements before putting them in. This helps create the meritocracy around projects, as they don’t let just anyone into the repository, since they might break the build. Such a case is less likely with maps, but sometimes core contributors might want to see a couple of sample patches before letting a new member in. In the GeoServer versioning WFS work we have a GetDiff operation that returns a WFS Transaction that can then be applied to another WFS. This fits in with the technical part of how a patch works – they’re really easy to apply to one’s dataset. But unfortunately a WFS Transaction is not as easy to read as a code patch. The other great thing about patches is that when leaf nodes are updating their data they can just request the change set – the patches – instead of having to do a full checkout. So I’m still not sure how to solve this problem; the WFS Transaction is the best I’ve got, but I think we can do better and have a nice little format that just describes what changed.
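To make ‘a nice little format that just describes what changed’ a bit less abstract, here is one hypothetical shape for a geo patch – nothing standardized, just the kind of record a GetDiff response could be boiled down to:

```python
import json

# Purely hypothetical patch format: a list of per-feature changes that a
# client could apply against its own copy of the layer.
patch = {
    "layer": "roads",
    "from_revision": 1041,
    "to_revision": 1042,
    "changes": [
        {"op": "update", "fid": "roads.5512",
         "attributes": {"name": "Smith St", "surface": "paved"}},
        {"op": "insert", "fid": "roads.new.1",
         "geometry": {"type": "LineString",
                      "coordinates": [[-73.99, 40.68], [-73.98, 40.69]]}},
        {"op": "delete", "fid": "roads.4410"},
    ],
}
print(json.dumps(patch, indent=2))
```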

Once we’ve got patches people are going to want the ability to merge changes.  If you made a patch and I made a patch and we both submit them then we need a way to see if they’re compatible.  Ideally you could merge at the feature level – if you change the road type and I change the road length of Interstate 5 then we shouldn’t get a conflict.  Even better, merge at the geometry level, if we changed different points on the road then those should merge nicely.  This will become important as people start to ‘check out’ their geo repositories, do edits, and then try to submit back in.  We could just do locking, which is what WFS-T does, but concurrent versioning is so much nicer – we just have to be able to pull off merging.
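A toy sketch of that feature-level merge rule: two edits to the same feature merge cleanly if they touch different attributes, and conflict only when they both change the same one.

```python
def merge_feature_edits(edit_a, edit_b):
    """Merge two attribute-level edits of one feature (dicts of changed
    attributes). Returns (merged_changes, conflicts)."""
    merged, conflicts = {}, {}
    for key in set(edit_a) | set(edit_b):
        a, b = edit_a.get(key), edit_b.get(key)
        if key in edit_a and key in edit_b and a != b:
            conflicts[key] = (a, b)      # both changed it, differently
        else:
            merged[key] = a if key in edit_a else b
    return merged, conflicts

yours = {"type": "interstate"}           # you changed the road type
mine = {"length_km": 2222}               # I changed the road length
print(merge_feature_edits(yours, mine))  # both changes apply, no conflict
```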

Right past merging is full-on branches, which of course are much easier to pull off if you’ve got nice merging in place. But branches will let people try out new geographic updates in their own sandbox before putting them on the mainline. This can lead to better reviews of the updates. And with nice branching and merging you would be able to let a number of people work concurrently on their own area of the map, merging them seamlessly. This is obviously a really hard problem, one that even ArcSDE has trouble with for the things people actually want to do. I do think we’ll be able to get there in the open source world; indeed I believe we have a better chance of achieving it, since once we get close we’ll get a lot of interest from people wanting it completed and meeting their needs, funding the iterative improvements.

The final piece, which I sort of don’t even want to think of yet since it’s damn hard, is distributed versioning. I do think it’s extremely important though, to let everyone have their own editing repository which can flow back into the main one. I like the distributed model a lot, and think it has great wins for geospatial. But since we’ve barely got an SVN equivalent I think it’s wiser to wait a bit on these issues till we sort out what a patch should look like. Indeed SVK was possible because SVN already existed. But I’m definitely excited by the possibilities, for every node of the map to have the potential to be edited. This can be a big win for areas with low bandwidth.

The next category of tool improvements is granular security settings. Right now there’s not even a way to limit editing the map to only some users. I think that many maps will flourish with the open-to-all editing style, making use of rollbacks to prevent vandalism. But some will likely want to keep the map to a set group of committers. This way one could get commit rights after doing a number of good patches, perhaps ensuring higher quality for some maps. You also might have different permissions for different users on different layers. We should be able to get all of that with our current GeoServer security system; we just need to hook up a UI for it. The trickier thing – a nice feature, and I think a possible one – is limiting users to certain geospatial areas or to features with specific properties. Since the security system is integrated at the code level, and lets us use aspects, I think that this should be possible; it will just take a bit of work to figure out.
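The area-restriction idea could start as simply as the following sketch – a hypothetical permission rule with a crude bounding-box containment test standing in for a real geometry check:

```python
class EditPermission:
    """Hypothetical rule: a user may edit a layer only inside an allowed
    bounding box (minx, miny, maxx, maxy)."""
    def __init__(self, user, layer, bbox):
        self.user, self.layer, self.bbox = user, layer, bbox

    def allows(self, user, layer, geom_bbox):
        if user != self.user or layer != self.layer:
            return False
        minx, miny, maxx, maxy = self.bbox
        gminx, gminy, gmaxx, gmaxy = geom_bbox
        return minx <= gminx and miny <= gminy and gmaxx <= maxx and gmaxy <= maxy

rule = EditPermission("anna", "roads", (-74.05, 40.55, -73.70, 40.92))
print(rule.allows("anna", "roads", (-73.99, 40.68, -73.98, 40.69)))  # True: inside
print(rule.allows("anna", "roads", (-80.0, 25.0, -79.9, 25.1)))      # False: outside
```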

Another area where I see a lot of potential innovation is distributed processing of tiles. Tiles are the clear winner for how to display geospatial information; Google Maps has raised the bar so that anything that isn’t tiled just feels out of date. But tiling takes a ton of processing power. Google is all set up to do it, but the rest of us aren’t. To fully cache http://sigma.openplans.org to zoom level 17 would have taken me about 5 months. Open Street Map has been making tremendous strides on this with their Tiles@Home initiative, which I am very impressed by. OSM is lucky in many ways, in that they have a project that people want to devote their spare CPU cycles to. It could be cool to set up marketplaces for processing of tiles, where companies that are going to keep their data private, or just that don’t have the reputation of OSM, can engage other nodes and give them micropayments for their work. Other areas of potential innovation include leveraging Amazon’s EC2 to process huge numbers of tiles. We’re also going to need to have the collaborative mapping stuff hook up with the tiling efforts, so that when there are massive edits the tiles can expire themselves and get processors started on generating new ones. We can likely leverage HTTP’s conditional GET functionality to let browsers and others cache geospatial data, but also get the most up to date data when it’s available.
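The ‘ton of processing power’ point is easy to see with a back-of-envelope calculation. This sketch counts the tiles covering a bounding box through a given zoom level using the usual spherical-mercator tile scheme, then divides by an assumed rendering rate; both the extent and the rate are made-up inputs, so treat the output as illustrative only.

```python
import math

def tiles_in_bbox(lon_min, lat_min, lon_max, lat_max, z):
    """Number of XYZ tiles covering a lon/lat bbox at zoom z."""
    def to_tile(lon, lat):
        n = 2 ** z
        x = int((lon + 180.0) / 360.0 * n)
        lat_r = math.radians(lat)
        y = int((1.0 - math.log(math.tan(lat_r) + 1 / math.cos(lat_r)) / math.pi) / 2.0 * n)
        return x, y
    x0, y1 = to_tile(lon_min, lat_min)   # southern edge has the larger y index
    x1, y0 = to_tile(lon_max, lat_max)
    return (x1 - x0 + 1) * (y1 - y0 + 1)

bbox = (-125.0, 24.0, -66.0, 50.0)       # roughly the continental US
total = sum(tiles_in_bbox(*bbox, z) for z in range(18))   # zoom 0..17
rate = 20.0                              # assumed tiles rendered per second
print(f"{total:,} tiles, ~{total / rate / 86400:.0f} days at {rate} tiles/sec")
```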

The last area I’d like to see improvement on is more granular notification mechanisms. GeoRSS output is the obvious choice, but we could also do email or SMS notifications. Speaking of which, I’d love more innovation on mobile clients, and even super low-tech versions, like being able to SMS in a new or updated location by just entering cross streets or reading a position from GPS. But one should be able to have the notifications based on very granular rules – ‘send updates for highways in this bounding box’, or ‘email all occurrences of the brown spotted pigeon along this river bank’. This would be useful not only for preventing vandalism, but also for enabling people to take action on up-to-date reports. The map becomes not just an artifact of what has happened, but a living thing that can help create more up-to-date information. If the brown spotted pigeon is seen in one area then it will alert more people, who can then add updates on its location and get a more detailed map of its path.
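A sketch of what one of those granular rules might look like as data plus a matcher; the rule fields and the feature shape are invented for illustration.

```python
def matches(rule, feature):
    """Hypothetical notification rule: a bounding box plus attribute equality."""
    x, y = feature["lon"], feature["lat"]
    minx, miny, maxx, maxy = rule["bbox"]
    if not (minx <= x <= maxx and miny <= y <= maxy):
        return False
    return all(feature.get(k) == v for k, v in rule["attributes"].items())

rule = {
    "notify": "sms:+15550100",
    "bbox": (-74.05, 40.55, -73.70, 40.92),
    "attributes": {"species": "brown spotted pigeon"},
}
sighting = {"lon": -73.97, "lat": 40.68, "species": "brown spotted pigeon"}
if matches(rule, sighting):
    print("send update to", rule["notify"])
```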

I’m sure there are many more innovations to be had with tools, but this is just a start of the things that we’re starting to work on and the things I’d like to work on in the future.  At TOPP we’re doing this stuff when we don’t have paid client work (or have met revenue targets for the year, since we’re a non-profit), but if there’s anyone out there who wants to see specific areas accelerate we’d be very excited to take on paid work to do any of the things talked about here <end shameless plug/>.

Collaborative Mapping: Tools

Continuing the collaborative mapping thread, I’d like to think a bit about tools to make this happen. Do a bit of dreaming, and maybe think through how we can get there. Definitely as soon as I start to talk about this people want to do all kinds of crazy synchronization and distributed editing of features. I do think we’ll get there, but I fear going for too much too soon, getting loaded down by over-designing and not addressing the immediate problems. Indeed Open Street Map has proven that if the energy is there the tools just need to do the very basics. I have been putting my energy into getting a standards-based implementation on top of WFS-T, but that’s more because I know it and I like standards. I don’t think it’s the best way to do things, and I don’t even think it should be the default way to do things – at this point I’d prefer something more RESTful. But I believe in being compatible with as much as possible, and there are already nice clients written against WFS-T. So it should always be a route into collaborative editing.

First off, I think we need more user-friendly options for collaborative editing. Not just putting some points on a map, but being able to get a sense of the history of the map, getting logs of changes and diffs of certain actions. Editing should be a breeze, and there should be a number of tools that enable this. Google’s MyMaps starts to get at the ease of editing, but I want it collaborative, able to track the history of edits and give you a visual diff of what’s changed. Rollbacks should also be a breeze – if you have really easy tools to edit, it’s also going to be easier for people to vandalize, so you need to make tools that are even easier to roll back. On the GeoServer extended WFS-T Versioning API we’ve got a rollback operation that can work against an area of the map, a certain property, or a certain user (or combinations of those). Soon we hope to be working on some tools built on top of OpenLayers to handle those operations in a nice editing environment.
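The actual versioning request is XML and I won’t get into its syntax here; as a sketch of the shape of the operation – a rollback scoped by revision, user and area – here is a toy model with invented parameter names.

```python
# Hypothetical sketch of the parameters a rollback call needs; the real
# GeoServer Versioning WFS request is XML, and its element names differ.
rollback = {
    "layer": "roads",
    "to_version": 1041,                      # revision to restore
    "user": "vandal42",                      # only undo this user's edits...
    "bbox": (-74.05, 40.55, -73.70, 40.92),  # ...and only inside this area
}

def apply_rollback(history, req):
    """Toy model: filter out the offending edits. A real rollback would instead
    write new reverting changes so the history itself stays intact."""
    def in_bbox(e):
        minx, miny, maxx, maxy = req["bbox"]
        return minx <= e["lon"] <= maxx and miny <= e["lat"] <= maxy
    return [e for e in history
            if e["version"] <= req["to_version"]
            or e["user"] != req["user"]
            or not in_bbox(e)]
```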

The next step on user-friendly options will be desktop applications that aren’t full GIS, but that let users easily edit. These can leverage the tools of existing open source GIS desktop environments, like uDig and QGIS, but can strip down the interface to just be simple editing environments with a few hard-coded background layers. You could have branded environments for specific layers of information. And ideally build other kinds of reporting tools that also leverage the same GIS tools, but in an interface geared towards the task at hand, like search and rescue or tracking birds. The other thing I hope to work on is getting some of the editing hooked up with Google Earth. I just learned there’s a COM API that might allow us to hack something in, or we can try to get Google Earth to support POSTing of KML to arbitrary URLs as Sean suggests.

Next I’d like to see integration with ‘power tools’, the full-on, expensive-ass GIS applications that are the realm of ‘professionals’. Not that I have a huge love for those tools, but I’d really like to engage as many people as possible in collaborative mapping. GIS professionals are a great target audience, since most of them are already passionate about mapping. They have a lot of expertise to bring to the table. And while some of them can be elitist about collaborative mapping and ‘lesser’ tools, so too can many of the amateurs turn up their noses at people who aren’t DIY. At the extremes it can obviously be a major divide, but I think both could have a lot to teach each other if they’re willing to listen. But I believe the first step to get there is to get the ‘power tools’ compatible with the collaborative mapping protocols, so you start them off in collaboration. This is one reason I’m an advocate of the WFS-T approach, as there are plugins for ArcGIS and other heavy desktop GISes. I think we could see some professionals get really excited about collaborative mapping, as it could become the thing they are passionate about and do in their free time, something that is fun and helps boost their resume. This is how many open source contributions work now; it’s a complex interplay that includes professional development. Perhaps one’s collaborative mapping contributions could help land jobs in the future.

I’d also like to see more automation available in the process. This is an area that could use a lot of experimentation: how much to automate, how much to let humans collaborate on. But I think there’s an untapped area in figuring out vector geometries from the aggregated tracks of GPS, cell phone and wifi positioning data. People are generating tons of data every single day, and most of it is not even recorded. It’s great when people take a GPS, decide explicitly to map an area, and then go online and digitize it. But we could potentially get even more accurate than just one person’s GPS by aggregating all the data over a road. Good algorithms could extract the vector information, including turn restriction data, since they could figure out that 99% of fast-moving tracks are going in the same direction. Of course we’ll still need people to add the valuable attribute information, but this way they’d have a nice geometry already in place.
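A toy version of that aggregation idea: bin the GPS points into small grid cells, keep only cells with enough observations as a crude centerline, and use the consensus of headings to guess whether the road is one-way. Real algorithms are far more involved; this just shows the shape of the approach.

```python
import math
from collections import defaultdict

def aggregate_tracks(tracks, cell=0.0005):
    """tracks: lists of (lon, lat) points from many trips over the same road.
    Returns crude centerline points plus a one-way guess from heading consensus."""
    cells = defaultdict(list)
    headings = []
    for track in tracks:
        for (lon0, lat0), (lon1, lat1) in zip(track, track[1:]):
            key = (round(lon1 / cell), round(lat1 / cell))
            cells[key].append((lon1, lat1))
            headings.append(math.atan2(lat1 - lat0, lon1 - lon0))
    centerline = [
        (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
        for pts in cells.values() if len(pts) >= 3   # require a few observations
    ]
    # If nearly all segments head the same way, suspect a one-way street.
    mean = math.atan2(sum(math.sin(h) for h in headings),
                      sum(math.cos(h) for h in headings))
    agreement = sum(math.cos(h - mean) > 0 for h in headings) / len(headings)
    return centerline, agreement >= 0.99
```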

You could also do feature extraction from satellite and aerial imagery. This is obviously a tough problem that many people are working on, but perhaps it too could be improved by leveraging human collaboration. In a system with good feedback, people could help train the feature extraction to improve over time. It also could be valuable to do automated change detection, which then notifies people that something’s changed in the area, so they can figure out the proper action.

The final area I think we could improve with automation is prevention of vandalism and silly mistakes. GeoServer had work done by Refractions a few years ago on an automatic validation engine. Unfortunately this has languished with no documentation, but it’s still part of GeoServer. One can define arbitrary rules to automatically reject bad transactions – geometries that intersect badly, roads without names, etc. This could also reject things like ‘Chris Rulez’ scrawled over the whole of the US, as it could know that no real roads run in completely straight lines for over 200 miles. I could imagine a whole nice chain of rules to ensure that all edits meet certain quality criteria. And perhaps, instead of rejecting outright, any edit that doesn’t follow all the rules could go into a sandbox. I could also imagine some sort of continuous integration system, once there is topology, to check network validity, and other quality assurance pieces that can’t take place instantly.
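A sketch of what a couple of those rules could look like. The feature representation and thresholds are invented, but the ‘roads need names’ and ‘no implausibly long, perfectly straight roads’ checks are the kind of thing such a chain would run:

```python
import math

def has_name(feature):
    return bool(feature["attributes"].get("name", "").strip())

def not_a_scrawl(feature, max_straight_miles=200):
    """Reject geometries that are perfectly straight for an implausible length."""
    coords = feature["geometry"]          # [(lon, lat), ...]
    if len(coords) < 2:
        return False
    (x0, y0), (x1, y1) = coords[0], coords[-1]
    # Collinearity: every vertex sits on the line between the endpoints.
    straight = all(abs((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) < 1e-9
                   for x, y in coords)
    length_miles = math.hypot(x1 - x0, y1 - y0) * 69    # crude degrees -> miles
    return not (straight and length_miles > max_straight_miles)

RULES = [has_name, not_a_scrawl]

def validate(feature):
    """Run the rule chain; failures could reject the edit or route it to a sandbox."""
    return [rule.__name__ for rule in RULES if not rule(feature)]
```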

Ok, I’ll wrap this post up for now, will continue this thread soon.

Collaborative Mapping: The Business Thread, cont.

So if there is a future where collaborative mapping could be economically competitive, how do we go about actually getting there? I actually think we’re further along than many might think, though I believe there is still a lot of work to be done, innovating with the tools, communities and workflows to make this happen. But I’ll address that in another post; for now I just want to present a possible path for collaborative mapping to bootstrap into the mainstream. I’m going to focus on street maps, since that’s the information that people pay big money for, and there is already early success with Open Street Map. Later posts will examine how the lessons learned there can feed into other domains and back.

So step 0 is proving that it’s possible for a diverse group of people to collaborate on an openly licensed map. I’d be hard pressed to entertain any arguments that Open Street Map has not already accomplished this. Of course in its current state you can’t navigate a car on it, and you’re not going to do emergency vehicle response with it. But their driving principle has been that they ‘just want a fscking map’, and a map they do have. There are many contributors running around with GPSs and creating a map.

The next point in the evolution is when the map is good enough for basic ‘context’. Again, OSM is already there for several parts of the world. If you’re doing a mashup of your favorite neighborhoods you don’t really care if all the streets are there. You just need enough that it looks about like your neighborhood on other maps. Many mashups use Google Maps and others in this way – which is sorta like using the same quality water to flush your toilet as comes out of your kitchen sink (USA!). Which is to say a bit of a waste, but who really cares if someone else is paying for it.

Which speaks to another tipping point: when the big portals start putting ads on their maps, or when they start charging to use their APIs. I concede that this may never happen, that it’s a good loss leader to have people using your API for free as long as they put their maps out in the public. But a part of me feels like we may be in a period of the GeoWeb like the first web bubble, when you could get $10-off coupons from CDNow and B+N, allowing you to buy any CD you wanted for a few bucks. It wasn’t going to last, but it sure was fun while it did. At some point there may be a shift when they need to make some money, which could drive more energy to collaborative maps as people look to get ads off their service.

The next step starts to get fun: a collaborative street map getting good enough for basic routing and navigation. Right now it seems to be (though I could be wrong, I don’t know the OSM community intimately) that people set out to add data to the map because they want to get their area mapped. If they go to new areas they’ll bring a GPS along, but it’s often to a totally unmapped area. I think once large areas start to get close to completion we’ll have people cobble together makeshift car navigation kits: a laptop with a GPS and the collaborative map, either connected over some kind of wireless internet or downloaded to the car. One can drive around with this and it will show one’s place on the map, and directions to the end point as well. Note that this kind of usage is currently prohibited with Google Maps or any of the others who get their data from commercial providers. From the API agreement: ‘In addition, the Service may not be used: (a) for or with real time route guidance (including without limitation, turn-by-turn route guidance and other routing that is enabled through the use of a sensor’. This is because the commercial mapping providers make big money off of car navigation, and license the (exact same) data to do that at a higher price.

With basic navigation on a collaborative map in place you can get people excited about heading off into a ‘new frontier’, going off the map and tapping into their inner Lewis and Clark. Actively encourage people to dérive (though I’m not sure how much the Situationists really would like the idea of people using cars to dérive) into uncharted areas of the map.

On other fronts, I believe that we’ll see niche areas getting high quality mapping. Governments and companies will realize that if there’s a map that’s 80% done, they just need to fund the last 20%, and owning the map is not their key value proposition, then they’ll just look to fund the collaborative map instead of doing it themselves. Those that can think long term will realize that this will almost always be cheaper, since they won’t have to keep paying to get it up to date. With a good collaborative structure much of that will happen on its own, and they may put a bit extra in each year. And in areas where a few different organizations all partner up it will definitely be cheaper. Already we’re seeing some enlightened folks fund Open Street Map contributors to have a mapping party and map an area.

We’ll also likely see collaborative maps for niche verticals. If you’re doing walking maps then you don’t need the turn restriction information required for car routing, for example. Someone may offer a map of the best drives in Southern California, which would be a subset of the main map. Or a detailed map of which roads need to be plowed after a snowstorm, leaving out the roads that don’t.

After that I think you’ll see people hacking commercial nav systems to make use of the collaborative map, and then navigation companies offering low price versions of their systems that don’t rely on the commercial data.  Already we’re seeing navigation companies start to ‘leverage user contributions’, with TomTom’s ‘MapShare‘ to let people update points of interest and the like, and Dash Navigation‘s ability to leverage GPS from other cars to see if a new road has opened up.  I think you may see people even more excited about this if they knew their work was going to a common good instead of just to the advantage of one company.

Once people are able to ‘correct’ the map that they’re driving on I believe we’ll see a really big tipping point. Build in some voice recognition to call out the name of a street while you’re driving. This could be billed as the ‘mapping game’, where one gets points for driving new areas. One could even imagine a company that sets up a business around a sort of ‘bounty navigation’, where you can actually make money if you drive new areas of the map and do good reporting of road names and the like. This could be one of the decoupled functions of the economics around collaborative map making: the navigation company partners with the company that guarantees the map is up to date, and instead of contracting out another company to drive the roads they just put money rewards on driving in new areas. People could make it so their navigation is free, or even have it be like the electrical grid, where if you generate a lot of extra navigation information they pay you. I haven’t thought through all the details of this, but I think it could work, and it would be super cool for helping people think of geospatial data as a commons that one can contribute to, that we’re all responsible for and can be a part of, not just consumers of a service.

Which speaks a bit to a further point: when governments realize that they can tap into and contribute to this as well. The Census Bureau spends a ton of money keeping up-to-date road information. But their data is not entirely accurate, and it doesn’t include turn restrictions. Instead of maintaining their own database they could combine with an open map, and plug into that workflow. Indeed, in the US such a map likely would have started from their TIGER/Line files anyway. So government organizations can join the ecosystem, likely just as funders contracting out other companies to perform the work, as they are starting to do more and more with open source software. Some may want to try to do it themselves, but the smart ones will plug into existing ecosystems.

The other tipping point towards the end will be when the big mapping providers decide to invest in collaborative maps. I had initially been thinking that things would need to be really far along worldwide before they’d make the switch, but a more likely path is that they use it in conjunction with their commercial maps. They already make use of Tele Atlas and NAVTEQ in different places. So as long as the collaborative map didn’t have a restriction about combining with other sources, they could just use it in places that have poor coverage from the major providers. And they could see where areas of the map are close to being done and strategically fund those. Another potential source of investment in this kind of mapping could be aid agencies in areas that commercial providers haven’t mapped. They could hook up their GPSs to gather information, and then employ a few people to help process and QA it to make maps they can use. Since it’s not a core value proposition for them they can share it with others, and start to build really good street maps in areas that no one has touched because it’s too hard for the money they would get. I would love to try a start-up in Africa that hooks up the map-correcting car navigation systems to a bunch of vehicles and just starts building the living map. It’d be quite ironic if Africa ended up with more up-to-date maps than Europe.

The key with all this, for me, is the evolution of viewing mapping data as a public good that we all collaborate on to make better. As GPSs become more and more prevalent we are all just emitting maps as we go through our lives. All that’s really needed is a structure to turn that into useful information, getting the tools better and setting up the economic reward structure. I’m not a business person, so I don’t have much more to throw out in terms of economic ideas. But I believe it is possible to set the levers right to encourage this. And I’m going to do my best to get the tools better and better, to show what is possible and get us all moving towards a future where an up-to-date, accurate map is a commons available to all, and that all are a part of.

Collaborative Mapping: The Business Thread

Since I was speaking at Where 2.0, which is a good bit more business oriented than most conferences I attend, I felt that it could be interesting to start to make the business case for how collaborative mapping could succeed.  I didn’t have time to do more than throw out a few ideas, but I think it’s important to start thinking about this.  I fully believe it will happen, though I’m not sure of the time frame at all.  But it’s almost inevitable, since the economic end result is a more efficient allocation of resources.  The only question is whether there will be enough incentives along the way.

At the core is the idea that there will be a series of ‘tipping points’, where it will be cheaper for an organization to fund a collaborative map than it is to buy the data from a commercial provider at the accuracy that they need. The last clause is important, because there are many cases where one just needs data as a base for other information. Many Google Maps mashups just need some context to display the information they’re showing. So if there is a route to get from one tipping point to the next for different sets of organizations, then a collaboratively built map will emerge as competitive with, if not better than, those made by commercial providers. There will be no middleman who bundles all the functions of mapping together and extracts rent after having done the work. Instead there will be a diversity of organizations, including private companies, governments and individuals, who all work together to make accurate, up-to-date maps.

Currently the idea of a collaboratively built map is still quite radical. The work is being done mostly by amateurs (in the best sense of the word) and ‘true believers’, who know that it’s the right way to build open maps. No traditional geodata providers seem to feel threatened at all, at least not yet. This is very much like the early days of open source software: it was seen as some purist movement that would never have a big effect on anything. But people stuck with it and ended up building a huge economic engine of change. I am a true believer who is sure that collaborative mapping can become a more efficient way to build geospatial data, fully supported by a variety of business models. So I’ve been thinking about how we can move from being rebels to becoming the default way of getting things done, as open source software has. It took 20 years to go from starting the movement to mainstream success, and I think we can follow in their footsteps and do it even faster (and indeed leverage open source software as a base to build upon).

Let’s start by fast-forwarding to a future where we have economically successful collaborative maps. Then from there we can look back and see how we might get there, and what tipping points would be involved. We are currently in a stage where business models around open source software are maturing. Building software still costs money, even if it’s open source. But there are a variety of ways to make money even with a collaborative base. The key for me is that the functions of a traditional software company have been decoupled from one another – you can buy support on open source software from one place, a manual on it from somewhere else, and then get training on how to use it from a third organization. None of the functions of a software company have gone away; they’ve just been split up into smaller pieces, and there is competition in the market for each of them, thus making them more efficient and more competitive.

The biggest providers of commercial geospatial data also wrap up a lot of functionality into one package:

  • Pay people to go out and drive the roads to keep the database up to date
  • Find and acquire data from public sources
  • Process the raw data, doing quality assurance
  • Ensure that the information is up to date – i.e. give people someone to sue if it goes wrong
  • Services and consulting – ‘analysis and proprietary research’, ‘business plan reviews and testing services’ (from http://www.navteq.com/developer/index.html)
  • Geolocated yellow pages – ‘placing your business on the map’

Since the commercial databases are not open there is no way to separate out this functionality.  With a collaborative map one could imagine niche companies doing one of them, or new companies that combine some of the processes here with other functions.  Navigation is an obvious one, and indeed TomTom is starting in on this with their MapShare.

So you could have a company that just goes out and drives the roads and turns over the data / adds it to the collaborative map. Clients who want an area of the map to be more up to date could pay them directly. You could also have small businesses who want to be sure that people get accurate directions to them. We could even imagine ‘a man with a van’, but instead of moving furniture it’d be for driving roads with a GPS, and perhaps some cameras strapped to the roof. There then could be a company whose expertise is processing raw GPS data. GM or FedEx might sell the data from their vehicles under permissive terms, and then a company could do a bunch of algorithmic analysis on the data to extract roads, and contribute those to a collaborative map.

Then there could be a class of companies who just provide guarantees: someone you can call up if there’s a problem, someone to get a service level agreement from. They in turn would have internal people to drive roads, or would contract out with the other companies when they needed a certain area to be at the accuracy they’ve agreed to provide to others. You have this in the open source software world, with companies like SpikeSource that test how things work together and give someone a number to call if things go wrong.

And of course the other big open source business model is to provide new services on top of open source software – see Google and Yahoo! and most new internet companies. They contribute to the underlying software, and run the parts that they keep private on top of it. Indeed those very same companies could become significant contributors to collaborative mapping – they already spend significant money on licensing fees to commercial data providers, and if it made economic sense to put the money into a collaborative map instead, they likely would.

One could also imagine a company whose sole purpose is to do accuracy assessments of collaborative maps.  They would play a very key role, in that they’d be able to answer the question for companies ‘should I invest in collaborative mapping?’.  I maintain there is a tipping point for just about every organization, but it will be very painful if the area they need mapped has very poor coverage and they have a small budget.  So for a small fee there could be a company that lets you know how much investment it would take to get the area of interest to the accuracy that the organization needs.

Ok, this post is already long enough; I’ll continue soon with more on the business case, and what steps might evolve us to a place where collaborative mapping is simply the smarter economic choice. But my main point for this post is that it is possible to decouple the function of ‘ownership’ of a set of geospatial data from the functions that are needed for its upkeep. Indeed such a decoupling could easily lead to a more efficient market around the upkeep of the data. One thing I neglected to mention as well is that a collaborative map opens up the potential for non-‘expert’ contributors to do valuable work, as long as the structure is set up to minimize vandalism and the like.