GitHubbing the GeoPackage specification

In the past few months one of the main things I’ve spent time on is the new GeoPackage specification being worked on within the OGC. I was involved in the very early days of its conception, before it was picked up for the OWS-9 testbed, as I feel it has the potential to fill a big hole in the geospatial world (James Fee sums up my feelings nicely). We discussed keeping it super lean, accessible to all, and implementable by non-GIS experts.

While I got wrapped up in other things, a spec for public comment popped out in January, which I felt had strayed from what I believed were the original core design goals. I sent a huge missive on all the changes I felt could get it back to that lean core. I try never to be one who just criticizes, though, as it’s easy to cast stones and much harder to build something strong enough to hold up to others pelting rocks. And my open source roots teach me that criticism without an offer to put effort into the alternative is close to worthless. So I decided to join the GeoPackage ‘Standards Working Group’, participating in weekly (and then twice-weekly) calls, and trying to work with the OGC workflow of wikis and massive Word documents.

One of my main goals was to learn about how the OGC process actually works, and hopefully from a place of knowledge be able to offer some suggestions for improvement from my open source software experience. That’s worth a whole post on its own, so I’ll hold off on much of that for now.

OGC staff has also been great about being open to new ways of working. It’s had to be balanced against actually getting the specification out, as many agencies need it ‘yesterday’. But we achieved an OGC first, putting this version of the specification out on GitHub. My hope is to attract developers who are comfortable with GitHub, who won’t need to learn the whole OGC feedback process just to suggest changes and fixes.

I do believe the specification is ‘pretty good’ – it has improved in almost all the ways I was hoping it would (Pepijn and Keith drove the tech push to ‘simpler’, with Paul doing a great job as Editor and Raj from OGC helping out a bunch). I also believe GeoPackage can improve even more, primarily through feedback from real implementations. My hope is that it does not go to version 1.0 without three full implementations – ones that don’t just handle one part of the document, but implement all the core options (particularly Tiles and Features).

For the OGC this is an experiment, so if you do think OGC specifications could benefit from a more open process you can help encourage them by spending some of your technical brainpower on improving GeoPackage using GitHub (big shout out to Even who already has a bunch of suggestions in). Fork the spec and make Pull Requests, on core tech decisions or on editorial tweaks that make it easier to understand. For those without GitHub experience we wrote some tips on how to contribute without needing to figure out git. We also tried to add a mechanism for ‘extensions’ to encourage innovations on the GeoPackage base, so I’d also love to see attempts to use that and extend for things like Styles (SLD or CSS), Rasters, UTFGrids and Photo Observations.

And if you can’t fully implement it but are interested in integrating, I do encourage you to check out the Apache licensed libgpkg implementation from Luciad. Pepijn has been the technical core of GeoPackage, researching what’s come before and continually building code to test our assumptions. Libgpkg is the result of his work, and the most complete implementation. Justin also has a GeoTools implementation and GeoServer module built on it for the Java crowd. And hopefully someone will implement for GDAL/OGR and PostGIS soon.
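
For a sense of how approachable the format is – a GeoPackage is just a SQLite file – here is a minimal sketch of inspecting one from Python. It assumes the gpkg_contents metadata table from the draft spec; the file name is hypothetical and details may still shift before 1.0.

```python
import sqlite3

# A GeoPackage is a single SQLite file; the draft spec's gpkg_contents table
# lists every tile and feature table it holds. File name is hypothetical.
conn = sqlite3.connect("example.gpkg")
for table_name, data_type, identifier in conn.execute(
        "SELECT table_name, data_type, identifier FROM gpkg_contents"):
    print(f"{data_type:10s} {table_name}  ({identifier})")
conn.close()
```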

As always I’ve written too much before I’ve even said all I wanted to. But I will try to do at least one follow-up blog post to dig into some of the details of GeoPackage. The SWG and I welcome an open dialog on the work done and where to take it next. And I’m happy to explain the thinking that got the spec to where it is, since we wanted to keep the spec itself slim and not include all that detail. We are hoping to eventually publish some ‘best practice’ papers that dig into ideas we considered but discarded as not quite ready.


OpenGeo.org

So within the GeoServer community we’re debating whether it’s kosher to do a fairly blatant ‘commercial’ announcement on the blog about OpenGeo.org. But in the meantime I figured I’d just announce here, since I can do whatever I want on my personal blog 🙂

I’m really excited to present OpenGeo, the newly minted geospatial division of The Open Planning Project. Nothing much is changing internally, but we’re getting serious about our image in the world. We’ve been supporting open source geospatial projects for years, and in the past couple years we’ve offered great consulting services around the projects we work on. But it’s always been confusing for people who don’t already know our work. They might see an openplans.org email address on the lists, and follow that to web tools for community organizers, click from there on The Open Planning Project logo, linking to a high tech non-profit, then maybe click projects, and see ‘GeoServer’, which they’ve been using, and from there click on ‘services’ to realize that yes, you can in fact pay us money to work on this. I’m not convinced that anyone made it that far.

So OpenGeo.org is about giving a more visible face to our services and products, so we can bring the geospatial work in TOPP to economic sustainability with full cost recovery. It also marks the launch of ‘GeoServer Enterprise’ packages, which bundle web and telephone support, priority bug fixes, discount consulting rates, and a number of implementation hours by the experts. This covers the full OpenGeo stack – OpenLayers, GeoWebCache and GeoServer. We’re hoping this makes it easier for more conservative organizations to embrace open source by establishing something much closer to a traditional vendor relationship. It should provide a clear answer to the classic open source question ‘Who do I call when something goes wrong?’ – with GeoServer Enterprise you call us, the experts who wrote the software. We believe the total package is incredibly competitive with any offering, proprietary or open source, for the level of service and expert access provided – you’re not paying for the intellectual property of something we built in the past, you’re paying directly for our time now. We already have two clients on board, which is very exciting. The website will be improved with demos and more content, but we’re quite pleased with what we’ve got. Recently many contracts have been coming in, and the OpenGeo launch should help us grow even more. So if you’re looking for a job at a geo dot-org, drop me a line – we’re definitely hiring all types of roles, and soon we should update the website with specifics.

Letting everyone remix web maps

I’ve been meaning to lay down my little vision of how web mapping configuration should work for a while. It’s not a big idea, but a nice little way to perhaps bring more participation into the creation of web maps, making it possible for anyone to make mash-ups. I believe Google Mapplets gets at a lot of this, but I’d like to add on standards-based services that can bring in real GIS layers and restyle the base maps.

I should start with a view of either a decent base map (like a nice Blue Marble layer) or a view that someone else composed. I can browse around, or I can go to ‘edit mode’ to configure more of what I want to see. From there I can specify a WMS, a WFS/WCS, a GeoRSS feed or a KML. If there are multiple layers on the service I should be able to specify which one I want to add. I can add in the various services, and maybe even add some annotations. Once I compose a view that I like I can hit ‘save’, and it’ll save the map to my user account, or else export it as an OWS Context document. I should also be able to say ‘embed’ and have it auto-create the OpenLayers JavaScript (pointing at the Context document created) that I can put on my blog or webpage. My annotations will be done as inline KML in the context document.
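
To make that concrete, here is a rough sketch of the kind of request a saved view implies for one of its WMS layers – the endpoint and layer name are made up, and a real Context document would carry these same parameters for each layer it includes.

```python
from urllib.parse import urlencode

# Hypothetical WMS endpoint and layer; a saved view boils down to a handful
# of parameters like these per layer.
wms_url = "https://example.org/geoserver/wms"
params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "topp:states", "STYLES": "",
    "SRS": "EPSG:4326", "BBOX": "-125,24,-66,50",
    "WIDTH": 800, "HEIGHT": 400, "FORMAT": "image/png",
}
print(f"{wms_url}?{urlencode(params)}")
```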

Next, if the services are WFS/WCS, or WMS that allows remote SLD, I can change the styles of the layers I brought in. I might want to emphasize one layer more than others, or even just a particular feature of the layer. I can easily do thematic mapping as well, specifying which attribute to classify with quantiles or equal intervals, using the colors I choose. This remote SLD can be saved to the Context document directly, so I don’t even need permissions on a server. If I have permissions on the WMS server I can then upload the new SLD as an option for others.
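
The classification itself is simple enough to sketch. Here is what equal-interval breaks with a chosen color ramp look like, using made-up attribute values; a UI would just turn the resulting ranges and colors into SLD rules.

```python
def equal_interval_breaks(values, classes=5):
    """Return (low, high) class breaks for an equal-interval thematic map."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / classes
    return [(lo + i * step, lo + (i + 1) * step) for i in range(classes)]

# Hypothetical attribute values and a light-to-dark color ramp.
populations = [12000, 45000, 3000, 880000, 150000, 67000, 9500]
colors = ["#ffffcc", "#c2e699", "#78c679", "#31a354", "#006837"]
for (low, high), color in zip(equal_interval_breaks(populations), colors):
    print(f"{low:>9.0f} - {high:>9.0f}  {color}")
```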

Past that, I can get better performance on the map I configured by pointing the configuration at a GeoWebCache or TileCache instance that I have set up or have appropriate rights on. I can completely drive configuration of GeoWebCache through the web UI, to set which layers to turn on and off. I can even start it seeding the layers, or have it expire parts of the cache, based on a bounding box and a number of zoom levels.
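
Seeding by bounding box and zoom level matters because tile counts balloon quickly. A back-of-the-envelope sketch in the standard XYZ tiling scheme, using a made-up city-sized box:

```python
import math

def tiles_in_bbox(lon_min, lat_min, lon_max, lat_max, z):
    """Number of tiles covering a lon/lat box in the standard XYZ scheme."""
    def tile_xy(lon, lat):
        n = 2 ** z
        x = int((lon + 180.0) / 360.0 * n)
        lat_r = math.radians(lat)
        y = int((1 - math.log(math.tan(lat_r) + 1 / math.cos(lat_r)) / math.pi) / 2 * n)
        return x, y
    x0, y1 = tile_xy(lon_min, lat_min)  # lower latitude -> larger row number
    x1, y0 = tile_xy(lon_max, lat_max)
    return (x1 - x0 + 1) * (y1 - y0 + 1)

total = 0
for z in range(18):
    count = tiles_in_bbox(-74.3, 40.5, -73.7, 40.95, z)  # hypothetical bounds
    total += count
    print(f"zoom {z:2d}: {count:>8d} tiles")
print("total:", total)
```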

Then if I have a GeoServer set up, or at least layer creation rights on a server, I can start creating new data through WFS-T. This is beyond annotations: I can create a new layer that persists on the server, centralizing its editing. This is important because I can also set the edit permissions so other users can start editing the same layer. I can also choose to turn on versioning, to keep a history of which users make which edits. I can control the other user permissions, opening it up to everyone or just a select few. All data I create is available as WMS/WFS/WCS, KML, GeoRSS, GeoJSON, etc.
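
Under the hood, ‘creating new data through WFS-T’ is just a Transaction posted at the server. A minimal sketch of an Insert – the endpoint, namespace and feature type here are invented for illustration:

```python
import requests  # assumes the third-party 'requests' package

# Hypothetical endpoint, namespace and feature type; the wfs:Insert structure
# itself follows WFS-T 1.0.
insert = """<wfs:Transaction service="WFS" version="1.0.0"
    xmlns:wfs="http://www.opengis.net/wfs"
    xmlns:gml="http://www.opengis.net/gml"
    xmlns:topp="http://www.openplans.org/topp">
  <wfs:Insert>
    <topp:sightings>
      <topp:species>brown spotted pigeon</topp:species>
      <topp:geom>
        <gml:Point srsName="EPSG:4326">
          <gml:coordinates>-73.99,40.73</gml:coordinates>
        </gml:Point>
      </topp:geom>
    </topp:sightings>
  </wfs:Insert>
</wfs:Transaction>"""

resp = requests.post("https://example.org/geoserver/wfs", data=insert,
                     headers={"Content-Type": "text/xml"})
print(resp.status_code)
```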

I can also upload a shapefile or GeoTIFF right through the web interface. Using the same styling utilities I can configure it to look as I like and make that the default style. I can choose to turn caching on there too. I can invite others to start editing the shapefile I created. I can also always export it as a shapefile, or any other format. All these more advanced layers are also saveable as a ‘view’, as an OWS Context document.

Whenever I put the JavaScript map anywhere it will always have a ‘configure the map’ button, for others to start to remix. Once they hit the ‘configure the map’ button they get a nice listing of the layers and data that make up the map. They can restyle and change layers to their own liking with no permissions. And if they want to start caching or adding data they can get rights on a server, or just set up their own server. Through this I hope we can make it incredibly easy for anyone to create their own map, and add a sort of ‘view source’ to maps that anyone can use, not just JavaScript programmers. I feel this gets beyond mere ‘wizards’, which take you through a set path to add the portions you want, to a more flexible remixing environment that can hopefully encourage lightweight maps that aren’t hard to create. I believe this should also be the way the majority of people configure their GeoServers, though we should also have a more advanced admin interface to change all the little settings.

In terms of technology, we’re actually quite close to this vision. It really leverages a lot of the standards architecture, utilizing OWS Context, remote SLD and WMS/WFS. We need to add the REST admin part of GeoWebCache, but the ability to upload a shapefile or GeoTIFF is now in a REST interface for GeoServer, thanks to the work of David Winslow. We just need a front end in OpenLayers to edit it. For OWS-5 we did inline KML in Context documents. Versioning is also in place, and while granular security permissions need a bit more work, the framework is there. And we have funding for an AJAX SLD editor, which should be the final major piece of this. Then we just need to iterate through making the UI incredibly intuitive and easy to use. But I’m excited for this vision, to bring the power of ‘real’ GIS – large datasets and styling of basemaps – to the easy-to-use style of the GeoWeb.
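
For the upload piece, here is a sketch of what driving that REST interface from a script could look like. The path follows the current GeoServer REST convention for putting a zipped shapefile into a datastore; the host, workspace, store name and credentials are all placeholders, and the early version of the API may well have differed.

```python
import requests  # assumes the third-party 'requests' package

# Placeholders throughout: host, workspace, store and credentials.
url = ("http://localhost:8080/geoserver/rest/"
       "workspaces/topp/datastores/roads/file.shp")
with open("roads.zip", "rb") as f:          # a zipped shapefile
    resp = requests.put(url, data=f,
                        headers={"Content-Type": "application/zip"},
                        auth=("admin", "geoserver"))
print(resp.status_code)
```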

That’s all for me for now. Two blog posts in the span of less than a week – I guess this means I don’t have to post again for months 😉

Slides from GSDI-10

Just finished up a great week at GSDI-10 in Trinidad. I came down with Justin Deoliveira; he put on a great workshop on Monday, and we both had talks on Thursday. Sometime soon we should post the workshop modules on the GeoServer blog, but I wanted to get my talk online as several people asked about it. It was a fun talk, bringing together a lot of what I spent my time thinking about in Zambia around SDIs, open source and Google Maps/Earth. The conference is great because it gathers many of the heads of national mapping agencies and other important players who work with lots of data and really want to figure out the best ways to share it. I think my talk was really well received; the timing felt right for the message: that we need to figure out how to combine what’s going on in the Google Maps/Virtual Earth world with the broader GSDI initiative into a single GeoWeb. I hope to turn the thoughts I presented into a paper at some point, but in the meantime my too-literal slides do get across much of my argument (sometime I’ll learn how to make visually compelling presentations…)

You can also download the PowerPoint. Reading the slides you may notice the very soft launch of OpenGeo.org, which is going to be our rebranding of the geospatial division of The Open Planning Project, so it’s more clear that we offer services and support packages around GeoServer and OpenLayers. Still lots of work to be done, but we were excited to at least get a start for this conference. Once we get it ready for launch we’ll announce for real.

Oh, and the license for the slide show is Creative Commons Attribution 3.0.

Great post on GeoData licensing

I have some major posts to write, but thankfully there are others out in the world thinking about and writing about the same issues that I care about. Today I found ‘The license: where we are, where we’re going’ on the OpenGeoData blog. It is exactly the post I wanted to write after attending the Science Commons meeting last year, and then it goes on to add even more information that I wasn’t even aware of. It’s really great news to hear about progress being made on the Open Data Commons Database Licence. I really hope they continue to work on it, since I had the same reaction as many OSM community members to the news of Creative Commons’ recently published “protocol” on open access data – it’s great for scientists, but doesn’t help much with what we’re doing in the geospatial community. So thanks Richard for the post and the great work you are all doing over on the OSM legal list.

Producing, not consuming, makes the GeoWeb go ’round.

Aaron on Streetsblog just passed me a link to a post by Steven Johnson on what he calls ‘the pothole paradox’. It’s a very nice introduction to GeoWeb ideas, and clearly explains to laypeople why geospatial metadata on everything could help. But for those of us working on building this open, interoperable GeoWeb it offers very little, except that their stovepipe system is going to attempt to solve it.

Why do I call outside.in a stovepipe? Because as it stands now it only sucks information in. He asserts that to be successful their system must be structured with two spatial properties:

1. Every single piece of data in the system has to be associated with some kind of machine-readable location, as precisely as possible.

2. Where appropriate, data should be associated with a human-readable place (a bar, school, playground, park, or even neighborhood.)

I would maintain that these are true for the broader GeoWeb, a system of systems. And I’m sure outside.in would love it if the entire web was structured that way, with them as the one who aggregates the results. They are in a position to actually do something about it, but as far as I can find they just want it all in their own system.

Why do I draw that conclusion? Mostly because there’s no way to access the geo information that their community is encoding. They have a number of easy options to tag location on blog posts, including the GeoRSS standard that many of us are promoting. And they let people submit non-located stories and easily add location. So they are actively adding that geospatial metadata layer – and then, as far as I can tell, keeping it all for themselves, since there’s no machine-readable output of the locations they’re adding. If I subscribe to a feed of stories about Brooklyn there are no GeoRSS tags. Which means that if I want to make a service that mashes up the geotagged stories from outside.in I have to rewrite all their mechanisms for getting that spatial information.
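
For the curious, the fix is tiny – all it would take is one extra element per item in their existing feeds. A sketch of what a geotagged item could look like (title, link and coordinates invented):

```python
import xml.etree.ElementTree as ET

GEORSS = "http://www.georss.org/georss"
ET.register_namespace("georss", GEORSS)

# One hypothetical feed item with the single georss:point ("lat lon")
# element that would make its location machine readable.
item = ET.Element("item")
ET.SubElement(item, "title").text = "Pothole reported on Smith St"
ET.SubElement(item, "link").text = "http://example.org/stories/123"
ET.SubElement(item, f"{{{GEORSS}}}point").text = "40.6862 -73.9903"
print(ET.tostring(item, encoding="unicode"))
```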

Yes, this is arguably their core ‘value add’, since they have the community that adds all the geospatial metadata. Outside.in would thus love it if the rest of the web had GeoRSS on everything, with their users being the ones who add location to the items that authors did not tag. But if everyone thinks this way then we’re basically stuck in a prisoner’s dilemma.

Thankfully the big players in the GeoWeb have actually been working their way out of this over the last couple of years. Almost two years ago at a Location Intelligence conference GeoRSS was all the buzz. Microsoft and Yahoo! both announced they would support it, and there was a lot of pressure on Google to do so as well. But all seemed utterly unaware that the key is not being able to read GeoRSS; it’s outputting it by default whenever someone uses their API. Google wasn’t yet doing anything with GeoRSS, still trying to make KML the one true format, but they suffered from the same thing – if you used the Google Maps API it would not output any machine-readable format, KML or otherwise.

At the time I stood up and asked the question, and it was weird how something so obvious to me was so hard to explain to them. They thought that just being able to read a standard was the most open thing they could do. But basically none of them were offering their information up for others to read. They all had users who were creating new geospatial information through their APIs, but it wasn’t contributing to ‘the’ GeoWeb, it was just making their own little web better. Each seemed to want to ‘own’ the web, which would constrain it to a bunch of competing projects, instead of a new public infrastructure like the internet. The latter web is the one they all wanted, and why they’re investing so hugely, but none wanted to blink, to be the one to cooperate when the others might defect.

Thankfully the past couple of years have seen some great progress on this. I wrote about Google’s progress after Where 2.0, which is impressive as they are the ones in the lead. Microsoft has arguably done even more; they were the first to take up my call to produce GeoRSS from their native ‘Collections’ API. They’ve also launched the ability to search KML (and GeoRSS I think/hope?) and easily import KML, now that it is becoming an open standard. This might be surprising to some, but one should remember that those who are behind always rush to standards. Microsoft’s IE3 was actually the first commercial browser to implement CSS. We’ve seen a parallel in the geospatial world, with ESRI talking much more about standards, getting their services to implement OGC standards for real instead of just paying lip service. It looks like they are also embracing KML.

On Google’s side, their GeoSearch is now crawling GeoRSS, which is great (and would be able to crawl outside.in’s aggregated feeds if they published them). Unfortunately, looking just now, it appears that their push to get people to make their Google Maps using KML or GeoRSS is no longer a priority. For a while they were pushing this in their talks, like at the last Google Dev Day. It makes a ton of sense that it’s in their interest to do so, since more data produced that way means more data for their GeoSearch. But the documents appear to barely mention it. On the other hand, MyMaps has everything that users create as KML, and that is arguably more important because presumably people who just need to throw up some points and lines will use that. The more complicated uses of the Maps API would need more than KML or GeoRSS anyway. MyMaps could of course be more open, if it were to produce GeoRSS as well. Yahoo!’s APIs seem to have made no progress towards producing GeoRSS, which is unfortunate as they were the first to support reading it.

Ok, this is longer than I had intended (but that’s why I have a blog, so I don’t have to feel bad about writing too much, dammit!). But I hope that outside.in starts producing machine-readable feeds of the awesome geospatial tags their community is gathering, to help lead the growth of the geospatial web instead of just waiting to capture the hyperlocal part of it for their own sandbox. For the GeoWeb to truly succeed we’re going to need a huge number of organizations taking that leap towards cooperation over ownership. Thankfully the big guys seem to be providing decent leadership; I’d give them a solid ‘B’ on their progress the last few months (though Microsoft might even deserve an A).

Collaborative Mapping: Tools, cont.

The next major area of tool improvement I see is expanding the wiki notion of editing to more of a merging revision control model, with branches, versions, patches, and eventually expanding into distributed repositories. A ‘patch’ is a small piece of code that can be applied to a computer program to fix something. Patches are widely used in the open source software world, both to get the latest improvements and to allow those who have commit rights to a source repository to review outside improvements before putting them in. This helps create the meritocracy around projects, as they don’t let just anyone into the repository who might break the build. Such a case is less likely with maps, but sometimes core contributors might want to see a couple of sample patches before letting a new member in. In the GeoServer versioning WFS work we have a GetDiff operation that returns a WFS Transaction that can then be applied to another WFS. This fits the technical part of how a patch works – they’re really easy to apply to one’s dataset. But unfortunately a WFS Transaction is not as easy to read as a code patch. The other great thing about patches is that when leaf nodes are updating their data they can just request the change set – the patches – instead of having to do a full check out. So I’m still not sure how to solve this problem; the WFS Transaction is the best I’ve got, but I think we can do better – a nice little format that just describes what changed.
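
To give a flavor of what I mean, here is a purely hypothetical sketch of what that ‘nice little format’ could look like – one entry per changed feature, carrying only the touched attributes and geometry, plus the revision it applies against:

```python
# A hypothetical geo 'patch': nothing standard, just the shape of the idea.
patch = {
    "applies_to_revision": 1041,
    "changes": [
        {"fid": "roads.1532", "op": "update",
         "attributes": {"road_type": "highway"}},   # only what changed
        {"fid": "roads.2201", "op": "delete"},
        {"fid": None, "op": "insert",
         "attributes": {"road_type": "residential", "name": "Elm St"},
         "geometry": "LINESTRING (-122.41 37.77, -122.40 37.78)"},
    ],
}
print(len(patch["changes"]), "changes against revision",
      patch["applies_to_revision"])
```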

Once we’ve got patches, people are going to want the ability to merge changes. If you made a patch and I made a patch and we both submit them, then we need a way to see if they’re compatible. Ideally you could merge at the feature level – if you change the road type and I change the road length of Interstate 5 then we shouldn’t get a conflict. Even better, merge at the geometry level: if we changed different points on the road then those should merge nicely. This will become important as people start to ‘check out’ their geo repositories, do edits, and then try to submit back in. We could just do locking, which is what WFS-T does, but concurrent versioning is so much nicer – we just have to be able to pull off merging.
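
At the attribute level the merge logic is not mysterious. A minimal three-way merge sketch for a single feature, with invented road attributes, shows why the Interstate 5 example shouldn’t conflict:

```python
def merge_feature_edits(base, mine, theirs):
    """Three-way merge of one feature's attributes.

    Edits to different attributes combine cleanly; edits to the same
    attribute with different values are reported as conflicts.
    """
    merged, conflicts = dict(base), []
    for key in set(mine) | set(theirs):
        m = mine.get(key, base.get(key))
        t = theirs.get(key, base.get(key))
        if m == t:
            merged[key] = m
        elif m == base.get(key):
            merged[key] = t
        elif t == base.get(key):
            merged[key] = m
        else:
            conflicts.append(key)
    return merged, conflicts

# You change the length, I change the road type: no conflict.
base = {"road_type": "street", "length_km": 12.0}
print(merge_feature_edits(base,
                          {"road_type": "street", "length_km": 13.4},
                          {"road_type": "highway", "length_km": 12.0}))
```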

Right past merging is full-on branching, which of course is much easier to pull off if you’ve got nice merging in place. But branches will let people try out new geographic updates in their own sandbox before putting them on the mainline. This can lead to better reviews of the updates. And with nice branching and merging you would be able to let a number of people work concurrently on their own area of the map, merging them seamlessly. This is obviously a really hard problem, one that even ArcSDE has trouble with for the things people actually want to do. I do think we’ll be able to get there in the open source world; indeed I believe we have a better chance of achieving it, since once we get close we’ll see a lot of interest from people who want it completed to meet their needs, funding the iterative improvements.

The final piece, which I sort of don’t even want to think of yet since it’s damn hard, is distributed versioning. I do think it’s extremely important though, to let everyone have their own editing repository, which can flow back into the main one. I like the model a lot, and think it has great wins for geospatial. But since we’ve barely got an SVN equivalent I think it’s wiser to wait a bit on these issues till we sort out what a patch should look like. Indeed SVK was only possible because SVN already existed. But I’m definitely excited by the possibilities, for every node of the map to have the potential to be edited. This can be a big win for areas with low bandwidth.

The next category of tool improvements is granular security settings. Right now there’s not even a way to limit editing the map to only some users. I think that many maps will flourish with the open-to-all editing style, making use of rollbacks to prevent vandalism. But some will likely want to keep the map to a set group of committers. This way one could get commit rights after doing a number of good patches, perhaps ensuring higher quality for some maps. You also might have different permissions for different users on different layers. We should be able to get all of that with our current GeoServer security system; we just need to hook up a UI for it. The trickier thing – limiting users to certain geospatial areas, or to features with specific properties – would be a nice feature, and I think it is possible. Since the security system is integrated at the code level, and lets us use aspects, this should be doable; it will just take a bit of work to figure out.
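
A crude sketch of the area-restriction idea – a real check would use proper geometry intersection rather than bounding boxes, and the area and coordinates here are invented:

```python
def edit_allowed(user_area, feature_bbox):
    """Allow an edit only if the feature's box sits inside the user's area.

    Both are (min_x, min_y, max_x, max_y); a bounding-box check stands in
    for a real geometry containment test.
    """
    uxmin, uymin, uxmax, uymax = user_area
    fxmin, fymin, fxmax, fymax = feature_bbox
    return (uxmin <= fxmin and uymin <= fymin and
            fxmax <= uxmax and fymax <= uymax)

# Hypothetical: this user may only edit features around Lusaka.
lusaka = (27.9, -15.7, 28.6, -15.2)
print(edit_allowed(lusaka, (28.2, -15.45, 28.3, -15.40)))  # True
print(edit_allowed(lusaka, (30.0, -17.0, 30.1, -16.9)))    # False
```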

Another area where I see a lot of potential innovation is distributed processing of tiles. Tiles are the clear winner for how to display geospatial information; Google Maps has raised the bar so that anything that isn’t tiled just feels out of date. But tiling takes a ton of processing power. Google is all set up to do it, but the rest of us aren’t. To fully cache http://sigma.openplans.org to zoom level 17 would have taken me about 5 months. OpenStreetMap has been making tremendous strides on this with their Tiles@Home initiative, which I am very impressed by. OSM is lucky in many ways, in that they have a project that people want to devote their spare CPU cycles to. It could be cool to set up marketplaces for processing of tiles, where companies that are going to keep their data private, or that just don’t have the reputation of OSM, can engage other nodes and give them micropayments for their work. Another area of potential innovation is leveraging Amazon’s EC2 to process huge amounts of tiles. We’re also going to need to have the collaborative mapping stuff hook up with the tiling efforts, so that when there are massive edits the tiles can expire themselves and get processors started on generating new ones. We can likely leverage HTTP’s Conditional GET functionality to let browsers and others cache geospatial data, but still get the most up-to-date data when it’s available.
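
The Conditional GET piece is plain HTTP: a client remembers the Last-Modified value it saw for a tile and sends it back, so an unchanged tile costs a cheap 304 instead of a re-render and re-download. A sketch against a made-up tile URL:

```python
import requests  # assumes the third-party 'requests' package

url = "http://example.org/tiles/roads/12/1205/1539.png"  # hypothetical tile

first = requests.get(url)
last_modified = first.headers.get("Last-Modified")

# Re-request with If-Modified-Since; a cache-aware server answers 304.
headers = {"If-Modified-Since": last_modified} if last_modified else {}
second = requests.get(url, headers=headers)
if second.status_code == 304:
    print("tile unchanged, use the cached copy")
else:
    print("tile re-fetched,", len(second.content), "bytes")
```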

The last area I’d like to see improvement on is more granular notification mechanisms. GeoRSS output is the obvious choice, but one could also do email or SMS notifications. Speaking of which, I’d love more innovation on mobile clients, and even super low-tech versions, like being able to SMS in a new or updated location by just entering cross streets or reading a position from GPS. But one should be able to base the notifications on very granular rules – ‘send updates for highways in this bounding box’, or ‘email me all occurrences of the brown spotted pigeon along this river bank’. This would be useful not only for preventing vandalism, but also to enable people to take action on up-to-date reports. The map becomes not just an artifact of what has happened, but a living thing that can help create more up-to-date information. If the brown spotted pigeon is seen in one area then it will alert more people, who can then add updates on its location and produce a more detailed map of its path.
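
Those rules are easy to picture as data: each subscription pairs an attribute test with a bounding box, and every edit is checked against them. A small sketch with invented subscriptions and an invented edit:

```python
def inside(bbox, x, y):
    xmin, ymin, xmax, ymax = bbox
    return xmin <= x <= xmax and ymin <= y <= ymax

# Hypothetical subscriptions: an attribute test plus a bounding box each.
subscriptions = [
    {"contact": "sms:+15551234567", "attr": ("road_type", "highway"),
     "bbox": (27.9, -15.7, 28.6, -15.2)},
    {"contact": "mail:birders@example.org",
     "attr": ("species", "brown spotted pigeon"),
     "bbox": (28.0, -15.6, 28.4, -15.3)},
]

def notify(edit):
    """Return contacts whose rules match an edit ({'x', 'y', 'attrs'})."""
    matches = []
    for sub in subscriptions:
        key, value = sub["attr"]
        if edit["attrs"].get(key) == value and inside(sub["bbox"], edit["x"], edit["y"]):
            matches.append(sub["contact"])
    return matches

print(notify({"x": 28.28, "y": -15.41,
              "attrs": {"species": "brown spotted pigeon"}}))
```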

I’m sure there are many more innovations to be had with tools, but this is just a start on the things that we’re beginning to work on and the things I’d like to work on in the future. At TOPP we’re doing this stuff when we don’t have paid client work (or have met revenue targets for the year, since we’re a non-profit), but if there’s anyone out there who wants to see specific areas accelerate we’d be very excited to take on paid work to do any of the things talked about here <end shameless plug/>.