Opening Esri

So I’ve been meaning to write a post on this ever since I had a great talk with Andrew Turner, who recently joined Esri. He was expressing a good bit of optimism about Esri’s willingness to embrace ‘open’. My worry is that they will embrace the language of open but not actually do the key things that would make a real open ecosystem. It’s easy to take just enough action to convince more naive decision makers of your intentions – esri.github.io, for example, looks like lots of open source projects, but is mostly code samples and API examples that are useless without pieces of closed software. So I wanted to give Esri a measurable roadmap of actions that would signal to me a real commitment to ‘open’.

That was a couple months ago, but then life got in the way, as tends to happen with lots of planned blog posts. In the meantime, though, there’s been a fascinating flare-up around the GeoServices REST specification in the OGC (most conversation is behind OGC walls, but see the open letters and lengthy discussion on OSGeo). And it similarly makes me want to address ‘Esri’ as an entity. The individuals matter less; I believe that most of them want to do the right thing. But the corporation as a whole takes action in the world, and that big picture matters more than what individuals within it are trying to do. So here goes:

Dear Esri,
I imagine you’ve been pretty confused and flummoxed by all the pushback to the GeoServices REST specification. It is a pretty solid piece of technology, and theoretically is a great step towards opening up Esri more – putting previously proprietary interfaces into an open standard.
The key thing to understand, though, is that unfortunately very few people in this industry trust you. I am actually fairly unusual in that I give you a whole lot more benefit of the doubt than most. In my life I try as hard as I can to judge people only on their direct interactions with me, not on whatever others say about them. And though you aren’t exactly a person, I still try to judge you the same way. And I personally have never had a truly bad interaction with you.
But amazingly, just about everyone I’ve encountered in the broader geospatial industry has. When I bring up that I’ve never been mistreated by you, they have the utmost confidence that I inevitably will be. I have always been fascinated by this, as I believe it goes far beyond ‘typical competition’, which is one explanation people at Esri have offered me – something like ‘they’re just jealous’. The amount of venom that otherwise totally decent people will throw out is incredible, and it makes it hard for me not to judge harshly when so many people whose judgement I absolutely trust have felt really screwed by you in the past. Multiple times. In really unpleasant and unexpected ways. And it’s not just competitors, but partners and clients.
So the fundamental point that needs making with regards to GeoServices REST was articulated well by one of my allies: ‘everyone in the OGC membership is objecting to the REST API because they believe ESRI to be fundamentally conniving and untrustworthy.’ He believes that if the REST interface had been proposed by anyone else then there wouldn’t be a problem – it would likely be an approved OGC standard by now. But people believe it’s a ‘cynical business play by an untrustworthy, well resourced, and predatory business interest.’
I have overall had a relatively positive take on the GeoServices REST API, because I don’t (yet?) believe you to be fundamentally conniving and untrustworthy. You’ve done an amazing amount of good for this industry, but I do think you’ve just lost your way a bit. I imagine you think that putting this great interface into the OGC is a great step in the right direction, and it is, but unfortunately it’s too much too fast. It’s as if the schoolyard bully is suddenly super nice to you – you wonder what’s up his sleeve, how he’s going to screw you this time.
But I believe there’s a path forward, to building up people’s trust so that something like GeoServices REST could be accepted in the OGC. It just has to be slower and more incremental. This list is just what I can think of right now. There is a fundamental principle at work though, which is moving towards an architecture that encourages people to mix and match Esri components with other technology. Not merely implementing open standards to check the box, but building for hybrid environments: QGIS exporting to or editing ArcGIS Server, ArcGIS Desktop doing the same with GeoServer, TileMill styling a file geodatabase and publishing to ArcGIS Online, ArcGIS Desktop styling and publishing to Google Maps Engine or CartoDB, etc, etc. Each piece of Esri technology ideally could be used standalone with other pieces. Stated another way, there should be no lock-in of anything that users create – even their cartography rules. Anything that is created with Esri software should be able to be liberated without extreme pain, in line with the Data Liberation Front (though I think the Google Maps Engine and Earth Enterprise teams also need some help from them; they too deserve a similar letter).
I realize this is a big leap, since it is not the most efficient way to solve the needs of most of your customers, most of whom use only your software. And it is a business risk, since it opens up more potential competition. But it’s also a big business opportunity if done right. And it reaches beyond mere business to being a real force for good in the world, becoming a truly loved company with lots of friends. This list is roughly ordered with the easier wins first, and the later ones should probably build on them. If these actions are taken I will start to defend you to my friends a lot more.
  • Enable W*S services by default in ArcGIS Server. You’ve done a pretty great job with CITE-certified implementations. But as the docs show, they are not enabled by default, though GeoServices REST and KML are.
  • Make GeoJSON an output option for the ArcGIS Server REST implementation (and get the clients all reading it). And then get it in the next GeoServices REST specification version. (A sketch of what that could look like follows this list.)
  • Publish the file format of the file geodatabase. The open API was a really great step, and indeed goes 80% of the way. But many libraries would like to not have to depend on it. We all know the format itself is likely an embarrassing mess, but everyone will forgive, as we’ve all written embarrassing code and formats.
  • Help the coming GeoPackage be the next generation of the file geodatabase. Help it evolve to be the main way Esri software components move data between one another. Your team has been great on it so far, but the key step is to make it a top-level format not just on mobile but also in ArcGIS Desktop (reading and writing it, like a shapefile), Server and Online (as an output format and upload configuration format for both). And I’m hoping your team can join our plug-fest this week, to get to 3 real implementations before version 1.0 of the specification.
  • Stop the marketing posts about how open you are. Open is action and conversation, not a press release or a case study. No need to talk about how we’ll be ‘surprised’ by what you open in the future. Just start opening a lot and let others do the posts for you, and count on that reaching your customers if you’re doing it well.
  • Openly publish the .lyr file format (and any other formats that define cartography and internal data structures for maps). This is probably an internal format that isn’t pretty, but opening it would enable a lot of cool hybrid architectures.
  • Support CartoCSS as an alternative to the .lyr file, and collaborate on advancing it as an open standard, ideally making it the next generation .lyr file. This will likely include a number of vendor extensions, as your rendering capabilities are beyond most everyone else’s. But we can all collaborate on a core. Supporting SLD in Desktop would also be great, but we can also just look to the future.
  • Move WFS and GML support out of the ‘Data Interoperability’ extension and into core format support.
  • Bring support for WFS-T to ArcGIS Desktop, so people can use their desktop to directly edit a WFS-T server. The server side support in ArcGIS Server is great, but the client software also needs to implement the open standards for real interoperability. I think a similar thing might be needed for Desktop to edit directly with GeoServices REST, though maybe that is there now.
  • Support WFS-T in the JavaScript API and iOS SDK (and I guess Flex and Silverlight, since you tend to try to keep all the toolkits the same).
  • Become a true open source citizen. This is another large topic, and as is my tendency I’m already going on too long, so perhaps I will detail it more in another post. I have limited the above to mostly standards, but embracing open source can take you even further. It is much more about contributing to existing open source libraries and building a real community of outside contributors on your projects, not just releasing lots of code on GitHub. Karl Fogel wrote an excellent book on the subject. You are making some progress on this, but it is just a smidgen if you are actually serious about opening up. I will give props for the Flex API, though my cynical friends would say it’s the easiest one to open source as it is dying. So please, do continue to surprise us on the open source front – you just don’t need to talk about it a bunch.
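To make the GeoJSON bullet above concrete, here is a hedged sketch of the kind of call I’d like to make. ArcGIS Server’s query endpoint speaks its own JSON dialect via f=json today; the f=geojson value, the URL and the layer below are hypothetical.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical: ask an ArcGIS Server layer for standard GeoJSON directly.
params = urllib.parse.urlencode({
    "where": "1=1",
    "outFields": "*",
    "f": "geojson",  # the ask: a GeoJSON sibling to the existing f=json
})
url = ("http://example.com/arcgis/rest/services/Roads/MapServer/0/query?"
       + params)

with urllib.request.urlopen(url) as resp:
    collection = json.load(resp)  # a plain GeoJSON FeatureCollection

# Any GeoJSON-speaking client could then consume it with no translation step.
for feature in collection["features"]:
    print(feature["geometry"]["type"], feature["properties"].get("NAME"))
```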
So I believe that doing a majority of these would send a strong signal that Esri is truly changing to be more open, and that submitting the GeoServices REST spec and putting code on GitHub are more than marketing moves.
My personal recommendation for you on the GeoServices REST specification is to back down from OGC standardization for now. Instead, work to evolve the spec in the open, and try again in a year or two. I don’t think any argument is going to win over those against it; only real action will. And continuing to try to force it through will only hurt our global community. I do believe there has been really great progress made on the spec through the OGC process, and the resulting document has a much higher chance of being implemented by others. Perhaps we could make it an ‘incubating’ OGC standard, one that evolves with OGC process but does not yet get the marketing stamp. The key to me is to encourage real implementations of it, and to continue to advance Esri’s implementation as well, but in dialog with other implementers. Everyone can now start with the latest version, but without the backwards compatibility requirement. Call it GeoServices REST 1.0 – a standard that improved through the OGC process but needs more ‘incubation’ before it deserves the full stamp of interoperability. And aim for 1.1 or 2.0 to happen outside the OGC, and once there are 3 solid implementations all influencing its future, resubmit it to the OGC.
One thing that I think could help a lot is to publish and work on the spec on GitHub, where people can more easily contribute. Take a more agile approach to improving it, one that doesn’t depend on a huge contentious vote and lots of OGC process. And when the trust has grown all around, submit it to be an official OGC document. I believe this would also be one of the best signals to the wider world of Esri’s commitment to open: to dialog publicly and then iteratively address the concerns of all objectors in a collaborative process. If done right it will fly through the OGC process the next time, or else by then we will all be working together towards the next generation of APIs that combine the best of GeoServices REST and W*S while cutting down complexity. It will be difficult, but it will ensure that the geospatial industry remains relevant as the whole world becomes geospatial. So let’s all be as open as we can to figure out how to get there together.
Sincerely,
Chris Holmes

(Written as a private citizen who cares about our collective future and believes geospatial has a higher role to play than just an ‘industry’. Views expressed here do not reflect those of my employer or any other organization I am associated with.)


[photo: come in, we’re open]

Letting everyone remix web maps

I’ve been meaning to lay down my little vision of how web mapping configuration should work for a while. It’s not a big idea, but a nice little way to perhaps bring more participation into the creation of web maps, making it possible for anyone to make mash-ups. I believe Google Mapplets gets at a lot of this, but I’d like to add on standards-based services that can bring in real GIS layers and restyle the base maps.

I should start with either a decent base map (like a nice Blue Marble layer) or a view that someone else composed. I can browse around, or I can go into ‘edit mode’ to configure more of what I want to see. From there I can specify a WMS, a WFS/WCS, a GeoRSS feed or a KML. If there are multiple layers on the service I should be able to specify which one I want to add. I can add in the various services, and maybe even add some annotations. Once I compose a view that I like I can hit ‘save’, and it’ll save the map to my user account, or else export it as an OWS Context document. I should also be able to hit ‘embed’ and have it auto-create the OpenLayers JavaScript (pointing at the Context document created) that I can put on my blog or webpage. My annotations will be done as inline KML in the context document.
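To make that ‘save’ step a bit more concrete, here is a minimal sketch of serializing a composed view into a context-style XML document. The element names loosely echo the OGC Web Map Context schema but are illustrative rather than schema-valid, and the layer data is made up.

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def save_view(title, layers, bbox):
    """Serialize a composed map view into a context-style XML document."""
    ctx = Element("ViewContext", version="1.1.0")
    general = SubElement(ctx, "General")
    SubElement(general, "Title").text = title
    SubElement(general, "BoundingBox", SRS="EPSG:4326",
               minx=str(bbox[0]), miny=str(bbox[1]),
               maxx=str(bbox[2]), maxy=str(bbox[3]))
    layer_list = SubElement(ctx, "LayerList")
    for lyr in layers:
        layer = SubElement(layer_list, "Layer", queryable="0", hidden="0")
        SubElement(layer, "Server", service=lyr["service"], href=lyr["url"])
        SubElement(layer, "Name").text = lyr["name"]
        SubElement(layer, "Title").text = lyr["title"]
    return tostring(ctx, encoding="unicode")

# One WMS layer over a Blue Marble base; this string is what 'save' would
# persist to the user account, and what the embed code would point at.
doc = save_view("My remixed map",
                [{"service": "OGC:WMS", "url": "http://example.com/wms",
                  "name": "bluemarble", "title": "Blue Marble"}],
                (-180, -90, 180, 90))
print(doc)
```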

Next, if the services are WFS/WCS, or WMS that allows remote SLD, I can change the styles of the layers I brought in. I might want to emphasize one layer more than others, or even just a particular feature of the layer. I am able to easily do thematic mapping as well, specifying which attribute to classify with quantiles or equal intervals, with the colors I choose. This remote SLD can be saved to the Context document directly, so I don’t even need permissions on a server. If I have permissions on the WMS server I can then upload the new SLD as an option for others.
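As a sketch of that thematic mapping step, the snippet below computes quantile breaks for an attribute and emits one SLD rule per class. The SLD fragment is abbreviated and illustrative, not a complete, schema-valid document.

```python
# Quantile breaks for an attribute, emitted as abbreviated SLD rules.
def quantile_breaks(values, classes):
    vals = sorted(values)
    step = (len(vals) - 1) / classes
    return [vals[round(i * step)] for i in range(classes + 1)]

def sld_rules(attribute, values, colors):
    breaks = quantile_breaks(values, len(colors))
    rules = []
    for low, high, color in zip(breaks, breaks[1:], colors):
        rules.append(f"""
  <Rule>
    <ogc:Filter><ogc:PropertyIsBetween>
      <ogc:PropertyName>{attribute}</ogc:PropertyName>
      <ogc:LowerBoundary><ogc:Literal>{low}</ogc:Literal></ogc:LowerBoundary>
      <ogc:UpperBoundary><ogc:Literal>{high}</ogc:Literal></ogc:UpperBoundary>
    </ogc:PropertyIsBetween></ogc:Filter>
    <PolygonSymbolizer>
      <Fill><CssParameter name="fill">{color}</CssParameter></Fill>
    </PolygonSymbolizer>
  </Rule>""")
    return "".join(rules)

print(sld_rules("population", [100, 250, 900, 1200, 5000, 7500],
                ["#ffffcc", "#a1dab4", "#41b6c4"]))
```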

Past that, I can get better performance on the map I configured by pointing the configuration at a GeoWebcache or TileCache instance that I have set up or have appropriate rights on. I can completely drive configuration of GeoWebcache through the web UI, setting which layers to turn on and off. I can even start it seeding the layers, or have it expire parts of the cache, based on bounding box and number of zoom levels.
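Here is what kicking off a seed job could look like over REST. GeoWebcache’s REST admin piece is still to be built (see the technology paragraph below), so the endpoint and payload here are speculative.

```python
import urllib.request

# Speculative seed request: cache a layer over a bounding box for zooms 0-12.
SEED_REQUEST = """
<seedRequest>
  <name>myview:roads</name>
  <bounds><coords>-74.1,40.6,-73.8,40.9</coords></bounds>
  <zoomStart>0</zoomStart>
  <zoomStop>12</zoomStop>
  <type>seed</type>
  <threadCount>2</threadCount>
</seedRequest>
"""

req = urllib.request.Request(
    "http://example.com/geowebcache/rest/seed/myview:roads.xml",
    data=SEED_REQUEST.encode(), method="POST",
    headers={"Content-Type": "text/xml"})
urllib.request.urlopen(req)
```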

Then, if I have a GeoServer set up, or at least layer creation rights on a server, I can start creating new data through WFS-T. This is beyond annotations: I can create a new layer that persists on the server, centralizing its editing. This is important because I can also set the edit permissions so other users can start editing the same layer. I can also choose to turn on versioning, to keep a history of which users make which edits. I can control the other user permissions, opening it up to everyone or just a select few. All data I create is available as WMS/WFS/WCS, KML, GeoRSS, GeoJSON, etc.
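For reference, the operation underlying that ‘create new data’ step is a plain WFS-T Insert, roughly like the one below. The layer and attribute names are invented; the structure follows WFS 1.0/GML 2 conventions.

```python
# A minimal WFS-T Insert transaction, POSTed to the server's WFS endpoint.
WFS_INSERT = """
<wfs:Transaction service="WFS" version="1.0.0"
    xmlns:wfs="http://www.opengis.net/wfs"
    xmlns:gml="http://www.opengis.net/gml"
    xmlns:myns="http://example.com/myns">
  <wfs:Insert>
    <myns:sightings>
      <myns:species>brown spotted pigeon</myns:species>
      <myns:location>
        <gml:Point srsName="EPSG:4326">
          <gml:coordinates>-73.99,40.73</gml:coordinates>
        </gml:Point>
      </myns:location>
    </myns:sightings>
  </wfs:Insert>
</wfs:Transaction>
"""
```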

I can also upload a shapefile or geotiff right through the web interface.  Using the same styling utilities I can configure it to look as I like and make that the default style.  I can choose to turn caching on there too.  I can invite others to start editing the shapefile I created.  I can also always export it as a shapefile, or any other format.  All these more advanced layers are also saveable as a ‘view’, as an OWS Context document.

Whenever I put the javascript map anywhere it will always have a ‘configure the map’ button, for others to start to remix.  Once they hit the ‘configure the map’ button they get a nice listing of the layers and data that make up the map.  They can restyle and change layers to their own liking with no permissions.  And if they want to start caching or adding data they can get rights on a server, or just set up their own server.  Through this I hope we can make it incredibly easy for anyone to create their own map, and add a sort of ‘view source’ to maps that anyone can use, not just javascript programmers.  I feel this gets beyond mere ‘wizards’, which take you through a set path to add the portions you want, to a more flexible remixing environment that can hopefully encourage more lightweight maps that aren’t hard to create.  I believe this should also be the way the majority of people configure their GeoServers, though we should also have a more advanced admin interface to change all the little settings.

In terms of technology, we’re actually quite close to this vision. It really leverages a lot of the standards architecture, utilizing OWS Context, remote SLD and WMS/WFS. We need to add the REST admin part of GeoWebcache, but the ability to upload a shapefile or GeoTIFF is now in a REST interface for GeoServer, thanks to the work of David Winslow. We just need a front end in OpenLayers to edit it. For OWS-5 we did inline KML in Context documents. Versioning is also in place, and though we need a bit more work on granular security permissions, the framework is there. And we have funding for an Ajax SLD editor, which should be the final major piece of this. Then we just need to iterate on making the UI incredibly intuitive and easy to use. But I’m excited for this vision, to bring the power of ‘real’ GIS – large datasets and styling of basemaps – to the easy-to-use style of the GeoWeb.
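The shapefile upload piece, for example, can be as simple as an HTTP PUT of a zipped shapefile. The path below follows GeoServer’s later REST conventions and may not match the exact interface described here, so treat it as a sketch.

```python
import urllib.request

# PUT a zipped shapefile; the server creates the layer from it.
with open("roads.zip", "rb") as f:
    req = urllib.request.Request(
        "http://example.com/geoserver/rest/workspaces/myview"
        "/datastores/roads/file.shp",
        data=f.read(), method="PUT",
        headers={"Content-Type": "application/zip"})
urllib.request.urlopen(req)
```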

That’s all for me for now. Two blog posts in the span of less than a week – I guess this means I don’t have to post again for months 😉

Slides from GSDI-10

Just finished up a great week at GSDI-10 in Trinidad. I came down with Justin Deoliveira; he put on a great workshop on Monday, and we both had talks on Thursday. Sometime soon we should post the workshop modules on the GeoServer blog, but I wanted to get my talk online as several people asked about it. It was a fun talk, bringing together a lot of what I spent my time thinking about in Zambia around SDIs, open source and Google Maps/Earth. The conference is great because it gathers many of the heads of national mapping agencies and other important players who work with lots of data and really want to figure out the best ways to share it. I think my talk was really well received; the timing felt right for the message: that we need to figure out how to combine what’s going on in the Google Maps/Virtual Earth world with the broader GSDI initiative into a single GeoWeb. I hope to turn the thoughts I presented into a paper at some point, but in the meantime my too-literal slides do get across much of my argument (sometime I’ll learn how to make visually compelling presentations…)

You can also download the PowerPoint. Reading the slides you may notice the very soft launch of OpenGeo.org, which is going to be our rebranding of the geospatial division of The Open Planning Project, so it’s clearer that we offer services and support packages around GeoServer and OpenLayers. Still lots of work to be done, but we were excited to at least get a start for this conference. Once we get it ready for launch we’ll announce it for real.

Oh, and the license for the slide show is Creative Commons Attribution 3.0.

Great post on GeoData licensing

I have some major posts to write, but thankfully there are others out in the world thinking and writing about the same issues that I care about. Today I found ‘The license: where we are, where we’re going’ on the OpenGeoData blog. It is exactly the post I wanted to write after attending the Science Commons meeting last year, and it goes on to add even more information that I wasn’t even aware of. It’s really great news to hear about progress being made on the Open Data Commons Database Licence. I really hope they continue to work on it, since I had the same reaction as many OSM community members to the news of Creative Commons’ recently published ‘protocol’ for open access data – it’s great for scientists, but doesn’t help much with what we’re doing in the geospatial community. So thanks, Richard, for the post and the great work you are all doing over on the OSM legal list.

Producing, not consuming, makes the GeoWeb go ’round.

Aaron on Streetsblog just passed me a link to a post by Steven Johnson on what he calls ‘the pothole paradox’. It’s a very nice introduction to GeoWeb ideas, and clearly explains to laypeople why geospatial metadata on everything could help. But for those of us working on building this open, interoperable GeoWeb it offers very little, except that their stovepipe system is going to attempt to solve it.

Why do I call outside.in a stovepipe? Because as it stands now it only sucks information in. He asserts that to be successful their system must be structured with two spatial properties:

1. Every single piece of data in the system has to be associated with some kind of machine-readable location, as precisely as possible.

2. Where appropriate, data should be associated with a human-readable place (a bar, school, playground, park, or even neighborhood.)

I would maintain that these are true for the broader GeoWeb, a system of systems. And I’m sure outside.in would love it if the entire web were structured that way, with them being the one who aggregates the results. Unfortunately, though they are in a position to actually do something about it, as far as I can find they just want it all in their own system.

Why do I draw that conclusion? Mostly because there’s no way to access the geo information that their community is encoding. They have a number of easy options to tag location on blog posts, including the GeoRSS standard that many of us are promoting. And they let people submit non-located stories and easily add location. They are actively adding that geospatial metadata layer, and then, as far as I can tell, keeping it all for themselves: there’s no machine-readable output of the geospatial locations that they’re adding. If I subscribe to a feed of stories about Brooklyn there are no GeoRSS tags. Which means that if I want to make a service that mashes up the geotagged stories from outside.in I have to rewrite all their mechanisms for getting that spatial information.
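For contrast, here is roughly what one item of such a feed could look like with GeoRSS-Simple tags: a machine-readable point plus a human-readable place name. The story, URL and coordinates are made up.

```python
# The georss namespace would be declared on the enclosing feed element,
# e.g. xmlns:georss="http://www.georss.org/georss".
GEORSS_ITEM = """
<item>
  <title>Pothole on Smith Street finally fixed</title>
  <link>http://example.com/stories/1234</link>
  <georss:point>40.6862 -73.9905</georss:point>
  <georss:featurename>Boerum Hill, Brooklyn</georss:featurename>
</item>
"""
```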

Yes, this is arguably their core ‘value add’, since they have the community that adds all the geospatial metadata. Outside.in would thus love it if the rest of the web had GeoRSS on everything, with their users being the ones who add location to the stories the authors did not tag. But if everyone thinks this way then we’re basically stuck in a prisoner’s dilemma.

Thankfully the big players in the GeoWeb have actually been getting out of this in the last couple of years. Almost two years ago at a Location Intelligence conference GeoRSS was all the buzz. Microsoft and Yahoo! both announced they would support it, and there was a lot of pressure on Google to do so as well. But all seemed utterly unaware that the key is not to be able to read GeoRSS; it’s to output it by default whenever someone uses their API. Google wasn’t yet doing anything with GeoRSS, still trying to make KML the one true format, but they suffered from the same thing – if you used the Google Maps API it would not output any machine-readable format, KML or otherwise.

At the time I stood up and asked the question, and it was weird how something so obvious to me was so hard to explain to them. They thought that just being able to read a standard was the most open thing they could do. But basically none of them were offering their information up for others to read. They all had users who were creating new geospatial information through their APIs, but it wasn’t contributing to ‘the’ GeoWeb, it was just making their own little web better. Each seemed to want to ‘own’ the web, which would constrain it to a bunch of competing projects, instead of a new public infrastructure like the internet. The latter web is the one they all wanted, and why they’re investing so hugely, but none wanted to blink, to be the one to cooperate since the other might defect.

Thankfully the past couple of years have seen some great progress on this. I wrote about Google’s progress after Where 2.0, which is impressive as they are the ones in the lead. Microsoft has arguably done even more: they were the first to take up my call to produce GeoRSS from their native ‘Collections’ API. They’ve also launched the ability to search KML (and GeoRSS I think/hope?) and easily import KML, now that it is becoming an open standard. This might be surprising to some, but one should remember that those who are behind always rush to standards. Microsoft’s IE3 was actually the first commercial implementation of CSS. We’ve seen a parallel in the geospatial world, with ESRI talking much more about standards, getting their services to implement OGC standards for real instead of just paying lip service. It looks like they are also embracing KML.

On Google’s side, their GeoSearch is now crawling GeoRSS, which is great (and would be able to crawl outside.in’s aggregated feeds if they published them). Unfortunately, looking just now, it appears that their push to get people to make their Google Maps using KML or GeoRSS is no longer a priority. For a while they were pushing this in their talks, like at the last Google Dev Day. It makes a ton of sense that it’s in their interest to do so, since more data produced that way means more data for their GeoSearch. But the documents appear to barely mention it. On the other hand MyMaps exposes everything that users create as KML, and that is arguably more important, because presumably people who just need to throw up some points and lines will use that. The more complicated uses of the Maps API would need more than KML or GeoRSS anyway. MyMaps could of course be more open, if it were to produce GeoRSS as well. Yahoo!’s APIs seem to have made no progress towards producing GeoRSS, which is unfortunate as they were the first to support reading it.

Ok, this is longer than I had intended (but that’s why I have a blog, so I don’t have to feel bad about writing too much, dammit!). But I hope that outside.in starts producing machine-readable feeds of the awesome geospatial tags their community is gathering, to help lead the growth of the geospatial web instead of just waiting to capture the hyperlocal part of it for their own sandbox. For the GeoWeb to truly succeed we’re going to need a huge number of organizations taking that leap towards cooperation over ownership. Thankfully the big guys seem to be providing decent leadership; I’d give them a solid ‘B’ on their progress the last few months (though Microsoft might even deserve an A).

Collaborative Mapping: Tools, cont.

The next major area of tool improvement I see is expanding the wiki notion of editing to more of a merging revision control model, with branches, versions, patches, and eventually distributed repositories. The ‘patch’ is a small piece of code that can be applied to a computer program to fix something. Patches are widely used in the open source software world, both to get the latest improvements and to allow those who have commit rights to a source repository to review outside improvements before putting them in. This helps create the meritocracy around projects, as they don’t let just anyone into the repository who might break the build. Such a case is less likely with maps, but sometimes core contributors might want to see a couple of sample patches before letting a new member in. In the GeoServer versioning WFS work we have a GetDiff operation that returns a WFS Transaction that can then be applied to another WFS. This fits the technical part of how a patch works – it’s really easy to apply to one’s dataset. But unfortunately a WFS Transaction is not as easy to read as a code patch. The other great thing about patches is that when leaf nodes are updating their data they can just request the change set – the patches – instead of having to do a full check out. So I’m still not sure how to solve this problem; the WFS Transaction is the best I’ve got, but I think we can do better – a nice little format that just describes what changed.
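Purely as speculation, that ‘nice little format’ might look something like the structure below: feature-level operations with old and new values, readable at a glance. Every key and value here is invented for illustration.

```python
# A speculative geo 'patch': what changed between two revisions of a layer,
# expressed per feature rather than as a raw WFS Transaction.
patch = {
    "layer": "topp:roads",
    "from_revision": 1041,
    "to_revision": 1042,
    "changes": [
        {"op": "update", "fid": "roads.512",
         "properties": {"name": {"old": "Main St", "new": "Main Street"}}},
        {"op": "insert", "fid": "roads.977",
         "geometry": "LINESTRING(-122.40 37.80, -122.39 37.81)",
         "properties": {"name": "New Alley", "type": "residential"}},
        {"op": "delete", "fid": "roads.300"},
    ],
}
```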

Once we’ve got patches, people are going to want the ability to merge changes. If you made a patch and I made a patch and we both submit them, then we need a way to see if they’re compatible. Ideally you could merge at the feature level – if you change the road type and I change the road length of Interstate 5 then we shouldn’t get a conflict. Even better, merge at the geometry level: if we changed different points on the road then those should merge nicely. This will become important as people start to ‘check out’ their geo repositories, do edits, and then try to submit back in. We could just do locking, which is what WFS-T does, but concurrent versioning is so much nicer – we just have to be able to pull off merging.
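A toy version of that feature-level merge, under the assumption that two patches only conflict when they touch the same attribute of the same feature:

```python
# Three-way merge of a feature's properties against a common base version.
def merge_feature(base, yours, mine):
    merged, conflicts = dict(base), []
    for key in set(yours) | set(mine):
        y, m, b = yours.get(key), mine.get(key), base.get(key)
        if y == m:
            merged[key] = y
        elif y == b:            # only my patch touched this attribute
            merged[key] = m
        elif m == b:            # only your patch touched this attribute
            merged[key] = y
        else:
            conflicts.append(key)
    return merged, conflicts

# Your patch changes the road type, mine changes the length: no conflict.
base = {"type": "highway", "length_km": 120}
merged, conflicts = merge_feature(base,
                                  {"type": "interstate", "length_km": 120},
                                  {"type": "highway", "length_km": 121.5})
assert merged == {"type": "interstate", "length_km": 121.5} and not conflicts
```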

Right past merging is full-on branching, which of course is much easier to pull off if you’ve got nice merging in place. Branches will let people try out new geographic updates in their own sandbox before putting them in the mainline. This can lead to better reviews of the updates. And with nice branching and merging you would be able to let a number of people work concurrently on their own areas of the map, merging them seamlessly. This is obviously a really hard problem, one that even ArcSDE has trouble with for the things people actually want to do. I do think we’ll be able to get there in the open source world; indeed I believe we have a better chance of achieving it, since once we get close we’ll get a lot of interest from people wanting it completed to meet their needs, funding the iterative improvements.

The final piece, which I sort of don’t even want to think about yet since it’s damn hard, is distributed versioning. I do think it’s extremely important though, to let everyone have their own editing repository which can flow back into the main one. I like the distributed model a lot, and think it has great wins for geospatial. But since we’ve barely got an SVN equivalent I think it’s wiser to wait a bit on these issues till we sort out what a patch should look like. Indeed SVK was possible because SVN already existed. But I’m definitely excited by the possibilities, for every node of the map to have the potential to be edited. This can be a big win for areas with low bandwidth.

The next category of tool improvements is granular security settings. Right now there’s not even a way to limit editing the map to only some users. I think that many maps will flourish with the open-to-all editing style, making use of rollbacks to prevent vandalism. But some will likely want to keep the map to a set group of committers. This way one could earn commit rights after doing a number of good patches, perhaps ensuring higher quality for some maps. You also might have different permissions for different users on different layers. We should be able to get all of that with our current GeoServer security system; we just need to hook up a UI for it. The trickier piece, which would be a nice feature and I think is possible, is limiting users to certain geospatial areas or to features with specific properties. Since the security system is integrated at the code level, and lets us use aspects, this should be possible – it will just take a bit of work to figure out.
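Here is a toy sketch of that last kind of rule: an edit is allowed only if the user has write rights on the layer and the edited geometry falls inside the area granted to them. Bounding boxes stand in for real polygons, and all names are invented.

```python
# Per-user, per-layer spatial grants: (minx, miny, maxx, maxy).
GRANTS = {("anna", "topp:roads"): (-123.0, 37.0, -122.0, 38.0)}

def may_edit(user, layer, x, y):
    """Allow an edit only inside the user's granted area for this layer."""
    box = GRANTS.get((user, layer))
    return box is not None and box[0] <= x <= box[2] and box[1] <= y <= box[3]

assert may_edit("anna", "topp:roads", -122.4, 37.8)
assert not may_edit("anna", "topp:roads", -121.0, 37.8)  # outside her area
```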

Another area where I see a lot of potential innovation is distributed processing of tiles. Tiles are the clear winner for how to display geospatial information; Google Maps has raised the bar so that anything that isn’t tiled just feels out of date. But tiling takes a ton of processing power. Google is all set up to do it, but the rest of us aren’t. To fully cache http://sigma.openplans.org to zoom level 17 would have taken me about 5 months. OpenStreetMap has been making tremendous strides on this with their Tiles@Home initiative, which I am very impressed by. OSM is lucky in many ways, in that they have a project that people want to devote their spare CPU cycles to. It could be cool to set up marketplaces for processing of tiles, where companies that are going to keep their data private, or that just don’t have the reputation of OSM, can engage other nodes and give them micropayments for their work. Other areas of potential innovation include leveraging Amazon’s EC2 to process huge numbers of tiles. We’re also going to need to have the collaborative mapping stuff hook up with the tiling efforts, so that when there are massive edits the tiles can expire themselves and get processors started on generating new ones. We can likely leverage HTTP’s Conditional GET functionality to let browsers and others cache geospatial data, but also get the most up-to-date data when it’s available.
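The Conditional GET piece is plain HTTP, along these lines: revalidate a cached tile with its ETag and only download a new one when the server says it changed. The URL is a placeholder.

```python
import urllib.error
import urllib.request

def fetch_tile(url, cached_body=None, cached_etag=None):
    """Fetch a tile, reusing the cached copy if the server says 304."""
    req = urllib.request.Request(url)
    if cached_etag:
        req.add_header("If-None-Match", cached_etag)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read(), resp.headers.get("ETag")
    except urllib.error.HTTPError as err:
        if err.code == 304:        # not modified: keep the cached tile
            return cached_body, cached_etag
        raise

body, etag = fetch_tile("http://example.com/tiles/12/1205/1539.png")
```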

The last area I’d like to see improvement on is more granular notification mechanisms. GeoRSS output is the obvious choice, but one could also do email or SMS notifications. Speaking of which, I’d love more innovation on mobile clients, even super-low-tech versions like being able to SMS in a new or updated location by just entering cross streets or reading a position from GPS. But one should be able to base the notifications on very granular rules – ‘send updates for highways in this bounding box’, or ‘email all occurrences of the brown spotted pigeon along this river bank’. This would be useful not only for preventing vandalism, but also for enabling people to take action on up-to-date reports. The map becomes not just an artifact of what has happened, but a living thing that can help create more up-to-date information. If the brown spotted pigeon is seen in one area then it will alert more people, who can then add updates on its location and get a more detailed map of its path.
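Those granular rules could be as simple as the sketch below: each rule names a layer, an attribute filter and a bounding box, and a matching edit fires a notification (which could go out as GeoRSS, email or SMS). Everything here is invented for illustration.

```python
RULES = [
    {"user": "ranger@example.com", "layer": "sightings",
     "where": {"species": "brown spotted pigeon"},
     "bbox": (-122.6, 37.7, -122.3, 37.9)},
]

def notify(edit):
    """Yield the subscribers whose rules match an incoming edit."""
    x, y = edit["location"]
    for rule in RULES:
        in_box = (rule["bbox"][0] <= x <= rule["bbox"][2]
                  and rule["bbox"][1] <= y <= rule["bbox"][3])
        matches = all(edit["properties"].get(k) == v
                      for k, v in rule["where"].items())
        if edit["layer"] == rule["layer"] and in_box and matches:
            yield rule["user"]

hits = list(notify({"layer": "sightings",
                    "location": (-122.5, 37.8),
                    "properties": {"species": "brown spotted pigeon"}}))
assert hits == ["ranger@example.com"]
```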

I’m sure there are many more innovations to be had with tools, but this is just a start – the things we’re beginning to work on and the things I’d like to work on in the future. At TOPP we do this stuff when we don’t have paid client work (or have met revenue targets for the year, since we’re a non-profit), but if there’s anyone out there who wants to see specific areas accelerate we’d be very excited to take on paid work to do any of the things talked about here <end shameless plug/>.

Collaborative Mapping: Tools

Continuing the collaborative mapping thread, I’d like to think a bit about the tools to make this happen – do a bit of dreaming, and maybe think through how we can get there. Definitely, as soon as I start to talk about this, people want to do all kinds of crazy synchronization and distributed editing of features. I do think we’ll get there, but I fear going for too much too soon, getting loaded down by over-designing and not addressing the immediate problems. Indeed, OpenStreetMap has proven that if the energy is there the tools just need to do the very basics. I have been putting my energy into a standards-based implementation, on top of WFS-T, but that’s more because I know it and I like standards. I don’t think it’s the best way to do things, and I don’t even think it should be the default way to do things – at this point I’d prefer something more RESTful. But I believe in being compatible with as much as possible, and there are already nice clients written against WFS-T. So it should always be a route into collaborative editing.

First off, I think we need more user-friendly options for collaborative editing. Not just putting some points on a map, but being able to get a sense of the history of the map, getting logs of changes and diffs of certain actions. Editing should be a breeze, and there should be a number of tools that enable this. Google’s MyMaps starts to get at the ease of editing, but I want it collaborative, able to track the history of edits and give you a visual diff of what’s changed. Rollbacks should also be a breeze – if you have really easy tools to edit, it’s also going to be easier for people to vandalize, so you need to make tools that are even easier to roll back with. On the GeoServer extended WFS-T versioning API we’ve got a rollback operation that can work against an area of the map, a certain property, or a certain user (or combinations of those). Soon we hope to be working on some tools built on top of OpenLayers to handle those operations in a nice editing environment.
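To illustrate, a rollback request in that extended API takes roughly the shape below: pick an area, a user, and a revision to roll back to. This is a loose illustration of the operation’s parameters, not the exact syntax of GeoServer’s versioning schema.

```python
# Roll back one user's edits within a bounding box to an earlier revision.
ROLLBACK = """
<wfs:Transaction service="WFS" version="1.1.0"
    xmlns:wfs="http://www.opengis.net/wfs"
    xmlns:wfsv="http://www.opengis.net/wfsv"
    xmlns:ogc="http://www.opengis.net/ogc"
    xmlns:gml="http://www.opengis.net/gml">
  <wfsv:Rollback typeName="topp:roads" user="vandal42"
                 toFeatureVersion="1041">
    <ogc:Filter>
      <ogc:BBOX>
        <ogc:PropertyName>the_geom</ogc:PropertyName>
        <gml:Envelope srsName="EPSG:4326">
          <gml:lowerCorner>-74.1 40.6</gml:lowerCorner>
          <gml:upperCorner>-73.8 40.9</gml:upperCorner>
        </gml:Envelope>
      </ogc:BBOX>
    </ogc:Filter>
  </wfsv:Rollback>
</wfs:Transaction>
"""
```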

The next step on user-friendly options will be desktop applications that aren’t full GIS, but that let users easily edit. These can leverage the tools of existing open source desktop GIS environments, like uDig and QGIS, but strip down the interface to a simple editing environment with a few hard-coded background layers. You could have branded environments for specific layers of information. And ideally you could build other kinds of reporting tools that also leverage the same GIS tools, but in an interface geared towards the task at hand, like search and rescue or tracking birds. The other thing I hope to work on is getting some of the editing hooked up with Google Earth. I just learned there’s a COM API that might allow us to hack something in, or we can try to get Google Earth to support POSTing of KML to arbitrary URLs as Sean suggests.

Next I’d like to see integration with the ‘power tools’ – the full-on, expensive-ass GIS applications that are the realm of ‘professionals’. Not that I have a huge love for those tools, but I’d really like to engage as many people as possible in collaborative mapping. GIS professionals are a great target audience, since most of them are already passionate about mapping. They have a lot of expertise to bring to the table. And while some of them can be elitist about collaborative mapping and ‘lesser’ tools, so too can many of the amateurs raise their noses at people who aren’t DIY. At the extremes it can obviously be a major divide, but I think both could have a lot to teach each other if they’re willing to listen. The first step to get there is to make the ‘power tools’ compatible with the collaborative mapping protocols, so you start them off in collaboration. This is one reason I’m an advocate of the WFS-T approach, as there are plugins for ArcGIS and other heavy desktop GIS packages. I think we could see some professionals get really excited about collaborative mapping, as it could become the thing they are passionate about and do in their free time, something that is fun and helps boost their resume. This is how many open source contributions work now; it’s a complex interplay that includes professional development. Perhaps one’s collaborative mapping contributions could help land jobs in the future.

I’d also like to see more automation available in the process. This is an area that could use a lot of experimentation: how much to automate, and how much to leave to human collaboration. But I think there’s an untapped area in figuring out vector geometries from the aggregated tracks of GPS, cell phone and wifi positioning data. People are generating tons of data every single day, and most of it is not even recorded. It’s great when people take a GPS, decide explicitly to map an area, and then go online and digitize it. But we could potentially get even more accurate than one person’s GPS by aggregating all the data over a road. Good algorithms could extract the vector information, including turn restriction data, since they could figure out that 99% of fast-moving tracks are going in the same direction. Of course we’ll still need people to add the valuable attribute information, but this way they’d have a nice geometry already in place.
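A toy version of that direction-consensus idea, under made-up thresholds: if nearly all fast tracks along a segment head the same way, infer a one-way restriction.

```python
import math

def infer_oneway(bearings_deg, speeds_kmh, min_speed=20, threshold=0.99):
    """Return the consensus bearing if enough fast tracks agree, else None."""
    fast = [b for b, s in zip(bearings_deg, speeds_kmh) if s >= min_speed]
    if not fast:
        return None
    # Mean direction via vector averaging, then count tracks within 90 degrees.
    mx = sum(math.cos(math.radians(b)) for b in fast) / len(fast)
    my = sum(math.sin(math.radians(b)) for b in fast) / len(fast)
    mean = math.degrees(math.atan2(my, mx)) % 360
    agree = sum(1 for b in fast
                if min((b - mean) % 360, (mean - b) % 360) <= 90)
    return mean if agree / len(fast) >= threshold else None

# 40 fast northbound tracks and one slow southbound one: call it one-way.
bearings = [10, 8, 12, 9] * 10 + [190]
speeds = [45] * 40 + [5]
print(infer_oneway(bearings, speeds))  # roughly 9.75, i.e. one-way north
```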

You could also do feature extraction from satellite and aerial imagery. This is obviously a tough problem that many people are working on, but perhaps it could also be improved by leveraging human collaboration. In a system with good feedback, people could perhaps help train the feature extraction to improve over time. It also could be valuable to do automated change detection, which then notifies people that something’s changed in the area, so they can figure out the proper action.

The final area I think we could improve with automation is prevention of vandalism and silly mistakes. GeoServer had work done by Refractions a few years ago on an automatic validation engine. Unfortunately this has languished with no documentation, but it’s still part of GeoServer. One can define arbitrary rules to automatically reject bad transactions – geometries that intersect badly, roads without names, etc. This could also reject things like ‘Chris Rulez’ scrawled over the whole of the US, as it could know that no real roads run in completely straight lines for over 200 miles. I could imagine a whole nice chain of rules to ensure that all edits meet certain quality criteria. And perhaps, instead of rejecting them straight up, any edit that doesn’t follow all the rules could go into a sandbox. I could also imagine some sort of continuous integration system, once there is topology, to check network validity, plus other quality assurance pieces that can’t take place instantly.
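In the spirit of that engine, here is a toy rule that rejects a ‘road’ running perfectly straight for an implausible distance. The coordinates are assumed to be lon/lat degrees and the distance conversion is deliberately rough.

```python
import math

def plausible_road(coords, max_straight_km=320):
    """Reject a road whose vertices are all collinear over a huge distance."""
    (x0, y0), (x1, y1) = coords[0], coords[-1]
    collinear = all(abs((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) < 1e-9
                    for x, y in coords)
    length_km = math.hypot(x1 - x0, y1 - y0) * 111  # rough degrees -> km
    return not (collinear and length_km > max_straight_km)

straight = [(0.0, 40.0), (1.5, 40.0), (3.0, 40.0)]   # ~333 km, dead straight
print(plausible_road(straight))  # False: suspiciously long and straight
```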

Ok, I’ll wrap this post up for now; I’ll continue this thread soon.