Workshop on Linked Spatiotemporal Data 2010 (Call for Papers)
Whilst the Web has changed with the advent of the Social Web from mostly authoritative towards increasing amounts of user-generated content, it is essentially still about linked documents. These documents provide structure and context for the described data and ease their interpretation. In contrast, the upcoming Data Web is about linking data, not documents. Such data sets are not bound to a specific document but can easily be combined and used outside of their original context. With a growth rate of millions of new facts encoded as RDF triples per month, the Linked Data cloud allows users to answer complex queries spanning multiple sources. Due to the uncoupling of data from its original creation context, semantic interoperability, identity resolution, and ontologies are central methodologies to ensure consistency and meaningful results. Space and time are fundamental ordering relations for structuring such data and provide an implicit context for their interpretation. Prominent geo-related Linked Data hubs include Geonames.org as well as the LinkedGeoData project, which provides an RDF serialization of OpenStreetMap. Furthermore, myriad other Linked Data sources contain location-based references.

This workshop aims at introducing the GIScience audience to the Linked Data Web and at discussing the relation between the upcoming Linked Data infrastructures and existing Spatial Data Infrastructures based on OGC services. The workshop results will directly contribute to the ongoing work of the NeoGeo Semantic Web Vocabularies Group, an online group focused on the construction of a set of lightweight geospatial ontologies for Linked Data. Overall, the workshop should help to better define the data, knowledge representations, reasoning methodologies, and additional tools needed to link locations seamlessly into the Web of Linked Data. With the advent of “linked locations” in Linked Data, the gap between the Semantic Web and the Geo Web will begin to narrow.
More information, including important dates, relevant topics, and submission procedures, can be found at the workshop homepage: http://stko.psu.edu/lstd2010/
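To give an idea of what such “linked locations” look like in practice, here is a minimal sketch: a made-up local data set that points to GeoNames via owl:sameAs, queried with SPARQL using rdflib. The resource name, coordinates, and GeoNames identifier are purely illustrative.

```python
# Minimal sketch: a made-up local data set linking a place to GeoNames,
# queried with SPARQL. All identifiers here are illustrative.
from rdflib import Graph

TURTLE = """
@prefix ex:  <http://example.org/places/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix wgs: <http://www.w3.org/2003/01/geo/wgs84_pos#> .

ex:Muenster a wgs:SpatialThing ;
    wgs:lat "51.9625" ;
    wgs:long "7.6256" ;
    owl:sameAs <http://sws.geonames.org/2867543/> .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

# Which external resources does the local data set link to?
QUERY = """
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT ?place ?target WHERE { ?place owl:sameAs ?target . }
"""
for place, target in g.query(QUERY):
    print(place, "is linked to", target)
```

Once such links are in place, a query engine can follow them to pull in population figures, alternative names, or nearby features from the remote source, which is exactly the kind of cross-source query the workshop is about.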
Haiti Response
I don’t want to call it a trend, but the number of websites trying to help in whatever way they can regarding Haiti is soaring. It reminded me a bit of open source: nowadays every company tries to have its own token free software project to be accepted in the “community”. Maybe this weird first impression was the reason I ignored all this news on Slashgeo and the various mailing lists. The Haiti Crisis Map aggregates some of the donated data sources. Very interesting: it gives you a good idea of what is actually possible (and it also illustrates at what a low level we operate in our research projects…).
We have made our most recent development, the Semantic Annotations Proxy, available as a free service. Annotations link from metadata documents to external, shared vocabularies such as ontologies or gazetteers. The proxy “injects” these references into existing source documents coming from remote Web services, without the need to update existing systems. The Semantic Annotations Proxy is available at http://semantic-proxy.appspot.com/
Some of its features:
- It is fast: We work directly on the data streams; sophisticated parsing and caching ensure high performance.
- It is robust: Changes to the source document, e.g. due to updates of the underlying data models, won’t affect the annotations.
- It is reliable: The proxy runs in the cloud, which ensures scalability and availability.
- It is free: This is a free service, and all code developed for the proxy is open source.
- It is growing: For now we support WSDL Web services and the OGC Web Processing Service. Support for other standards is planned, and we will add new document types on request.
For more information, visit the open source project’s website at http://my-trac.assembla.com/sapience.
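To illustrate the basic idea behind the proxy, here is a simplified sketch (not the actual implementation): fetch an XML document from a remote service and inject a reference to a shared vocabulary on the fly, leaving the source system untouched. The service URL, the concept URI, and the use of xlink:href are assumptions made for this example.

```python
# Simplified sketch of the injection idea (not the proxy's actual code).
# The service URL, the concept URI, and the xlink:href encoding are
# placeholders for illustration.
import urllib.request
import xml.etree.ElementTree as ET

SERVICE_URL = "http://example.org/wps?service=WPS&request=GetCapabilities"
CONCEPT_URI = "http://example.org/ontologies/hydrology#WaterLevel"

XLINK_NS = "http://www.w3.org/1999/xlink"
ET.register_namespace("xlink", XLINK_NS)

# Fetch the untouched source document from the remote Web service...
with urllib.request.urlopen(SERVICE_URL) as response:
    tree = ET.parse(response)

# ...and inject the reference to the external vocabulary on the fly,
# so existing clients of the service keep working unchanged.
root = tree.getroot()
root.set(f"{{{XLINK_NS}}}href", CONCEPT_URI)

print(ET.tostring(root, encoding="unicode"))
```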
DOLCE Foundational Ontology in WSML
We are using the DOLCE ontology as the framework for most of our ontologies, and we use WSML most of the time for writing them down (there are strong reasoners, it is more flexible, and I can do it in Eclipse ;)). Well, we have always had our own local copy of a WSML DOLCE ontology, and I thought it’s time we shared it. I have also created a new project for serving our ontologies (at least temporarily).
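In case you have never seen WSML: below is a tiny, hypothetical sketch of what aligning a domain concept with DOLCE looks like in WSML syntax. The namespaces, IRIs, and concept names are placeholders, not the ones from our copy.

```
wsmlVariant _"http://www.wsmo.org/wsml/wsml-syntax/wsml-flight"
namespace { _"http://example.org/observation#",
            dolce _"http://example.org/dolce#" }

ontology _"http://example.org/observation"
     importsOntology { _"http://example.org/dolce" }

concept Sensor subConceptOf dolce#PhysicalObject
     observes ofType dolce#Quality
```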
E-Book Reader
Last week I bought myself a Sony e-book reader (the Touch Edition). Sometimes I just love reading fiction (especially when it’s snowing outside like now), but I hate buying books which then just waste space on my shelf. The books on my shelf are supposed to be either references or books I read several times. But many books are just like movies: you rent them, watch them, and forget them. Well, so I bought this reader, and I am thoroughly impressed. Double-click on any word, and its definition is displayed. It supports highlighting and basic writing functionality (which is enough for, e.g., correcting texts). It’s a bit difficult to read complex documents, e.g. PDFs based on Word documents with lots of tables and diagrams. Well, it’s not difficult, it’s simply not possible. But for everything else, it is wonderful.
Sapience & Semantic Annotations in OGC Standards
Our OGC Discussion Paper about Semantic Annotations in Standards is finally online; the final edits took us long enough. But we are already planning the next version. We discuss how to insert links to external vocabularies or other documentation into existing OGC-compliant metadata (and even data such as KML). We soon realized that the theoretical discussion requires some practical implementation. All our implementations, ranging from semantic data integration to semantic validation of service compositions, assume that the Web services or data sets are “already” semantically annotated. We didn’t really care where the semantic annotations came from (we of course thought about user interfaces and so on, but not so much about the implications for existing implementations).
Our new (open source) project sapience has been initiated to address this issue. From the website (I am too lazy to write it all over again):
“The Semantic Annotations API (sapience) comprises libraries giving application developers a simple and fast way to extend their applications with semantic functionality. Existing applications with complex data models usually lack ways to describe the meaning of the data, which unnecessarily impairs the exchange of data across different applications. This is especially the case for Web services serving arbitrary content which have to be integrated into other applications. Due to our background in Geoinformatics, the libraries have a strong focus on supporting the annotation of geospatial content compliant with standards published by the Open Geospatial Consortium (OGC). The various libraries have either been developed within scientific research projects, or are results of thesis implementations (ranging from BSc to PhD). The implementations have been refactored and simplified for sapience.”
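To give a rough idea of what such an annotation can look like, here is a schematic KML example (not necessarily the exact encoding proposed in the paper): KML 2.2 features may carry atom:link elements, and such a link can point to a concept in an external vocabulary. The ontology URI below is made up.

```xml
<!-- Schematic sketch: a KML feature linking to an external vocabulary.
     The ontology URI is a placeholder, and this is not necessarily the
     exact encoding proposed in the discussion paper. -->
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:atom="http://www.w3.org/2005/Atom">
  <Placemark>
    <name>Gauge station</name>
    <atom:link rel="related"
               href="http://example.org/ontologies/hydrology#GaugeStation"/>
    <Point><coordinates>7.6256,51.9625</coordinates></Point>
  </Placemark>
</kml>
```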
Modelling animal spreading to track down Osama?
Look here: N 33.901944°, E 70.093746°
When we applied a distance-decay model to his last known location from 2001, the FATA – or Federally Administered Tribal Area – of Kurram had the highest probability of hosting bin Laden (98%). There were 26 city islands within a 20-km radius of his last known location in northwestern Kurram. Parachinar figured as the largest and the fourth-least isolated city. Nightlight imagery also shows that Parachinar is the closest city to his last known location and by far the brightest city by nightlight intensity in Kurram. When we undertook a systematic building search in the city of Parachinar, this approach resulted in three structures that meet all six of them [the hypothesized structural characteristics] and 16 structures that meet five of them.
source: http://web.mit.edu/mitir/2009/online/finding-bin-laden.pdf
Sounds interesting, right? Geography professor Thomas Gillespie claims that he can pinpoint Osama bin Laden using some serious spatial analysis techniques. I just briefly scanned through the paper. He is using distance-decay models and some knowledge about Osama himself (e.g. he has diabetes and needs a dialysis machine, which makes the stories about him hiding in a remote cave not really believable). Proves how important our work really is…
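For the curious: the core of such a distance-decay model is simply a probability that drops with distance from the last known location. Here is a toy sketch; the paper’s actual model and parameters differ, and the exponential form and decay rate are my own assumptions for illustration.

```python
# Toy distance-decay sketch (not the paper's actual model): the
# probability weight of a candidate location drops exponentially
# with its distance from the last known location. The decay rate
# is made up for illustration.
import math

def distance_decay_weight(distance_km: float, rate: float = 0.01) -> float:
    """Relative probability weight at a given distance."""
    return math.exp(-rate * distance_km)

for d in (10, 50, 200, 1000):
    print(f"{d:>5} km -> {distance_decay_weight(d):.3f}")
```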
Poster FOIS 2008
On Friday the conference “Formal Ontology in Information Systems” is going to start. This is the first time I’ve prepared a poster about my PhD for a conference; I usually give talks. I thought it might be a good idea to put my thoughts into a poster, since it is not really difficult (for me) to wrap up my ideas in an article. Posters are more challenging (if you take them seriously). You have to omit all the difficult stuff and get to the point. You need a catchy introduction, but also some more in-depth details which keep the viewer interested. Besides that, it’s simply more fun to play around with Inkscape.
Today was the deadline for Google’s Project 10^100. They collect unique ideas for projects, with the main criterion being that an idea should help as many people as possible. Well - “why not just try it out” - I thought, and applied for it. Since there was an (optional) video link, and the accepted ideas get voted on by the community, I installed Camtasia again and produced the following video. What do you think?
According to Wikipedia, desire lines (or social trails) are the trails which manifest on the surface when people heading for a certain destination take a shortcut through the grass. This great post further explains the concept of desire lines and discusses why landscape architects should take this human behavior into account before planning pathways. The map of Michigan State University shows an example where architects waited until desire lines emerged and paved them afterwards.
Sometimes we GIScientists struggle with the phenomenon of the rise of geospatial applications on the web (e.g. Google Maps, OpenStreetMap, Flickr, …). In the past we were told that research targets the theoretical foundations of GIScience, and that the stuff we develop is years ahead of the products available on the market. But the web, virtual globes, and the abundance of GPS devices changed everything. It all developed so fast that we have a hard time catching up. In fact, all we can do is react and try to understand what’s going on.
I like the analogy between desire lines and the ongoing social phenomena of the web. In the past we were the architects who decided how people interact with the concept of space; today we need to observe human behaviour on the web and study the emerging desire lines. And once we know what people care about, we can try to pave the ways and provide the theoretical foundations for the geospatial web.