Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

May 25, 2012

Image compositing in TileMill

Filed under: Geographic Data,GIS,Maps — Patrick Durusau @ 10:40 am

Image compositing in TileMill by Kim Rees.

From the post:

TileMill is a tool that makes it easy to create interactive maps. Soon they will be adding some new features that will treat maps more like images in terms of modifying the look and feel. This will allow you to apply blending to polygons and GIS data.

BTW, a direct link for TileMill.

On a brief glance, the TileMill site is very impressive.

Are you tying topic maps to GIS or other types of maps?

May 23, 2012

New UMBEL Release Gains schema.org, GeoNames Capabilities

Filed under: Geographic Data,GeoNames,Schema.org,UMBEL — Patrick Durusau @ 2:12 pm

New UMBEL Release Gains schema.org, GeoNames Capabilities by Mike Bergman.

From the post:

We are pleased to announce the release of version 1.05 of UMBEL, which now has linkages to schema.org [6] and GeoNames [1]. UMBEL has also been split into ‘core’ and ‘geo’ modules. The resulting smaller size of UMBEL ‘core’ — now some 26,000 reference concepts — has also enabled us to create a full visualization of UMBEL’s content graph.

Mapping to schema.org

The first notable change in UMBEL v. 1.05 is its mapping to schema.org. schema.org is a collection of schema (usable as HTML tags) that webmasters can use to markup their pages in ways recognized by major search providers. schema.org was first developed and organized by the major search engines of Bing, Google and Yahoo!; later Yandex joined as a sponsor. Now many groups are supporting schema.org and contributing vocabularies and schema.

You will appreciate the details of the writeup and like the visualization. Quite impressive!

PS: As if you didn’t know:

http://umbel.org/

This is the official Web site for the UMBEL Vocabulary and Reference Concept Ontology (namespace: umbel). UMBEL is the Upper Mapping and Binding Exchange Layer, designed to help content interoperate on the Web.

April 7, 2012

Rediscovering the World: Gridded Cartograms of Human and Physical Space

Filed under: Geographic Data,Mapping,Maps — Patrick Durusau @ 7:43 pm

Rediscovering the World: Gridded Cartograms of Human and Physical Space by Benjamin Hennig.

Abstract:

‘We need new maps’ is the central claim made in this thesis. In a world increasingly influenced by human action and interaction, we still rely heavily on mapping techniques that were invented to discover unknown places and explore our physical environment. Although the traditional concept of a map is currently being revived in digital environments, the underlying mapping approaches are not capable of making the complexity of human-environment relationships fully comprehensible.

Starting from how people can be put on the map in new ways, this thesis outlines the development of a novel technique that stretches a map according to quantitative data, such as population. The new maps are called gridded cartograms as the method is based on a grid onto which a density-equalising cartogram technique is applied. The underlying grid ensures the preservation of an accurate geographic reference to the real world. It allows the gridded cartograms to be used as basemaps onto which other information can be mapped. This applies to any geographic information from the human and physical environment. As demonstrated through the examples presented in this thesis, the new maps are not limited to showing population as a defining element for the transformation, but can show any quantitative geospatial data, such as wealth, rainfall, or even the environmental conditions of the oceans. The new maps also work at various scales, from a global perspective down to the scale of urban environments.

The gridded cartogram technique is proposed as a new global and local map projection that is a viable and versatile alternative to other conventional map projections. The maps based on this technique open up a wide range of potential new applications to rediscover the diverse geographies of the world. They have the potential to allow us to gain new perspectives through detailed cartographic depictions.

I found the reference to this dissertation in Fast Thinking and Slow Thinking Visualisation and thought it merited a high profile.

If you are interested in mapping, the history of mapping, or proposals for new ways to think about mapping projections, you will really appreciate this work.

March 18, 2012

Gisgraphy

Filed under: Geo Analytics,Geographic Data,Geographic Information Retrieval,Gisgraphy — Patrick Durusau @ 8:53 pm

Gisgraphy

From the website:

Gisgraphy is a free, open source framework that offers the possibility to do geolocalisation and geocoding via Java APIs or REST webservices. Because geocoding is nothing without data, it provides an easy to use importer that will automagically download and import the necessary (free) data to your local database (Geonames and OpenStreetMap: 42 million entries). You can also add your own data with the Web interface or the importer connectors provided. Gisgraphy is production ready, and has been designed to be scalable (load balanced), performant and used in other languages than just Java: results can be output in XML, JSON, PHP, Python, Ruby, YAML, GeoRSS, and Atom. One of the most popular GPS tracking systems (OpenGTS) also includes a Gisgraphy client…read more

Free webservices:

  • Geocoding
  • Street Search
  • Fulltext Search
  • Reverse geocoding / street search
  • Find nearby
  • Address parser

Services that you could use with smart phone apps or in creating topic map based collections of data that involve geographic spaces.
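If you want to experiment, calling the REST services from Python is about as simple as it gets. A minimal sketch, assuming a local Gisgraphy install; the endpoint path, parameters, and response keys are assumptions to verify against the Gisgraphy documentation:

# A hedged sketch of calling a Gisgraphy-style geocoding service.
# The base URL, parameter names, and response keys below are
# assumptions -- verify them against the Gisgraphy documentation.
import requests

BASE = "http://localhost:8080/geocoding/geocode"  # hypothetical local install

def geocode(address, country="US"):
    """Return candidate locations for a free-text address."""
    resp = requests.get(BASE, params={"address": address,
                                      "country": country,
                                      "format": "json"})
    resp.raise_for_status()
    return resp.json()

for hit in geocode("1600 Pennsylvania Ave, Washington DC").get("result", []):
    print(hit.get("lat"), hit.get("lng"), hit.get("name"))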

February 22, 2012

Look But Don’t Touch

Filed under: Data,Geographic Data,Government Data,Transparency — Patrick Durusau @ 4:48 pm

I would describe the Atlanta GIS Data Catalog as a Look But Don’t Touch system, a contrast to DC’s efforts at transparency.

From the webpage:

GIS Data Catalog

Atlanta GIS creates and maintains many GIS data sets (also known as “layers” because of the way they are layered one on top of another to create a map) and collects others from external sources, mostly other government agencies. Each layer represents some class of geographic feature. The features represented can be physical, such as roads, buildings and streams, or they can be conceptual, such as neighborhood boundaries, property lines and the locations of crimes.

The GIS Data Catalog is an on-line compilation of information on GIS layers used by the City. The catalog allows you to quickly locate GIS data by searching by keyword. You can also view metadata for each data layer in the catalog. All data in the catalog represent the best and most current GIS data maintained or used by the city. The city’s GIS metadata is maintained in conformance with a standard defined by the Federal Geographic Data Committee (FGDC).

The data layers themselves are not available for download from the catalog. Data can be requested by contacting the originating department or agency. More specific contact information is available within the metadata for many data layers. (emphasis added)

I am sure most agencies would supply the data on request, but why require the request?

To add a request processing position to the agency payroll and to have procedures for processing requests, along with meetings on request granting, plus an appeals process if the request is rejected, with record keeping for all of the foregoing plus more?

That doesn’t sound like transparent government or effective use of tax dollars to me.

February 11, 2012

GeoMapApp

Filed under: Geo Analytics,Geographic Data,Geographic Information Retrieval — Patrick Durusau @ 7:53 pm

GeoMapApp

From the webpage:

GeoMapApp is an earth science exploration and visualization application that is continually being expanded as part of the Marine Geoscience Data System (MGDS) at the Lamont-Doherty Earth Observatory of Columbia University. The application provides direct access to the Global Multi-Resolution Topography (GMRT) compilation that hosts high resolution (~100 m node spacing) bathymetry from multibeam data for ocean areas and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) and NED (National Elevation Dataset) topography datasets for the global land masses.

See YouTube: GeoMapApp (21 video tutorials)

More data for your merging pleasure. Not to mention a resource on how others prefer to understand/view their data.

2011 Research Tools (Geo Data)

Filed under: Geo Analytics,Geographic Data,Python — Patrick Durusau @ 7:52 pm

2011 Research Tools

A very good course/refresher on handling data sets for the Earth sciences. There are videos, podcasts, notes, etc.

From the description:

For the UNH Earth Science (ESCI) 895-03 class, I created extra videos. This is a part of the UNH Center for Coastal and Ocean Mapping (CCOM) / Joint Hydrographic Center (JHC). The class web page is: http://vislab-ccom.unh.edu/~schwehr/Classes/2011/esci895-researchtools/

January 5, 2012

Baltimore gun offenders and where academics don’t live

Filed under: Data Analysis,Geographic Data,Statistics — Patrick Durusau @ 4:06 pm

Baltimore gun offenders and where academics don’t live

An interesting plot of the residential addresses (not crime locations) of gun offenders. You need to see the post to observe how starkly the “island” of academics stands out on the map.

Illustration of non-causation, unless you want to contend that the presence of academics in a neighborhood drives out gun offenders. Which would argue in favor of more employment and wider residential patterns for academics. I would favor that but suspect that is personal bias.

A cross between this map and a map of gun offenses would be a good guide for housing prospects in Baltimore.

What other data would be useful for such a map? Education, libraries, fire protection, other crime rates…. Easy enough, since there are geographic boundaries as the binding points, but “summing up” information as you zoom out might be interesting.

That is to say, crime statistics are kept on a police district basis and, as you zoom out, you want information from multiple districts merged and re-sorted. Or you have overlapping districts for water, electricity, police, fire, etc. A geographic grid becomes your starting place, but only a starting place.
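To make the “summing up” concrete, a toy sketch with invented district names and numbers:

# An illustrative sketch of "summing up" district-level statistics as
# you zoom out. District names and counts are invented for
# illustration; real data would come from the city's crime feeds.
from collections import defaultdict

gun_offenses = {"Northern": 120, "Northeastern": 210,
                "Southern": 180, "Southeastern": 95}
# Coarser zoom level: map each police district to a city region.
region_of = {"Northern": "North", "Northeastern": "North",
             "Southern": "South", "Southeastern": "South"}

def roll_up(per_district, mapping):
    """Merge district counts into the coarser units they belong to."""
    totals = defaultdict(int)
    for district, count in per_district.items():
        totals[mapping[district]] += count
    return dict(totals)

print(roll_up(gun_offenses, region_of))  # {'North': 330, 'South': 275}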

December 22, 2011

Opening Up the Domesday Book

Filed under: Census Data,Dataset,Domesday Book,Geographic Data — Patrick Durusau @ 7:38 pm

Opening Up the Domesday Book by Sam Leon.

From the post:

Domesday Book might be one of the most famous government datasets ever created. Which makes it all the stranger that it’s not freely available online – at the National Archives, you have to pay £2 per page to download copies of the text.

Domesday is pretty much unique. It records the ownership of almost every acre of land in England in 1066 and 1086 – a feat not repeated in modern times. It records almost every household. It records the industrial resources of an entire nation, from castles to mills to oxen.

As an event, held in the traumatic aftermath of the Norman conquest, the Domesday inquest scarred itself deeply into the mindset of the nation – and one historian wrote that on his deathbed, William the Conqueror regretted the violence required to complete it. As a historical dataset, it is invaluable and fascinating.

In my spare time, I’ve been working on making Domesday Book available online at Open Domesday. In this, I’ve been greatly aided by the distinguished Domesday scholar Professor John Palmer, and his geocoded dataset of settlements and people in Domesday, created with AHRC funding in the 1990s.

I guess it really is all a matter of perspective. I have never thought of the Domesday Book as a “government dataset….” 😉

Certainly would make an interesting basis for a chronological topic map tracing the ownership and fate of “…almost every acre of land in England….”

December 4, 2011

All the software a geoscientist needs. For free!

Filed under: Geo Analytics,Geographic Data,Geographic Information Retrieval — Patrick Durusau @ 8:17 pm

All the software a geoscientist needs. For free! by John A. Stevenson.

It is quite an impressive list and what’s more, John has provided a script to install it on a Linux machine.

If you have any mapping or geoscience type needs, you would do well to consider some of the software listed here.

A handy set of tools if you are working with geoscience types on topic map applications as well.

November 25, 2011

GeoIQ API Overview

Filed under: Geo Analytics,Geographic Data,Geographic Information Retrieval — Patrick Durusau @ 4:29 pm

GeoIQ API Overview

From the webpage:

GeoIQ is the engine that powers the GeoCommons Community. GeoIQ includes a full Application Programming Interface (API) that allows developers to build unique and powerful domain specific applications. The API provides capability for uploading and downloading data, searching for data and maps, building, embedding, and theming maps or charts, as well as general user, group, and permissions management.

The GeoIQ API consists of a REST API and a JavaScript API. REST means that it uses simple URLs and HTTP methods to perform all of the actions. For example, a dataset is a specific endpoint that a user can create, read, update or delete (CRUD).

Another resource for topic mappers who want to link information to “real” locations. 😉
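To make the CRUD point concrete, a hedged sketch in Python; the URL layout and auth scheme are assumptions for illustration, so treat the GeoIQ API documentation as the authority:

# A hedged sketch of the REST pattern described above: a dataset is
# an endpoint you create, read, update, or delete with plain HTTP
# methods. URL layout and auth are assumptions for illustration.
import requests

BASE = "http://geocommons.com/datasets"    # hypothetical URL layout
AUTH = ("your_username", "your_password")  # placeholder credentials

def read_dataset(dataset_id):
    """Read (the R in CRUD) a single dataset's metadata."""
    resp = requests.get(f"{BASE}/{dataset_id}.json", auth=AUTH)
    resp.raise_for_status()
    return resp.json()

def delete_dataset(dataset_id):
    """Delete (the D in CRUD) a dataset you own."""
    return requests.delete(f"{BASE}/{dataset_id}", auth=AUTH).status_code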

November 24, 2011

Leaflet & GeoCommons JSON

Filed under: Geographic Data,Geographic Information Retrieval — Patrick Durusau @ 3:50 pm

Leaflet & GeoCommons JSON by Tim Waters.

From the post:

Hi, in this quick tutorial we will have a look at a new JavaScript mapping library, Leaflet using it to help load JSON features from a GeoCommons dataset. We will add our Acetate tile layer to the map, and use the cool API feature filtering functionalities to get just the features we want from the server, show them on a Leaflet map, add popups to the features, style the features according to what the feature is, and add some further interactivity. This blog follows up from two posts on my personal blog, showing GeoCommons features with OpenLayers and with Polymaps.

We have all read about tweets being used to plot reports or locations from and about the various “occupy” movements. I suspect that effective civil unrest is going to require greater planning for the distribution of support and resources in particular locales. Conveniently, current authorities have created, or allowed to be created, maps and other resources that can be used for such purposes. This is one of those resources.

I don’t know of any research on such algorithms, but occupiers might want to search for clusters of dense and confusing paths in urban areas. Those proved effective at times in medieval struggles for control of walled cities. Once the walls were breached, would-be occupiers were confronted with warrens of narrow and confusing paths, as opposed to broad, open pathways that would enable a concentration of forces.

Is there an algorithm for longest, densest path?

However discovered, annotating a cluster of dense and confusing paths with tactical information and location of resources would be a natural use of topic maps. Or what to anticipate in such areas, if one is on the “other” side.

The Lazy Developer’s Guide to Loading Datasets into GeoCommons

Filed under: Geographic Data — Patrick Durusau @ 3:47 pm

The Lazy Developer’s Guide to Loading Datasets into GeoCommons

From the post:

Loading KML Files

So let’s say you have a bunch of KML files you want to load into GeoCommons. Of course, it’s fairly easy to load these through the web UI, but if you need to do this often enough, it would be nice to have a program to do it for you – after all, as Larry Wall said, laziness is one of the three virtues of great programmers.

Frankly, it’s not exactly obvious from our API documentation what the best way to do this is. And if you aren’t familiar with Curl, the examples are probably not going to help you much, so I’ll be doing this code in Java. Of course, we here at GeoIQ are Ruby programmers, and thus have a natural disdain for anything to do with Java, so I’m probably losing serious Ruby street cred just posting this, but anything for the good of the cause. We will be using the occasionally obtuse GeoCommons REST API, but I’ll try to steer you around some of the not so obvious pitfalls.

The ability to load datasets into GeoCommons is one that may come in handy.
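The post does its work in Java; here is the same idea as a hedged Python sketch. The endpoint URL and form field name are assumptions, so the API documentation is the authority on both:

# Push a KML file to a GeoCommons-style REST endpoint as a multipart
# upload. The endpoint URL and form field name are assumptions.
import requests

def upload_kml(path, auth):
    """Upload one KML file; returns the HTTP response."""
    with open(path, "rb") as f:
        resp = requests.post("http://geocommons.com/datasets",  # hypothetical
                             files={"dataset[file]": f},        # hypothetical field
                             auth=auth)
    resp.raise_for_status()
    return resp

upload_kml("restaurants.kml", ("user", "password"))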

ASTER Global Digital Elevation Model (ASTER GDEM)

Filed under: Geographic Data,Geographic Information Retrieval,Mapping — Patrick Durusau @ 3:42 pm

ASTER Global Digital Elevation Model (ASTER GDEM)

From the webpage:

ASTER GDEM is an easy-to-use, highly accurate DEM covering all the land on earth, and available to all users regardless of size or location of their target areas.

Anyone can easily use the ASTER GDEM to display a bird’s-eye-view map or run a flight simulation, which should make visually sophisticated maps a reality. By utilizing the ASTER GDEM as a platform, institutions specialized in disaster monitoring, hydrology, energy, environmental monitoring etc. can perform more advanced analysis.

In addition to the data, there is a GDEM viewer (freeware) at this site.

All that is missing is your topic map and you.

November 6, 2011

piecemeal geodata

Filed under: Geographic Data,Geographic Information Retrieval,Maps,Visualization — Patrick Durusau @ 5:43 pm

piecemeal geodata

Michal Migurski on the difficulties of using OpenStreetMap data:

Two weeks ago, I attended the 5th annual OpenStreetMap conference in Denver, State of the Map. My second talk was called Piecemeal Geodata, and I hoped to communicate some of the pain (and opportunity) in dealing with OpenStreetMap data as a consumer of the information, downstream from the mappers but hoping to make maps or work with the dataset. Harry Wood took notes that suggested I didn’t entirely miss the mark, but after I was done Tom MacWright congratulated me on my “excellent stealth rage talk”. It wasn’t really supposed to be ragey as such, so here are some of my slides and notes along with some followup to the problems I talked about.

Topic maps are in use in a number of commercial and governmental venues but aren’t the sort of thing you hear about like Twitter or Blackberries (mostly about outages).

Anticipating more civil disturbances over the next several years, do topic maps have something to offer when coupled with a technology like Google Maps or OSM?

It is one thing to indicate your location using an app, but can you report movement of forces in a way that updates the maps of some colleagues? In a secure manner?

What features would a topic map need for such an environment?

high road, for better OSM cartography

Filed under: Geographic Data,Geographic Information Retrieval,Maps,Visualization — Patrick Durusau @ 5:43 pm

high road, for better OSM cartography

From the post:

High Road is a framework for normalizing the rendering of highways from OSM data, a critical piece of every OSM-based road map we’ve ever designed at Stamen. Deciding exactly which kinds of roads appear at each zoom level can really be done just once, and ideally shouldn’t be part of a lengthy database query in your stylesheet. In Cascadenik and regular Mapnik’s XML-based layer definitions, long queries balloon the size of a style until it’s impossible to scan quickly. In Carto’s JSON-based layer definitions the multiline-formatting of a complex query is completely out of the question. Further, each system has its own preferred way of helping you handle road casings.

Effective rendering of geographic maps (and the data you attach to them) is likely to be useful in a number of topic map contexts.
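The “decide once per zoom level” idea is easy to illustrate outside of any particular stylesheet language. A toy sketch, with invented zoom thresholds:

# An illustration of the idea behind High Road: decide once which OSM
# highway classes appear at each zoom level, rather than repeating a
# long query in every stylesheet. Zoom thresholds here are invented.
ZOOM_CLASSES = {
    8:  {"motorway", "trunk"},
    10: {"motorway", "trunk", "primary", "secondary"},
    13: {"motorway", "trunk", "primary", "secondary",
         "tertiary", "residential"},
}

def classes_for_zoom(zoom):
    """Return the set of road classes to render at a zoom level."""
    for threshold in sorted(ZOOM_CLASSES, reverse=True):
        if zoom >= threshold:
            return ZOOM_CLASSES[threshold]
    return set()

print(classes_for_zoom(11))  # motorway through secondary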

PS: OSM = OpenStreetMap.

October 28, 2011

Factual Resolve

Factual Resolve

Factual has a new API – Resolve:

From the post:

The Internet is awash with data. Where ten years ago developers had difficulty finding data to power applications, today’s difficulty lies in making sense of its abundance, identifying signal amidst the noise, and understanding its contextual relevance. To address these problems Factual is today launching Resolve — an entity resolution API that makes partial records complete, matches one entity against another, and assists in de-duping and normalizing datasets.

The idea behind Resolve is very straightforward: you tell us what you know about an entity, and we, in turn, tell you everything we know about it. Because data is so commonly fractured and heterogeneous, we accept fragments of an entity and return the matching entity in its entirety. Resolve allows you to do a number of things that will make your data engineering tasks easier:

  • enrich records by populating missing attributes, including category, lat/long, and address
  • de-dupe your own place database
  • convert multiple daily deal and coupon feeds into a single normalized, georeferenced feed
  • identify entities unequivocally by their attributes

For example: you may be integrating data from an app that provides only the name of a place and an imprecise location. Pass what you know to Factual Resolve via a GET request, with the attributes included as JSON-encoded key/value pairs:
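The post’s code sample does not survive here, so the following is a hedged reconstruction from the description above; the endpoint URL, parameter names, and API-key scheme are assumptions:

# A hedged reconstruction of the GET request described above: the
# known fragments of an entity go in as JSON-encoded key/value pairs.
# Endpoint URL, parameter names, and API-key scheme are assumptions.
import json
import requests

fragments = {"name": "Buena Vista Cigar Club",
             "latitude": 34.06, "longitude": -118.40}
resp = requests.get("http://api.v3.factual.com/places/resolve",  # assumed URL
                    params={"values": json.dumps(fragments),
                            "KEY": "your_api_key"})              # placeholder
print(resp.json())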

I particularly like the line:

identify entities unequivocally by their attributes

I don’t know about the “unequivocally” part but the rest of it rings true. At least in my experience.

Radical Cartography

Filed under: Cartography,Geographic Data,Maps,Visualization — Patrick Durusau @ 3:12 pm

Radical Cartography

You have to choose categories from the left-hand menu to see any content.

A wide variety of content, some of which may be familiar, some of which may not be.

I was particularly amused by the “Center of the World” map. Look for New York and you will find it.

To me it explains why 9/11 retains currency while the poisoning of a large area in Japan with radiation has slipped from view, at least in the United States. (To pick only one event that merits more informed coverage and attention that it has gotten in the United States.)

October 22, 2011

A history of the world in 100 seconds

Filed under: Data Mining,Geographic Data,Visualization — Patrick Durusau @ 3:17 pm

A history of the world in 100 seconds by Gareth Lloyd.

From the post:

Many Wikipedia articles are tagged with geographic coordinates. Many have references to historic events. Cross referencing these two subsets and plotting them year on year adds up to a dynamic visualization of Wikipedia’s view of world history.

The ‘spotlight’ is an overlay on the video that tries to keep about 90% of the datapoints within the bright area. It takes a moving average of all the latitudes and longitudes over the past 50 or so years and centres on the mean coordinate. I love the way it opens up, first resembling medieval maps of “The World” which included only Europe and some of Asia, then encompassing “The New World” and finally resembling a modern map.

This is based on the thing that me and Tom Martin built at Matt Patterson’s History Hackday. To make it, I built a python SAX Parser that sliced and diced an xml dump of all wikipedia articles (30Gb) and pulled out 424,000 articles with coordinates and 35,000 references to events. We managed to pair up 14,238 events with locations, and Tom wrote some Java to fiddle about with the coordinates and output frames. I’ve hacked around some more to add animation, because, you know, why not?
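For a sense of what that kind of SAX pass looks like, here is a minimal sketch. It illustrates the approach, not Gareth’s actual code, which he links from the post:

# Stream a MediaWiki XML dump and print the titles of pages whose
# text contains a {{coord|...}} template. A minimal illustration of
# the SAX approach described above.
import re
import xml.sax

COORD = re.compile(r"\{\{coord\|", re.IGNORECASE)

class CoordHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.tag, self.title = "", ""
        self.title_buf, self.text_buf = [], []

    def startElement(self, name, attrs):
        self.tag = name
        if name == "title":
            self.title_buf = []
        elif name == "text":
            self.text_buf = []

    def characters(self, content):
        if self.tag == "title":
            self.title_buf.append(content)
        elif self.tag == "text":
            self.text_buf.append(content)

    def endElement(self, name):
        if name == "title":
            self.title = "".join(self.title_buf)
        elif name == "text" and COORD.search("".join(self.text_buf)):
            print(self.title)
        self.tag = ""

xml.sax.parse("enwiki-pages-articles.xml", CoordHandler())  # the 30 GB dump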

I wanted to point this post out separately for several reasons.

First, it is a good example of re-using existing data in a new and/or interesting way, which saves you the time of collecting the original data yourself.

Second, Gareth provides both the source code and data so you can verify his results for yourself or decide that some other visualization suits your fancy.

Third, you should read some of the comments about this work. That sort of thing is going to occur no matter what resource or visualization you make available. If you had a super-Wiki with 10 million articles in the top ten languages of the world, some wag would complain that X language wasn’t represented. Not that they would contribute to making it available, but they have the time to complain that you didn’t.

October 21, 2011

Towards georeferencing archival collections

Towards georeferencing archival collections

From the post:

One of the most effective ways to associate objects in archival collections with related objects is with controlled access terms: personal, corporate, and family names; places; subjects. These associations are meaningless if chosen arbitrarily. With respect to machine processing, Thomas Jefferson and Jefferson, Thomas are not seen as the same individual when judging by the textual string alone. While EADitor has incorporated authorized headings from LCSH and local vocabulary (scraped from terms found in EAD files currently in the eXist database) almost since its inception, it has not until recently interacted with other controlled vocabulary services. Interacting with EAC-CPF and geographical services is high on the development priority list.

geonames.org

Over the last week, I have been working on incorporating geonames.org queries into the XForms application. Geonames provides stable URIs for more than 7.5 million place names internationally. XML representations of each place are accessible through various REST APIs. These XML datastreams also include the latitude and longitude, which will make it possible to georeference archival collections as a whole or individual items within collections (an item-level indexing strategy will be offered in EADitor as an alternative to traditional, collection-based indexing soon).

This looks very interesting.
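Querying GeoNames from Python is straightforward. A quick sketch against the public search API (register your own username; “demo” is rate-limited):

# Look up candidate places and their coordinates via the GeoNames
# search API. Substitute your own registered username for "demo".
import requests

def lookup_place(name, max_rows=5, username="demo"):
    resp = requests.get("http://api.geonames.org/searchJSON",
                        params={"q": name, "maxRows": max_rows,
                                "username": username})
    resp.raise_for_status()
    return [(g["name"], g["lat"], g["lng"])
            for g in resp.json().get("geonames", [])]

print(lookup_place("Monticello, Virginia"))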

Details:

EADitor project site (Google Code): http://code.google.com/p/eaditor/
Installation instructions (specific for Ubuntu but broadly applies to all Unix-based systems): http://code.google.com/p/eaditor/wiki/UbuntuInstallation
Google Group: http://groups.google.com/group/eaditor

October 19, 2011

First experiences with GeoCouch

Filed under: Geographic Data,Geographic Information Retrieval,Humor — Patrick Durusau @ 3:15 pm

First experiences with GeoCouch by tbuchwaldt.

From the post:

To learn some new stuff about cool databases and geo-aware services we started fiddling with GeoCouch, a CouchDB extension. To have a real scenario we could work on, we designed a small project: A CouchDB database contains documents with descriptions of fastfood restaurants. We agreed on 3 types of restaurants: KFC, McDonald’s & Burger King. We gave them some additional information, namely opening and closing times and a boolean called “supersize”.

It sounds to me like this sort of service, coupled with a topic map of campus locations/services, could prove to be very amusing during “rush” week when directions and locations are not well known.
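A hedged sketch of a bounding-box query against such a setup. Database, design document, and view names are invented for illustration; the GeoCouch README documents the exact URL layout:

# Ask a GeoCouch spatial view for every fastfood document inside a
# bounding box. Names in the URL are invented for illustration.
import requests

bbox = "8.0,52.0,9.0,53.0"  # min_lon, min_lat, max_lon, max_lat
url = "http://localhost:5984/restaurants/_design/geo/_spatial/points"
rows = requests.get(url, params={"bbox": bbox}).json().get("rows", [])
for row in rows:
    doc = row.get("value", {})
    print(doc.get("type"), doc.get("supersize"))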

October 18, 2011

Geological Survey Austria launches thesaurus project

Filed under: Geographic Data,Geographic Information Retrieval,Maps,Thesaurus — Patrick Durusau @ 2:41 pm

Geological Survey Austria launches thesaurus project by Helmut Nagy.

From the post:

Throughout the last year the Semantic Web Company team has supported the Geological Survey of Austria (GBA) in setting up their thesaurus project. It started with a workshop in summer 2010 where we discussed use cases for using semantic web technologies as means to fulfill the INSPIRE directive. Now in fall 2011 GBA published their first thesauri as Linked Data using PoolParty’s new Linked Data front-end.

The Thesaurus Project of the GBA aims to create controlled vocabularies for the semantic harmonization of map-based geodata. The content-related realization of this project is governed by the Thesaurus Editorial Team, which consists of domain experts from the Geological Survey of Austria. With the development of semantically and technically interoperable geo-data the Geological Survey of Austria implements its legal obligation defined by the EU-Directive 2007/2/EC INSPIRE and the national “Geodateninfrastrukturgesetz” (GeoDIG), respectively.

I wonder if their “controlled vocabularies” are going to map to the terminology used over the history of Europe, in maps, art, accounts, histories, and other recorded materials?

If not, I wonder if there would be any support to tie that history into current efforts or do they plan on simply cutting off the historical record and starting with their new thesaurus?
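The kind of historical mapping I am asking about could be expressed in the thesaurus itself, using SKOS matching properties. A sketch with rdflib; the URIs and terms are invented for illustration:

# Tie a current controlled term to labels used in older materials via
# SKOS. URIs, labels, and the historical vocabulary are invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/geo-thesaurus/")
g = Graph()
g.bind("skos", SKOS)

current = EX["Limestone"]
g.add((current, SKOS.prefLabel, Literal("Kalkstein", lang="de")))
g.add((current, SKOS.prefLabel, Literal("limestone", lang="en")))
# Spelling found on 19th-century survey maps (invented example):
g.add((current, SKOS.altLabel, Literal("Kalckstein", lang="de")))
# Link to a concept in a hypothetical historical vocabulary:
g.add((current, SKOS.closeMatch,
       URIRef("http://example.org/historic/kalckstein")))

print(g.serialize(format="turtle"))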

October 1, 2011

Neo4j Spatial: Why Should You Care?

Filed under: Geographic Data,Neo4j,Spatial Index — Patrick Durusau @ 8:26 pm

Neo4j Spatial: Why Should You Care? by Peter Neubauer at SamGIS 2011.

A very nice slide deck from Peter Neubauer on Neo4j Spatial! Great images!

September 29, 2011

Indexed Nearest Neighbour Search in PostGIS

Filed under: Geographic Data,Geographic Information Retrieval,PostgreSQL — Patrick Durusau @ 6:36 pm

Indexed Nearest Neighbour Search in PostGIS

From the post:

An always popular question on the PostGIS users mailing list has been “how do I find the N nearest things to this point?”.

To date, the answer has generally been quite convoluted, since PostGIS supports bounding box index searches, and in order to get the N nearest things you need a box large enough to capture at least N things. Which means you need to know how big to make your search box, which is not possible in general.

PostgreSQL has the ability to return ordered information where an index exists, but the ability has been restricted to B-Tree indexes until recently. Thanks to one of our clients, we were able to directly fund PostgreSQL developers Oleg Bartunov and Teodor Sigaev in adding the ability to return sorted results from a GiST index. And since PostGIS indexes use GiST, that means that now we can also return sorted results from our indexes.

This feature (the PostGIS side of it) was funded by Vizzuality, and hopefully it comes in useful in their CartoDB work.

You will need PostgreSQL 9.1 and the PostGIS source code from the repository, but this is what a nearest neighbour search looks like:
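The code sample from the post does not survive here; in its place, a hedged reconstruction of a KNN query using the <-> distance operator that the GiST work enables, wrapped in Python. Table and column names are invented:

# Find the 10 nearest restaurants to a point, letting the GiST index
# return rows in distance order. Table and column names are invented.
import psycopg2

conn = psycopg2.connect("dbname=gisdb")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT name
        FROM   restaurants
        ORDER  BY geom <-> ST_SetSRID(ST_MakePoint(%s, %s), 4326)
        LIMIT  10;
    """, (-76.61, 39.29))
    for (name,) in cur.fetchall():
        print(name)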

PostgreSQL? Isn’t that SQL? 🙂

Indexed nearest neighbour search is a question of results, not ideology.

Better targeting through technology.

September 17, 2011

GRASS: Geographic Resources Analysis Support System

GRASS: Geographic Resources Analysis Support System

The post about satellite imagery analysis for Syria made me curious about tools for automated analysis of satellite images.

From the webpage:

Commonly referred to as GRASS, this is free Geographic Information System (GIS) software used for geospatial data management and analysis, image processing, graphics/maps production, spatial modeling, and visualization. GRASS is currently used in academic and commercial settings around the world, as well as by many governmental agencies and environmental consulting companies. GRASS is an official project of the Open Source Geospatial Foundation.

You may also want to visit the Open Dragon project.

From the Open Dragon site:

Availability of good software for teaching Remote Sensing and GIS has always been a problem. Commercial software, no matter how good a discount is offered, remains expensive for a developing country, cannot be distributed to students, and may not be appropriate for education. Home-grown and university-sourced software lacks long-term support and the needed usability and robustness engineering.

The OpenDragon Project was established in the Department of Computer Engineering of KMUTT in December of 2004. The primary objective of this project is to develop, enhance, and maintain a high-quality, commercial-grade software package for remote sensing and GIS analysis that can be distributed free to educational organizations within Thailand. This package, OpenDragon, is based on the Version 5 of the commercial Dragon/ips® software developed and marketed by Goldin-Rudahl Systems, Inc.

As of 2010, Goldin-Rudahl Systems has agreed that the Open Dragon software, based on Dragon version 5, will be open source for non-commercial use. The software source code should be available on this server by early 2011.

And there is always the commercial side, if you have funding: ArcGIS. Esri, the maker of ArcGIS, supports several open source GIS projects.

The results of using these or other software packages can be tied to other information using topic maps.

September 12, 2011

LinkedGeoData Release 2

LinkedGeoData Release 2

From the webpage:

The aim of the LinkedGeoData (LGD) project is to make the OpenStreetMap (OSM) datasets easily available as RDF. As such the main target audience is the Semantic Web community, however it may turn out to be useful to a much larger audience. Additionally, we are providing interlinking with DBpedia and GeoNames and integration of class labels from translatewiki and icons from the Brian Quinion Icon Collection.

The result is a rich, open, and integrated dataset which we hope to be useful for research and application development. The datasets can be publicly accessed via downloads, Linked Data, and SPARQL-endpoints. We have also launched an experimental “Live-SPARQL-endpoint” that is synchronized with the minutely updates from OSM whereas the changes to our store are republished as RDF.

More geographic data.
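A small sketch of pulling from the LGD SPARQL endpoint mentioned above; the endpoint URL and the lgdo:Peak class reflect the LGD documentation of the time, so treat both as assumptions:

# Query the LinkedGeoData SPARQL endpoint for a few labeled peaks.
# Endpoint URL and the lgdo:Peak class are assumptions to verify.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://linkedgeodata.org/sparql")
sparql.setQuery("""
    PREFIX lgdo: <http://linkedgeodata.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?peak ?label WHERE {
        ?peak a lgdo:Peak ; rdfs:label ?label .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["peak"]["value"], row["label"]["value"])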

September 9, 2011

Authoritative URIs for Geo locations? Multi-lingual labels?

Filed under: Geographic Data,Linked Data,RDF — Patrick Durusau @ 7:14 pm

Some Geo location and label links that came up on the pub-lod list:

Not a complete list, nor does it include historical references or designations used over the millennia. Still, you may find it useful.

August 1, 2011

Neo4j Spatial

Filed under: Geographic Data,Neo4j — Patrick Durusau @ 3:54 pm

Neo4j Spatial – GIS for the rest of us by Peter Neubauer.

Impressive demonstration of the power of Neo4j!

Watch the slide deck and then see: Neo4j Spatial for more details.

And then, Neo4j Spatial Blog Ideas.

Now couple the idea of merging other data sources, say traffic, fire, or other public reports, current or historical, onto a map. The potential to create, or possibly prevent, disruption of services seems unlimited.

July 13, 2011

GeoCommons Enterprise Features – Free!

Filed under: Geo Analytics,Geographic Data,Geographic Information Retrieval — Patrick Durusau @ 7:30 pm

GeoCommons Enterprise Features – Free!

From the email announcement:

  • Analytics: Easy-to-use, advanced spatial analytics that users and groups can utilize to answer mission-critical questions. Select among numerous analyses such as filtering, buffers, spatial aggregation and predictive analysis.
  • Private Data Support: Keep proprietary data private and unsearchable by others. Now you can upload proprietary data, analyze it with other data and create compelling maps, charts and graphs all within a secure interface.
  • Groups and Permissions: Allow others in your group or organization to access and collaborate with you. Enable permissions at various levels to limit or expand data sharing. See a step-by-step guide of how to create groups and make your data private here from @seangorman.

For groups and private data, see: Private Data and Groups for GeoCommons!!

GeoCommons has 70,000 datasets.

If you look around you might find something you like.

Topic mappers should ask themselves: Why does this work? (more on that anon)

June 22, 2011

Weave – Web-based Analysis and Visualization Environment

Filed under: Analytics,Geographic Data,Visualization — Patrick Durusau @ 6:40 pm

Weave – Web-based Analysis and Visualization Environment

From the webpage:

Weave (BETA 1.0) is a new web-based visualization platform designed to enable visualization of any available data by anyone for any purpose. Weave is an application development platform supporting multiple levels of user proficiency – novice to advanced – as well as the ability to integrate, disseminate and visualize data at “nested” levels of geography.

Weave has been developed at the Institute for Visualization and Perception Research of the University of Massachusetts Lowell in partnership with the Open Indicators Consortium, a nine member national collaborative of public and nonprofit organizations working to improve access to more and higher quality data.

The installation videos are something to point at if you have users doing their own installations of MySQL, Java, Tomcat, or Flash for any reason.

I would quibble with the installation of Tomcat using “root” and “password” as the username and password for the admin page. Good security is hard enough to teach without really bad examples of security practices in tutorial materials.

The visualization capabilities look quite nice.

Originally saw this in a tweet from Lutz Maicher.
