Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

December 26, 2017

Geocomputation with R – Open Book in Progress – Contribute

Filed under: Geographic Data,Geography,Geospatial Data,R — Patrick Durusau @ 8:57 pm

Geocomputation with R by Robin Lovelace, Jakub Nowosad, Jannes Muenchow.

Welcome to the online home of Geocomputation with R, a forthcoming book with CRC Press.

Development

Inspired by bookdown and other open source projects, we are developing this book in the open. Why? To encourage contributions, ensure reproducibility and provide access to the material as it evolves.

The book’s development can be divided into four main phases:

  1. Foundations
  2. Basic applications
  3. Geocomputation methods
  4. Advanced applications

Currently the focus is on Part 2, which we aim to have complete by December. New chapters will be added to this website as the project progresses, hosted at geocompr.robinlovelace.net and kept up-to-date thanks to Travis….

Speaking of R and geocomputation, I’ve been trying to remember to post about Geocomputation with R since I encountered it a week or more ago. An open, in-progress book is not what I expect from CRC Press. That got my attention right away!

Part II, Basic Applications, has two chapters: 7 (Location analysis) and 8 (Transport applications).

Layering data from different sources should be included under Basic Applications. For example, relying on (but not displaying) topographic data to calculate line of sight between positions. The base display might be a high-resolution image overlaid with GPS coordinates at intervals, with lines of sight colored on the relevant structures.
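A minimal sketch of that kind of layering, assuming a hypothetical DEM GeoTIFF and a CSV of GPS fixes; rasterio and geopandas handle the layers, and the line-of-sight computation itself is only indicated in a comment:

```python
import geopandas as gpd
import matplotlib.pyplot as plt
import pandas as pd
import rasterio
from rasterio.plot import show

# Hypothetical inputs: a digital elevation model and a table of GPS fixes.
dem = rasterio.open("site_dem.tif")            # topographic layer (used, not necessarily shown)
fixes = pd.read_csv("gps_fixes.csv")           # columns: lon, lat, label

points = gpd.GeoDataFrame(
    fixes,
    geometry=gpd.points_from_xy(fixes.lon, fixes.lat),
    crs="EPSG:4326",
).to_crs(dem.crs)                              # match the raster's projection

fig, ax = plt.subplots(figsize=(8, 8))
show(dem, ax=ax, cmap="gray")                  # base layer: imagery or shaded relief
points.plot(ax=ax, color="red", markersize=12) # overlay: GPS coordinates at intervals
# A line-of-sight layer would go here, drawn between point pairs and colored by
# visibility computed from the DEM values along each line.
plt.savefig("layered_view.png", dpi=150)
```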

Other “basic applications” you would suggest?

Looking forward to progress on this volume!

All targets have spatial-temporal locations.

Filed under: Geographic Data,Geography,Geophysical,Geospatial Data,R,Spatial Data — Patrick Durusau @ 5:29 pm

r-spatial

From the about page:

r-spatial.org is a website and blog for those interested in using R to analyse spatial or spatio-temporal data.

Posts in the last six months to whet your appetite for this blog:

A government’s budget for spatial-temporal software is no indicator of its skill with spatial and spatial-temporal data.

How are yours?

October 26, 2017

2nd International Electronic Conference on Remote Sensing – March 22 – April 5, 2018

2nd International Electronic Conference on Remote Sensing

From the webpage:

We are very pleased to announce that the 2nd International Electronic Conference on Remote Sensing (ECRS-2) will be held online, between 22 March and 5 April 2018.

Today, remote sensing is already recognised as an important tool for monitoring our planet and assessing the state of our environment. By providing a wealth of information that is used to make sound decisions on key issues for humanity such as climate change, natural resource monitoring and disaster management, it changes our world and affects the way we think.

Nevertheless, it is very inspirational that we continue to witness a constant growth of amazing new applications, products and services in different fields (e.g. archaeology, agriculture, forestry, environment, climate change, natural and anthropogenic hazards, weather, geology, biodiversity, coasts and oceans, topographic mapping, national security, humanitarian aid) which are based on the use of satellite and other remote sensing data. This growth can be attributed to the following: large number (larger than ever before) of available platforms for data acquisition, new sensors with improved characteristics, progress in computer technology (hardware, software), advanced data analysis techniques, and access to huge volumes of free and commercial remote sensing data and related products.

Following the success of the 1st International Electronic Conference on Remote Sensing (http://sciforum.net/conference/ecrs-1), ECRS-2 aims to cover all recent advances and developments related to this exciting and rapidly changing field, including innovative applications and uses.

We are confident that participants of this unique multidisciplinary event will have the opportunity to get involved in discussions on theoretical and applied aspects of remote sensing that will contribute to shaping the future of this discipline.

ECRS-2 (http://sciforum.net/conference/ecrs-2) is hosted on sciforum, the platform developed by MDPI for organising electronic conferences and discussion groups, and is supported by Section Chairs and a Scientific Committee comprised of highly reputable experts from academia.

It should be noted that there is no cost for active participation and attendance of this virtual conference. Experts from different parts of the world are encouraged to submit their work and take the exceptional opportunity to present it to the remote sensing community.

I have a less generous view of remote sensing, seeing it used to further exploit and degrade the environment, to manipulate regulatory processes, and to generally disadvantage those not skilled in its use.

Being aware of the latest developments in remote sensing is a first step towards developing your ability to question, defend and even use remote sensing data for your own ends.

ECRS-2 (http://sciforum.net/conference/ecrs-2) is a great opportunity to educate yourself about remote sensing. Enjoy!

While electronic conferences lack the social immediacy of physical gatherings, one wonders why more data technology communities aren’t holding electronic conferences. Thoughts?

September 28, 2017

Global Forest Change 2000–2015

Filed under: Geographic Data,Geography — Patrick Durusau @ 10:53 am

Global Forest Change 2000–2015

From the webpage:

Results from time-series analysis of Landsat images in characterizing global forest extent and change from 2000 through 2015. For additional information about these results, please see the associated journal article (Hansen et al., Science 2013).

Web-based visualizations of these results are also available at our main site:

http://earthenginepartners.appspot.com/science-2013-global-forest

Please use that URL when linking to this dataset.

We anticipate releasing updated versions of this dataset. To keep up to date with the latest updates, and to help us better understand how these data are used, please register as a user. Thanks!

User Notes for Version 1.3 Update

Some examples of improved change detection in the 2011–2015 update include the following:

  1. Improved detection of boreal forest loss due to fire.
  2. Improved detection of smallholder rotation agricultural clearing in dry and humid tropical forests.
  3. Improved detection of selective logging.
  4. Improved detection of the clearing of short cycle plantations in sub-tropical and tropical ecozones.

Detecting deforestation is the first step in walking up the chain of responsibility for this global scourge, one hopes with consequences at every level.

USGS Global Land Cover Characteristics Data Base Version 2.0

Filed under: Geographic Data,Geography — Patrick Durusau @ 10:14 am

Global Land Cover Characteristics Data Base Version 2.0

From the introduction:

The U.S. Geological Survey’s (USGS) National Center for Earth Resources Observation and Science (EROS), the University of Nebraska-Lincoln (UNL) and the Joint Research Centre of the European Commission have generated a 1-km resolution global land cover characteristics data base for use in a wide range of environmental research and modeling applications (Loveland and others, 2000). The land cover characterization effort is part of the National Aeronautics and Space Administration (NASA) Earth Observing System Pathfinder Program and the International Geosphere-Biosphere Programme-Data and Information System focus 1 activity. Funding for the project is provided by the USGS, NASA, U.S. Environmental Protection Agency, National Oceanic and Atmospheric Administration, U.S. Forest Service, and the United Nations Environment Programme.

The data set is derived from 1-km Advanced Very High Resolution Radiometer (AVHRR) data spanning a 12-month period (April 1992-March 1993) and is based on a flexible data base structure and seasonal land cover regions concepts. Seasonal land cover regions provide a framework for presenting the temporal and spatial patterns of vegetation in the database. The regions are composed of relatively homogeneous land cover associations (for example, similar floristic and physiognomic characteristics) which exhibit distinctive phenology (that is, onset, peak, and seasonal duration of greenness), and have common levels of primary production.

Rather than being based on precisely defined mapping units in a predefined land cover classification scheme, the seasonal land cover regions serve as summary units for both descriptive and quantitative attributes. The attributes may be considered as spreadsheets of region characteristics and permit updating, calculating, or transforming the entries into new parameters or classes. This provides the flexibility for using the land cover characteristics data base in a variety of models without extensive modification of model inputs.

The analytical strategy for global land cover characterization has evolved from methods initially tested during the development of a prototype 1-km land cover characteristics data base for the conterminous United States (Loveland and others, 1991, 1995; Brown and others, 1993). In the U.S. study, multitemporal AVHRR data, combined with other ancillary data sets, were used to produce a prototype land cover characteristics data base.

An older data set (April 1992-March 1993) at 1-km resolution, but still useful for training, as historical data, and for other planning uses you can imagine.

Enjoy!

FAO GeoNETWORK

Filed under: Geographic Data,Geography — Patrick Durusau @ 10:05 am

FAO GeoNETWORK

From the about page:

The FAO GeoNetwork provides Internet access to interactive maps, satellite imagery and related spatial databases maintained by FAO and its partners.

Its purpose is to improve access to and integrated use of spatial data and information.

Through this website FAO facilitates multidisciplinary approaches to sustainable development and supports decision making in agriculture, forestry, fisheries and food security.

Maps, including those derived from satellite imagery, are effective communication tools and play an important role in the work of various types of users:

  • Decision Makers: e.g. sustainable development planners and humanitarian and emergency managers in need of quick, reliable and up-to-date user-friendly cartographic products as a basis for action and to better plan and monitor their activities.
  • GIS Experts in need of exchanging consistent and updated geographical data.
  • Spatial Analysts in need of multidisciplinary data to perform preliminary geographical analysis and reliable forecasts to better set up appropriate interventions in vulnerable areas.

The FAO GeoNetwork makes it easy to share spatial data among different FAO Units, other UN Agencies, NGOs and other institutions.

The FAO GeoNetwork site is powered by GeoNetwork opensource.

FAO and WFP, UNEP and more recently OCHA, have combined their research and mapping expertise to develop GeoNetwork opensource as a common strategy to effectively share their spatial databases including digital maps, satellite images and related statistics. The three agencies make extensive use of computer-based data visualization tools, known as Geographic Information System (GIS) and Remote Sensing (RS) software, mostly to create maps that combine various layers of information. GeoNetwork opensource provides them with the capacity to access a wide selection of maps and other spatial information stored in different databases around the world through a single entry point.

GeoNetwork opensource has been developed to connect spatial information communities and their data using a modern architecture, which is at the same time powerful and low cost, based on the principles of Free and Open Source Software (FOSS) and International and Open Standards for services and protocols (a.o. from ISO/TC211 and OGC).

For more information contact us at GeoNetwork@fao.org.

Apologies for the acronym-heavy writing. Hard to say if it is meant as shorthand, as in scientific writing, or to make ordinary writing opaque.

FAO – Food and Agriculture Organization of the United Nations

OCHA – United Nations Office for the Coordination of Humanitarian Affairs

OGC – Open Geospatial Consortium

UNEP – UN Environment

WFP – World Food Programme

Extremely rich collection of resources, not to mention open source software for its use.

A site to bookmark in hopes your dreams of regime change evolve beyond spray paint and random acts of violence.

The CIA advises on such matters but their loyalty and motivations are highly suspect. Not to mention being subject to the whim and caprice of American politics.

Trust is OK, but independent analysis and verification are much better.

September 26, 2017

Global Land Survey (GLS) [Weaponizing Data]

Filed under: Geographic Data,Geography,Maps — Patrick Durusau @ 4:33 pm

Global Land Survey (GLS) is part of a collection I discovered at: 12 Sources to Download FREE Land Cover and Land Use Data. To use that collection you have to wade through pages of ads.

I am covering the sources separately and including their original descriptions.

From the GLS webpage:

The U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA) collaborated from 2009 to 2011 to create the Global Land Surveys (GLS) datasets. Each of these collections was created using the primary Landsat sensor in use at the time for each collection epoch. The scenes used were in a pre-collection format that met strict quality and cloud cover standards at the time the GLS files were created.

Additional details about the Global Land Survey collection can be found at http://landsat.usgs.gov/global-land-surveys-gls.

The Global Land Survey collection consists of images acquired from 1972 to 2012 combined into one dataset.

All Global Land Survey datasets contain the standard Landsat bands designated for each sensor. Band Designations can be found at http://landsat.usgs.gov/what-are-band-designations-landsat-satellites.

[data notes]

Global Land Survey data are available to search and download through EarthExplorer and GloVis. The collection can be found under the Global Land Survey category in EarthExplorer.

Users can download the full resolution LandsatLook jpg images http://landsat.usgs.gov/landsatlook-images, and the Level 1 Data Products http://landsat.usgs.gov/landsat-data-access.

Fifteen meter resolution in the panchromatic band. Nearly as accurate as someone stepping across a compound to establish target coordinates.

Which do you find more amazing: 1) free access to data to weaponize, or 2) the lack of use of data as a weapon by NGOs?

September 15, 2017

Landsat Viewer

Filed under: Geographic Data,Geophysical,Geospatial Data,Image Processing,Mapping,Maps — Patrick Durusau @ 10:32 am

Landsat Viewer by rcarmichael-esristaff.

From the post:

Landsat Viewer Demonstration

The lab has just completed an experimental viewer designed to sort, filter and extract individual Landsat scenes. The viewer is a web application developed using Esri‘s JavaScript API and a three.js-based external renderer.

 

Click here for the live application.

Click here for the source code.

 

The application has a wizard-like workflow. First, the user is prompted to sketch a bounding box representing the area of interest. The next step defines the imagery source and minimum selection criteria for the image scenes. For example, in the screenshot below the user is interested in any scene taken over the past 45+ years, but those scenes must have 10% or less cloud cover.
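As an aside, the selection step is easy to prototype outside the viewer. A minimal sketch, using hypothetical scene records and shapely for the bounding-box test:

```python
from shapely.geometry import box

# Hypothetical scene metadata: a footprint (lon/lat bounds) and a cloud-cover percentage per scene.
scenes = [
    {"id": "scene-2017-08-01", "bounds": (-112.6, 36.0, -110.4, 38.1), "cloud_cover": 4.2},
    {"id": "scene-2017-06-14", "bounds": (-112.6, 36.0, -110.4, 38.1), "cloud_cover": 37.0},
]

area_of_interest = box(-112.0, 36.5, -111.0, 37.5)  # the sketched bounding box
max_cloud = 10.0                                    # minimum selection criterion

selected = [
    s for s in scenes
    if box(*s["bounds"]).intersects(area_of_interest) and s["cloud_cover"] <= max_cloud
]
print([s["id"] for s in selected])
```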

 

Other Landsat resources:

Landsat homepage

Landsat FAQ

Landsat 7 Science Data Users Handbook

Landsat 8 Science Data Users Handbook

Enjoy!

I first saw this at: Landsat satellite imagery browser by Nathan Yau.

July 28, 2017

Open Source GPS Tracking System: Traccar (Super Glue + Burner Phone)

Filed under: Geographic Data,GPS — Patrick Durusau @ 10:33 am

Open Source GPS Tracking System: Traccar

From the post:

Traccar is an open source GPS tracking system for various GPS tracking devices. This Maven project is written in Java and works on most platforms with a Java Runtime Environment installed. The system supports more than 80 different communication protocols from popular vendors and includes a web interface to manage tracking devices online… Traccar is the best free and open source GPS tracking software, offering self-hosted, real-time online vehicle fleet management and personal tracking… Traccar supports more than 80 GPS communication protocols and more than 600 models of GPS tracking devices.

(image omitted)

To start using Traccar Server follow instructions below:

  • Download and install Traccar
  • Reboot system, Traccar will start automatically
  • Open web interface (http://localhost:8082)
  • Log in as administrator (user – admin, password – admin) or register a new user
  • Add new device with unique identifier (see section below)
  • Configure your device to use appropriate address and port (see section below)

With nearly omnipresent government surveillance of citizens, citizens should return the favor by surveilling government officers.

Super Glue plus a burner phone enables GPS tracking of government vehicles.
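A minimal sketch of the reporting side, assuming the phone (or a script running on it) speaks Traccar’s OsmAnd-style HTTP protocol; the port (5055) and parameter names below are my assumptions from the commonly documented defaults, so verify them against your server’s configuration:

```python
import time
import requests

TRACCAR = "http://your-traccar-server:5055"  # assumed OsmAnd-protocol endpoint; check your install
DEVICE_ID = "burner-001"                     # must match the unique identifier added in the web UI

def report_position(lat: float, lon: float, speed_knots: float = 0.0) -> None:
    """Send one position fix to the Traccar server."""
    params = {
        "id": DEVICE_ID,
        "lat": lat,
        "lon": lon,
        "timestamp": int(time.time()),
        "speed": speed_knots,
    }
    requests.get(TRACCAR, params=params, timeout=10).raise_for_status()

# Example: one fix; in practice the coordinates would come from the phone's GPS on a timer.
report_position(38.889, -77.035)
```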

For those with greater physical access, introducing a GPS device into vehicle wiring is also an option.

You may want to restrict access to Traccar as public access to GPS location data will alert targets to GPS tracking of their vehicles.

It’s a judgment call: at some point the value of accumulated tracking data for a specific purpose outweighs the loss of future tracking data that release entails.

What if you tracked all county police car locations for a year and patterns emerged from that data? What forums are best for summarized (read: aggregated) presentation of the data? When/where is it best to release the detailed data? How do you sign released data to verify future analysis is using the same data?
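For the last question, one low-tech answer is to publish a cryptographic digest (and, if possible, a detached signature) alongside the released file, so anyone can confirm they are analyzing the same bytes. A minimal sketch with Python’s standard library:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Publish this digest with the release; later analyses recompute it and compare.
print(sha256_of("county_patrol_tracks_2017.csv"))  # hypothetical release file
```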

Hard questions but better hard questions than no tracking data for government agents at all. 😉

February 21, 2017

ESA Affirms Open Access Policy For Images, Videos And Data

Filed under: Astroinformatics,Geographic Data,Open Access — Patrick Durusau @ 5:34 pm

ESA Affirms Open Access Policy For Images, Videos And Data

From the post:

ESA today announced it has adopted an Open Access policy for its content such as still images, videos and selected sets of data.

For more than two decades, ESA has been sharing vast amounts of information, imagery and data with scientists, industry, media and the public at large via digital platforms such as the web and social media. ESA’s evolving information management policy increases these opportunities.

In particular, a new Open Access policy for ESA’s information and data will now facilitate broadest use and reuse of the material for the general public, media, the educational sector, partners and anybody else seeking to utilise and build upon it.

“This evolution in opening access to ESA’s images, information and knowledge is an important element of our goal to inform, innovate, interact and inspire in the Space 4.0 landscape,” said Jan Woerner, ESA Director General.

“It logically follows the free and open data policies we have already established and accounts for the increasing interest of the general public, giving more insight to the taxpayers in the member states who fund the Agency.”

A website pointing to sets of content already available under Open Access, a set of Frequently Asked Questions and further background information can be found at http://open.esa.int.

More information on the ESA Digital Agenda for Space is available at http://www.esa.int/digital.

A great trove of images and data for exploration and development of data skills.

Launched on 1 March 2002 on an Ariane-5 rocket from Europe’s spaceport in French Guyana, Envisat was the largest Earth observation spacecraft ever built. The eight-tonne satellite orbited Earth more than 50 000 times over 10 years – twice its planned lifetime. The mission delivered thousands of images and a wealth of data used to study the workings of the Earth system, including insights into factors contributing to climate change. The end of the mission was declared on 9 May 2012, but ten years of Envisat’s archived data continues to be exploited for studying our planet.

With immediate effect, all 476 public Envisat MERIS or ASAR or AATSR images are released under the Creative Commons CC BY-SA 3.0 IGO licence, hence the credit for all images is: ESA, CC BY-SA 3.0 IGO. Follow this link.

The 476 images mentioned in the news release are images prepared over the years for public release.

For additional Envisat data under the Open Access license, see: EO data distributed by ESA.

I registered for an ESA Earth Observation Single User account, quite easy as registration forms go.

I’ll wander about for a bit and report back on the resources I find.

Enjoy!

PS: Not only should you use and credit the ESA as a data source, laudatory comments about the Open Access license may encourage others to do the same.

January 17, 2017

#DisruptJ20 – 3 inch resolution aerial imagery Washington, DC @J20protests

Filed under: Geographic Data,Image Understanding,MapBox,Mapping,Maps — Patrick Durusau @ 4:22 pm

3 inch imagery resolution for Washington, DC by Jacques Tardie.

From the post:

We updated our basemap in Washington, DC with aerial imagery at 3 inch (7.5 cm) resolution. The source data is openly licensed by DC.gov, thanks to the District’s open data initiative.

If you aren’t familiar with Mapbox, there is no time like the present!

If you are interested in just the 3 inch resolution aerial imagery, see: http://opendata.dc.gov/datasets?keyword=imagery.

Enjoy!

August 26, 2016

Restricted U.S. Army Geospatial Intelligence Handbook

Restricted U.S. Army Geospatial Intelligence Handbook

From the webpage:

This training circular provides GEOINT guidance for commanders, staffs, trainers, engineers, and military intelligence personnel at all echelons. It forms the foundation for GEOINT doctrine development. It also serves as a reference for personnel who are developing doctrine; tactics, techniques, and procedures; materiel and force structure; and institutional and unit training for intelligence operations.

1-1. Geospatial intelligence is the exploitation and analysis of imagery and geospatial information to describe, assess, and visually depict physical features and geographically referenced activities on the Earth. Geospatial intelligence consists of imagery, imagery intelligence, and geospatial information (10 USC 467).

Note. TC 2-22.7 further implements that GEOINT consists of any one or any combination of the following components: imagery, IMINT, or GI&S.

1-2. Imagery is the likeness or presentation of any natural or manmade feature or related object or activity, and the positional data acquired at the same time the likeness or representation was acquired, including: products produced by space-based national intelligence reconnaissance systems; and likenesses and presentations produced by satellites, aircraft platforms, unmanned aircraft vehicles, or other similar means (except that such term does not include handheld or clandestine photography taken by or on behalf of human intelligence collection organizations) (10 USC 467).

1-3. Imagery intelligence is the technical, geographic, and intelligence information derived through the interpretation or analysis of imagery and collateral materials (10 USC 467).

1-4. Geospatial information and services refers to information that identifies the geographic location and characteristics of natural or constructed features and boundaries on the Earth, including: statistical data and information derived from, among other things, remote sensing, mapping, and surveying technologies; and mapping, charting, geodetic data, and related products (10 USC 467).


You may not have the large fixed-wing assets described in this handbook, but the “value-added layers” are within your reach with open data.


In localized environments, your value-added layers may be more current and useful than those produced on longer time scales.

Topic maps can support geospatial collations of information alongside other views of the same data.

A great opportunity to understand how a modern military force understands and uses geospatial intelligence.

Not to mention testing your ability to recreate that geospatial intelligence without dedicated tools.

August 23, 2016

Spatial Module in OrientDB 2.2

Filed under: Geographic Data,Geography,Geospatial Data,GIS,Mapping,Maps,OrientDB — Patrick Durusau @ 2:51 pm

Spatial Module in OrientDB 2.2

From the post:

In versions prior to 2.2, OrientDB had minimal support for storing and retrieving GeoSpatial data. The support was limited to a pair of coordinates (latitude, longitude) stored as double in an OrientDB class, with the possibility to create a spatial index against those 2 coordinates in order to speed up a geo spatial query. So the support was limited to Point.
In OrientDB v.2.2 we created a brand new Spatial Module with support for different types of Geometry objects stored as embedded objects in a user defined class:

  • Point (OPoint)
  • Line (OLine)
  • Polygon (OPolygon)
  • MultiPoint (OMultiPoint)
  • MultiLine (OMultiline)
  • MultiPolygon (OMultiPolygon)
  • Geometry Collections

Along with those data types, the module extends OrientDB SQL with a subset of SQL-MM functions in order to support spatial data. The module only supports EPSG:4326 as Spatial Reference System. This blog post is an introduction to the OrientDB spatial Module, with some examples of its new capabilities. You can find the installation guide here.

Let’s start by loading some data into OrientDB. The dataset is about points of interest in Italy taken from here. Since the format is ShapeFile, we used QGIS to export the dataset to CSV (geometry as WKT) and imported the CSV into OrientDB with the ETL, into the class Points, with the geometry field typed as OPoint.
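The ShapeFile-to-CSV-with-WKT step doesn’t require QGIS. A sketch of the same conversion in Python, reading with geopandas and serializing geometries via shapely’s wkt attribute (filenames are placeholders):

```python
import geopandas as gpd

# Read the points-of-interest shapefile (placeholder filename).
gdf = gpd.read_file("points_of_interest.shp")

# Serialize each geometry as WKT, which the OrientDB ETL can map to an OPoint field.
gdf["wkt"] = gdf.geometry.apply(lambda geom: geom.wkt)

# Drop the live geometry column and write a plain CSV for the ETL to consume.
gdf.drop(columns="geometry").to_csv("points_of_interest.csv", index=False)
```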

The enhanced spatial functions for OrientDB 2.2 reminded me of this passage in “Silences and Secrecy: The Hidden Agenda of Cartography in Early Modern Europe:”

Some of the most clear-cut cases of an increasing state concern with control and restriction of map knowledge are associated with military or strategic considerations. In Europe in the sixteenth and seventeenth centuries hardly a year passed without some war being fought. Maps were an object of military intelligence; statesmen and princes collected maps to plan, or, later, to commemorate battles; military textbooks advocated the use of maps. Strategic reasons for keeping map knowledge a secret included the need for confidentiality about the offensive and defensive operations of state armies, the wish to disguise the thrust of external colonization, and the need to stifle opposition within domestic populations when developing administrative and judicial systems as well as the more obvious need to conceal detailed knowledge about fortifications. (reprinted in: The New Nature of Maps: Essays in the History of Cartography, by J.B. Harley: Paul Laxton, John Hopkins, 2001. page 89)

I say “reminded me”; better to say it increased my puzzling over the widespread access to geographic data that once upon a time had military value.

Is it the case that “ordinary maps,” maps of streets, restaurants, hotels, etc., aren’t normally imbued (merged?) with enough other information to make them “dangerous?”

If that’s true, the lack of commonly available “dangerous maps” is a disadvantage to emergency and security planners.

You can’t plan for the unknown.

Or to paraphrase Dilbert: “Ignorance is not a reliable planning guide.”

How would you cure the ignorance of “ordinary” maps?

PS: While hunting for the quote, I ran across The Power of Maps by Denis Wood with John Fels, which has been updated as Rethinking the Power of Maps by Denis Wood with John Fels and John Krygier. I am now re-reading the first edition and waiting for the updated version to arrive.

Neither book is a guide to making “dangerous” maps but may awaken in you a sense of the power of maps and map making.

January 3, 2016

Searching for Geolocated Posts On YouTube

Filed under: Geographic Data,Geography,Journalism,News,Reporting,Searching — Patrick Durusau @ 10:29 pm

Searching for Geolocated Posts On YouTube (video) by First Draft News.

Easily the most information-filled 1 minute and 18 seconds of the holiday season!

Illustrates searching for geolocated posts on YouTube, despite YouTube not offering that option!

New tool in development may help!

Visit: http://youtube.github.io/geo-search-tool/search.html

Both the video and site are worth a visit!

Don’t forget to check out First Draft News as well!

October 16, 2015

Planet Platform Beta & Open California:…

Planet Platform Beta & Open California: Our Data, Your Creativity by Will Marshall.

From the post:

At Planet Labs, we believe that broad coverage frequent imagery of the Earth can be a significant tool to address some of the world’s challenges. But this can only happen if we democratise access to it. Put another way, we have to make data easy to access, use, and buy. That’s why I recently announced at the United Nations that Planet Labs will provide imagery in support of projects to advance the Sustainable Development Goals.

Today I am proud to announce that we’re releasing a beta version of the Planet Platform, along with our imagery of the state of California under an open license.

The Planet Platform Beta will enable a pioneering cohort of developers, image analysts, researchers, and humanitarian organizations to get access to our data, web-based tools and APIs. The goal is to provide a “sandbox” for people to start developing and testing their apps on a stack of openly available imagery, with the goal of jump-starting a developer community; and collecting data feedback on Planet’s data, tools, and platform.

Our Open California release includes two years of archival imagery of the whole state of California from our RapidEye satellites and 2 months of data from the Dove satellite archive; and will include new data collected from both constellations on an ongoing basis, with a two-week delay. The data will be under an open license, specifically CC BY-SA 4.0. The spirit of the license is to encourage R&D and experimentation in an “open data” context. Practically, this means you can do anything you want, but you must “open” your work, just as we are opening ours. It will enable the community to discuss their experiments and applications openly, and thus, we hope, establish the early foundation of a new geospatial ecosystem.

California is our first Open Region, but shall not be the last. We will open more of our data in the future. This initial release will inform how we deliver our data set to a global community of customers.

Resolution for the Dove satellites is 3-5 meters; for the RapidEye satellites, 5 meters.

Not quite goldfish bowl or Venice Beach resolution but useful for other purposes.

Now would be a good time to become familiar with managing and annotating satellite imagery. Higher resolutions, public and private, are only a matter of time.

July 13, 2015

Visualising Geophylogenies in Web Maps Using GeoJSON

Filed under: Geographic Data,GeoJSON — Patrick Durusau @ 6:54 pm

Visualising Geophylogenies in Web Maps Using GeoJSON by Roderic Page.

Abstract:

This article describes a simple tool to display geophylogenies on web maps including Google Maps and OpenStreetMap. The tool reads a NEXUS format file that includes geographic information, and outputs a GeoJSON format file that can be displayed in a web map application.

From the introduction (with footnotes omitted):

The increasing number of georeferenced sequences in GenBank [ftnt omitted] and the growth of DNA barcoding [ftnt omitted] means that the raw material to create geophylogenies [ftnt omitted] is readily available. However, constructing visualisations of phylogenies and geography together can be tedious. Several early efforts at visualising geophylogenies focussed on using existing GIS software [ftnt omitted], or tools such as Google Earth [ftnt omitted]. While the 3D visualisations enabled by Google Earth are engaging, it’s not clear that they are easy to interpret. Another tool, GenGIS [ftnt omitted], supports 2D visualisations where the phylogeny is drawn flat on the map, avoiding some of the problems of Google Earth visualisations. However, like Google Earth, GenGIS requires the user to download and install additional software on their computer.

By comparison, web maps such as Google Maps [ftnt omitted] are becoming ubiquitous and work in most modern web browsers. They support displaying user-supplied data, including geometrical information encoded in formats such as GeoJSON, making them a light weight alternative to 3D geophylogeny viewers. This paper describes a tool that makes use of the GeoJSON format and the capabilities of web maps to create quick and simple visualisations of geophylogenies.

Whether you are interested in geophylogenies or in the use of GeoJSON, this is a post for you.

Enjoy!

April 24, 2015

Animation of Gerrymandering?

Filed under: Geographic Data,Geospatial Data,Government,Mapping,Maps — Patrick Durusau @ 1:45 pm

United States Congressional District Shapefiles by Jeffrey B. Lewis, Brandon DeVine, and Lincoln Pritcher with Kenneth C. Martis.

From the description:

This site provides digital boundary definitions for every U.S. Congressional District in use between 1789 and 2012. These were produced as part of NSF grant SBE-SES-0241647 between 2009 and 2013.

The current release of these data is experimental. We have done a good deal of work to validate all of the shapes. However, it is quite likely that some irregularities remain. Please email jblewis@ucla.edu with questions or suggestions for improvement. We hope to have a ticketing system for bugs and a versioning system up soon. The district definitions currently available should be considered an initial-release version.

Many districts were formed by aggregating complete county shapes obtained from the National Historical Geographic Information System (NHGIS) project and the Newberry Library’s Atlas of Historical County Boundaries. Where Congressional district boundaries did not coincide with county boundaries, district shapes were constructed district-by-district using a wide variety of legal and cartographic resources. Detailed descriptions of how particular districts were constructed and the authorities upon which we relied are available (at the moment) by request and described below.

Every state districting plan can be viewed quickly at https://github.com/JeffreyBLewis/congressional-district-boundaries (clicking on any of the listed file names will create a map window that can be panned and zoomed). GeoJSON definitions of the districts can also be downloaded from the same URL. Congress-by-Congress district maps in ESRI Shapefile format can be downloaded below. Though providing somewhat lower resolution than the shapefiles, the GeoJSON files contain additional information about the members who served in each district that the shapefiles do not (Congress member information may be useful for creating web applications with, for example, Google Maps or Leaflet).

Project Team

The Principal Investigator on the project was Jeffrey B. Lewis. Brandon DeVine and Lincoln Pitcher researched district definitions and produced thousands of digital district boundaries. The project relied heavily on Kenneth C. Martis’ The Historical Atlas of United States Congressional Districts: 1789-1983. (New York: The Free Press, 1982). Martis also provided guidance, advice, and source materials used in the project.

How to cite

Jeffrey B. Lewis, Brandon DeVine, Lincoln Pitcher, and Kenneth C. Martis. (2013) Digital Boundary Definitions of United States Congressional Districts, 1789-2012. [Data file and code book]. Retrieved from http://cdmaps.polisci.ucla.edu on [date of download].

An impressive resource for anyone interested in the history of United States Congressional Districts and their development. An animation of gerrymandering of congressional districts was the first use case that jumped to mind. 😉
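A first cut at such an animation, assuming you have downloaded the per-Congress GeoJSON files locally; the filenames and attribute names are assumptions, so check them against the actual downloads:

```python
import geopandas as gpd
import matplotlib.pyplot as plt

# Hypothetical local copies of the per-Congress district definitions.
for n in range(103, 113):  # 103rd through 112th Congresses
    districts = gpd.read_file(f"districts_{n}.geojson")
    state = districts[districts["statename"] == "North Carolina"]  # attribute name assumed
    ax = state.plot(column="district", cmap="tab20", edgecolor="black", figsize=(8, 6))
    ax.set_title(f"Congressional districts, Congress {n}")
    ax.set_axis_off()
    plt.savefig(f"frame_{n:03d}.png", dpi=150)
    plt.close()

# Stitch the frame_*.png files into a GIF or video (ImageMagick, ffmpeg) to watch districts shift.
```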

Enjoy!

I first saw this in a tweet by Larry Mullen.

April 21, 2015

Imagery Processing Pipeline Launches!

Filed under: Geographic Data,Geography,Geophysical,Image Processing,Maps — Patrick Durusau @ 7:37 pm

Imagery Processing Pipeline Launches!

From the post:

Our imagery processing pipeline is live! You can search the Landsat 8 imagery catalog, filter by date and cloud coverage, then select any image. The image is instantly processed, assembling bands and correcting colors, and loaded into our API. Within minutes you will have an email with a link to the API end point that can be loaded into any web or mobile application.

Our goal is to make it fast for anyone to find imagery for a news story after a disaster, easy for any planner to get the most recent view of their city, and for any developer to pull in thousands of square KM of processed imagery for their precision agriculture app. All directly using our API.

There are two ways to get started: via the imagery browser fetch.astrodigital.com, or directly via the Search and Publish APIs. All API documentation is on astrodigital.com/api. You can either use the API to programmatically pull imagery through the pipeline or build your own UI on top of the API, just like we did.

The API provides direct access to more than 300TB of satellite imagery from Landsat 8. Early next year we’ll make our own imagery available once our own Landmapper constellation is fully commissioned.

Hit us up @astrodigitalgeo or sign up at astrodigital.com to follow as we build. Huge thanks to our partners at Development Seed, who are leading our development, and for the infinitely scalable API from Mapbox.

If you are interested in Earth images, you really need to check this out!

I haven’t tried the API but did get a link to an image of my city and surrounding area.

Definitely worth a long look!

February 7, 2015

Geojournalism.org

Filed under: Geographic Data,Geography,Geospatial Data,Journalism,Mapping,Maps — Patrick Durusau @ 3:05 pm

Geojournalism.org

From the webpage:

Geojournalism.org provides online resources and training for journalists, designers and developers to dive into the world of data visualization using geographic data.

From the about page:

Geojournalism.org is made for:

Journalists

Reporters, editors and other professionals involved in the noble mission of producing relevant news for their audiences can use Geojournalism.org to produce multimedia stories or simple maps and data visualizations to help create context for complex environmental issues.

Developers

Programmers and geeks using a wide variety of languages and tools can draw on the vast knowledge of our contributors. Some of our tutorials explore open source libraries to make maps, infographics or simply deal with large geographical datasets.

Designers

Graphic designers and experts on data visualization find in the Geojournalism.org platform a large amount of resources and tips. They can, for example, improve their knowledge of the right options for coloring maps or how to set up simple charts to depict issues such as deforestation and climate change.

It is one thing to have an idea or even a story and quite another to communicate it effectively to a large audience. Geojournalism is designed as a community site that will help you communicate geophysical data to a non-technical audience.

I think it is clear that most governments are shy about accurate and timely communication with their citizens. Are you going to be one of those who fills in the gaps? Geojournalism.org is definitely a site you will be needing.

November 20, 2014

Geospatial Data in Python

Filed under: Geographic Data,Geospatial Data,Python — Patrick Durusau @ 2:31 pm

Geospatial Data in Python by Carson Farmer.

Materials for the tutorial: Geospatial Data in Python: Database, Desktop, and the Web by Carson Farmer (Associate Director of CARSI lab).

Important skills if you are concerned about projects such as the Keystone XL Pipeline:

(map: Keystone XL pipeline route)

This is an instance where having the skills to combine geospatial, archaeological, and other data will empower local communities to minimize the damage they will suffer from this project.

Having a background in processing geophysical data is the first step in that process.
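If you want a taste before working through the tutorial, here is a minimal sketch of the kind of operation it covers: read two layers, put them in a common projection, and find where they interact. The filenames are placeholders:

```python
import geopandas as gpd

# Placeholder inputs: a proposed pipeline route and known archaeological sites.
route = gpd.read_file("pipeline_route.shp")
sites = gpd.read_file("archaeological_sites.shp")

# Work in a common projected CRS so buffering in meters makes sense.
route = route.to_crs(epsg=3857)
sites = sites.to_crs(epsg=3857)

# Sites within 5 km of the route: buffer the route, then spatially join.
corridor = gpd.GeoDataFrame(geometry=route.buffer(5000), crs=route.crs)
at_risk = gpd.sjoin(sites, corridor, predicate="intersects")
print(len(at_risk), "sites fall inside the 5 km corridor")
```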

August 22, 2014

Getty Thesaurus of Geographic Names (TGN)

Filed under: Geographic Data,Geography,Thesaurus — Patrick Durusau @ 3:07 pm

Getty Thesaurus of Geographic Names Released as Linked Open Data by James Cuno.

From the post:

We’re delighted to announce that the Getty Research Institute has released the Getty Thesaurus of Geographic Names (TGN)® as Linked Open Data. This represents an important step in the Getty’s ongoing work to make our knowledge resources freely available to all.

Following the release of the Art & Architecture Thesaurus (AAT)® in February, TGN is now the second of the four Getty vocabularies to be made entirely free to download, share, and modify. Both data sets are available for download at vocab.getty.edu under an Open Data Commons Attribution License (ODC BY 1.0).

What Is TGN?

The Getty Thesaurus of Geographic Names is a resource of over 2,000,000 names of current and historical places, including cities, archaeological sites, nations, and physical features. It focuses mainly on places relevant to art, architecture, archaeology, art conservation, and related fields.

TGN is powerful for humanities research because of its linkages to the three other Getty vocabularies—the Union List of Artist Names, the Art & Architecture Thesaurus, and the Cultural Objects Name Authority. Together the vocabularies provide a suite of research resources covering a vast range of places, makers, objects, and artistic concepts. The work of three decades, the Getty vocabularies are living resources that continue to grow and improve.

Because they serve as standard references for cataloguing, the Getty vocabularies are also the conduits through which data published by museums, archives, libraries, and other cultural institutions can find and connect to each other.

A resource where you could lose some serious time!

Try this entry for London.

Or Paris.

Bear in mind the data that underlies this rich display is now available for free downloading.

June 16, 2014

Digital Mapping + Geospatial Humanities

Filed under: Geographic Data,GIS,Humanities,Mapping,Maps — Patrick Durusau @ 3:59 pm

Digital Mapping + Geospatial Humanities by Fred Gibbs.

From the course description:

We are in the midst of a major paradigm shift in human consciousness and society caused by our ubiquitous connectedness via the internet and smartphones. These globalizing forces have telescoped space and time to an unprecedented degree, while paradoxically heightening the importance of local places.

The course explores the technologies, tools, and workflows that can help collect, connect, and present online interpretations of the spaces around us. Throughout the week, we’ll discuss the theoretical and practical challenges of deep mapping (producing rich, interactive maps with multiple layers of information). Woven into our discussions will be numerous technical tutorials that will allow us to tell map-based stories about Albuquerque’s fascinating past.


This course combines cartography, geography, GIS, history, sociology, ethnography, computer science, and graphic design. While we cover some of the basics of each of these, the course eschews developing deep expertise in any of these in favor of exploring their intersections with each other, and formulating critical questions that span these normally disconnected disciplines. By the end, you should be able to think more critically about maps, place, and our online experiences with them.


We’ll move from creating simple maps with Google Maps/Earth to creating your own custom, interactive online maps with various open source tools like QGIS, Open Street Map, and D3 that leverage the power of open data from local and national repositories to provide new perspectives on the built environment. We’ll also use various mobile apps for data collection, online exhibit software, (physical and digital) historical archives at the Center for Southwest Research. Along the way we’ll cover the various data formats (KML, XML, GeoJSON, TopoJSON) used by different tools and how to move between them, allowing you to craft the most efficient workflow for your mapping purposes.

Course readings that aren’t freely available online (and even some that are) can be accessed via the course Zotero Library. You’ll need to be invited to join the group since we use it to distribute course readings. If you are not familiar with Zotero, here are some instructions.

All of that in a week! This week as a matter of fact.

One of the things I miss about academia are the occasions when you can concentrate on one subject to the exclusion of all else. Of course, being unmarried at that age, unemployed, etc. may have contributed to the ability to focus. 😉

Just sampled some of the readings and this appears to be a really rocking course!

May 11, 2014

Twitter User Targeting Data

Filed under: Geographic Data,Geography,Georeferencing,Tweets — Patrick Durusau @ 2:59 pm

Geotagging One Hundred Million Twitter Accounts with Total Variation Minimization by Ryan Compton, David Jurgens, and, David Allen.

Abstract:

Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data.

Our method infers an unknown user’s location by examining their friends’ locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user’s ego network can be used as a per-user accuracy measure, allowing us to discard poor location inferences and control the overall error of our approach.

Leave-many-out evaluation shows that our method is able to infer location for 101,846,236 Twitter users at a median error of 6.33 km, allowing us to geotag roughly 89% of public tweets.

If 6.33 km sounds like a lot of error, check out NUKEMAP by Alex Wellerstein.
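The paper’s total-variation solver is more than a blog post can show, but the core intuition, inferring a user’s location from their friends’ locations, can be sketched with a much cruder stand-in: a coordinate-wise median over friends with known locations. This is an illustration, not the authors’ algorithm:

```python
from statistics import median

# known[user] = (lat, lon) for users who share a location; friends[user] = list of friend ids.
known = {"a": (40.7, -74.0), "b": (40.8, -73.9), "c": (34.1, -118.2)}
friends = {"x": ["a", "b", "c"], "y": ["c"]}

def guess_location(user):
    """Crude stand-in for the paper's method: median of located friends."""
    coords = [known[f] for f in friends.get(user, []) if f in known]
    if not coords:
        return None
    lats, lons = zip(*coords)
    return median(lats), median(lons)

print(guess_location("x"))  # roughly New York, despite one far-away friend
```

The paper goes further, using the geographic dispersion of each ego network to decide when a guess is trustworthy enough to keep.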

April 5, 2014

GeoCanvas

Filed under: Geographic Data,Geography,Maps,Visualization — Patrick Durusau @ 7:34 pm

Synthicity Releases 3D Spatial Data Visualization Tool, GeoCanvas by Dean Meyers.

From the post:

Synthicity has released a free public beta version of GeoCanvas, its 3D spatial data visualization tool. The software provides a streamlined toolset for exploring geographic data, lowering the barrier to learning and using geographic information systems.

GeoCanvas is not limited to visualizing parcels in cities. By supporting data formats such as the widely available shapefile for spatial geometry and text files for attribute data, it opens the possibility of rapid 3D spatial data visualization for a wide range of uses and users. The software is expected to be a great addition to the toolkits of students, researchers, and practitioners in fields as diverse as data science, geography, planning, real estate analysis, and market research. A set of video tutorials explaining the basic concepts and a range of examples have been made available to showcase the possibilities.

The public beta version of GeoCanvas is available as a free download from www.synthicity.com.

Well, rats! I haven’t installed a VM with Windows 7/8 or Mac OS X 10.8 or later.

Sounds great!

Comments from actual experience?

February 12, 2014

…Open GIS Mapping Data To The Public

Filed under: Geographic Data,GIS,Maps,Open Data — Patrick Durusau @ 9:13 pm

Esri Allows Federal Agencies To Open GIS Mapping Data To The Public by Alexander Howard.

From the post:

A debate in the technology world that’s been simmering for years, about whether mapping vendor Esri will allow public geographic information systems (GIS) to access government customers’ data, finally has an answer: The mapping software giant will take an unprecedented step, enabling thousands of government customers around the U.S. to make their data on the ArcGIS platform open to the public with a click of a mouse.

“Everyone starting to deploy ArcGIS can now deploy an open data site,” Andrew Turner, chief technology officer of Esri’s Research and Development Center in D.C., said in an interview. “We’re in a unique position here. Users can just turn it on the day it becomes public.”

Government agencies can use the new feature to turn geospatial information systems data in Esri’s format into migratable, discoverable, and accessible open formats, including CSVs, KML and GeoJSON. Esri will demonstrate the new ArcGIS feature at the Federal Users Conference in Washington, D.C. According to Turner, the new feature will go live in March 2014.

I’m not convinced that GIS data alone is going to make government more transparent but it is a giant step in the right direction.

To have even partial transparency in government, you would need not only GIS data but also that data correlated with property sales and purchases going back decades, the legal ownership of property traced past shell corporations and holding companies, and the social, political and professional relationships of those who benefited from various decisions. For a start.

Still, the public may be a better starting place to demand transparency with this type of data.

February 11, 2014

Neo4j Spatial Part 2

Filed under: Geographic Data,Georeferencing,Graphs,Neo4j — Patrick Durusau @ 2:27 pm

Neo4j Spatial Part 2 by Max De Marzi.

Max finishes up part 1 with sample spatial data for restaurants and deploys his proof of concept using GrapheneDB on Heroku.

Restaurants are typical cellphone app fare but if I were in Kiev, I’d want an app with geo-locations of ingredients for a proper Molotov cocktail.

A jar filled with gasoline and a burning rag is nearly as dangerous to the thrower as to the target.

Of course, substitutions for ingredients, in what quantities, in different languages, could be added features of such an app.

Data management is a weapon within the reach of all sides.

January 21, 2014

Geospatial (distance) faceting…

Filed under: Facets,Geographic Data,Georeferencing,Lucene — Patrick Durusau @ 7:32 pm

Geospatial (distance) faceting using Lucene’s dynamic range facets by Mike McCandless.

From the post:

There have been several recent, quiet improvements to Lucene that, taken together, have made it surprisingly simple to add geospatial distance faceting to any Lucene search application, for example:

  < 1 km (147)
  < 2 km (579)
  < 5 km (2775)

Such distance facets, which allow the user to quickly filter their search results to those that are close to their location, have become especially important lately since most searches are now from mobile smartphones.

In the past, this has been challenging to implement because it’s so dynamic and so costly: the facet counts depend on each user’s location, and so cannot be cached and shared across users, and the underlying math for spatial distance is complex.

But several recent Lucene improvements now make this surprisingly simple!

As always, Mike is right on the edge, so wait for Lucene 4.7 to try his code out or download the current source.
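The bookkeeping behind such facets is easy to see outside Lucene. A sketch of distance-range counting in plain Python (haversine distance, hypothetical hit coordinates), not Mike’s Lucene implementation:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

user = (51.5074, -0.1278)                                      # the searcher's location
hits = [(51.510, -0.128), (51.520, -0.128), (51.540, -0.128)]  # hypothetical result coordinates
ranges = [1.0, 2.0, 5.0]                                       # the "< 1 km", "< 2 km", "< 5 km" facets

counts = {r: 0 for r in ranges}
for lat, lon in hits:
    d = haversine_km(user[0], user[1], lat, lon)
    for r in ranges:
        if d < r:
            counts[r] += 1
print(counts)  # {1.0: 1, 2.0: 2, 5.0: 3}: cumulative, like the facet labels above
```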

Distance might not be the only consideration. What if you wanted the shortest distance that did not intercept a known patrol? Or a known patrol within some window of variation?

Distance is still going to be a factor, but the search required may be more complex than just distance.

December 8, 2013

Mapping the open web using GeoJSON

Filed under: Geo Analytics,Geographic Data,Geographic Information Retrieval,JSON,NSA — Patrick Durusau @ 5:59 pm

Mapping the open web using GeoJSON by Sean Gillies.

From the post:

GeoJSON is an open format for encoding information about geographic features using JSON. It has much in common with older GIS formats, but also a few new twists: GeoJSON is a text format, has a flexible schema, and is specified in a single HTML page. The specification is informed by standards such as OGC Simple Features and Web Feature Service and streamlines them to suit the way web developers actually build software today.

Promoted by GitHub and used in the Twitter API, GeoJSON has become a big deal in the open web. We are huge fans of the little format that could. GeoJSON suits the web and suits us very well; it plays a major part in our libraries, services, and products.

A short but useful review of why GeoJSON is important to MapBox and why it should be important to you.
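To make the format concrete, here is a minimal FeatureCollection built and written with nothing but the standard library; note that GeoJSON coordinates are longitude first, then latitude:

```python
import json

feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [-0.1278, 51.5074]},  # lon, lat
            "properties": {"name": "London"},
        },
        {
            "type": "Feature",
            "geometry": {
                "type": "LineString",
                "coordinates": [[-0.1278, 51.5074], [2.3522, 48.8566]],
            },
            "properties": {"name": "London to Paris"},
        },
    ],
}

with open("places.geojson", "w") as f:
    json.dump(feature_collection, f, indent=2)
# Commit the .geojson file to a GitHub repository and it renders as an interactive map.
```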

A must read if you are interested in geo-locating data of interest to your users to maps.

Sean mentions that GitHub promotes GeoJSON, but I’m curious whether the NSA uses/promotes it as well. 😉

October 10, 2013

Geocode the world…

Filed under: Geographic Data,Geography,GeoNames,Maps — Patrick Durusau @ 3:29 pm

Geocode the world with the new Data Science Toolkit by Pete Warden.

From the post:

I’ve published a new version of the Data Science Toolkit, which includes David Blackman’s awesome TwoFishes city-level geocoder. Largely based on data from the Geonames project, the biggest improvement is that the Google-style geocoder now handles millions of places around the world in hundreds of languages:

Who or what do you want to locate? 😉
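If you would rather not stand up the toolkit’s own server just to try the idea, the same city-level lookup can be sketched with the geopy library against the public Nominatim service (mind its usage policy). This is a stand-in, not the Data Science Toolkit API:

```python
from geopy.geocoders import Nominatim

# Identify your application, as the Nominatim usage policy requires.
geocoder = Nominatim(user_agent="another-word-for-it-demo")

for place in ["London", "München", "東京"]:
    location = geocoder.geocode(place)
    if location:
        print(place, "->", round(location.latitude, 4), round(location.longitude, 4))
```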

August 16, 2013

ST_Geometry Aggregate Functions for Hive…

Filed under: Geographic Data,Geographic Information Retrieval,Hadoop,Hive — Patrick Durusau @ 4:00 pm

ST_Geometry Aggregate Functions for Hive in Spatial Framework for Hadoop by Jonathan Murphy.

From the post:

We are pleased to announce that the ST_Geometry aggregate functions are now available for Hive, in the Spatial Framework for Hadoop. The aggregate functions can be used to perform a convex-hull, intersection, or union operation on geometries from multiple records of a dataset.

While the non-aggregate ST_ConvexHull function returns the convex hull of the geometries passed in a single function call, the ST_Aggr_ConvexHull function accumulates the geometries from the rows selected by a query, and performs a convex hull operation over those geometries. Likewise, ST_Aggr_Intersection and ST_Aggr_Union aggregate the geometries from multiple selected rows, to perform intersection and union operations, respectively.

The example given covers earthquake data and California-county data.
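The aggregate-versus-per-row distinction is easy to see in plain Python with shapely; a conceptual parallel, not the Hive UDFs themselves:

```python
from shapely.geometry import MultiPoint, Point
from shapely.ops import unary_union

# One geometry per "row": hypothetical earthquake epicenters.
rows = [Point(-122.4, 37.8), Point(-121.9, 37.3), Point(-122.7, 38.4)]

# Non-aggregate view: each row's geometry is treated on its own.
per_row = [p.buffer(0.1) for p in rows]

# Aggregate view: accumulate across rows, then operate once on the group.
combined_hull = MultiPoint([(p.x, p.y) for p in rows]).convex_hull  # analogue of ST_Aggr_ConvexHull
combined_union = unary_union(per_row)                               # analogue of ST_Aggr_Union

print(combined_hull.wkt)
print(round(combined_union.area, 4))
```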

I have a weakness for aggregating functions as you know. 😉

The other point these aggregate functions illustrate is that sometimes you want subjects to be treated as independent of each other and sometimes you want to treat them as a group.

Depends upon your requirements.

There really isn’t a one size fits all granularity of subject identity for all situations.

