Archive for the ‘Geographic Data’ Category

Global Forest Change 2000–2015

Thursday, September 28th, 2017

Global Forest Change 2000–2015

From the webpage:

Results from time-series analysis of Landsat images in characterizing global forest extent and change from 2000 through 2015. For additional information about these results, please see the associated journal article (Hansen et al., Science 2013).

Web-based visualizations of these results are also available at our main site:

http://earthenginepartners.appspot.com/science-2013-global-forest

Please use that URL when linking to this dataset.

We anticipate releasing updated versions of this dataset. To keep up to date with the latest updates, and to help us better understand how these data are used, please register as a user. Thanks!

User Notes for Version 1.3 Update

Some examples of improved change detection in the 2011–2015 update include the following:

  1. Improved detection of boreal forest loss due to fire.
  2. Improved detection of smallholder rotation agricultural clearing in dry and humid tropical forests.
  3. Improved detection of selective logging.
  4. Improved detection of the clearing of short cycle plantations in sub-tropical and tropical ecozones.
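The dataset is distributed as 10×10 degree granules named by the latitude/longitude of their top-left corner. A minimal sketch of resolving which granule covers a point of interest; the exact label and file-name patterns here are assumptions, so verify them against the download page:

```python
import math

def tile_label(lat, lon):
    """Label of the 10x10-degree granule (named by its top-left corner) covering a point."""
    top = math.ceil(lat / 10.0) * 10    # top edge of the tile
    left = math.floor(lon / 10.0) * 10  # left edge of the tile
    ns = "N" if top >= 0 else "S"
    ew = "E" if left >= 0 else "W"
    return f"{abs(top):02d}{ns}_{abs(left):03d}{ew}"

# Hypothetical per-layer file name; check the real pattern on the download page.
def granule_name(layer, lat, lon):
    return f"Hansen_GFC2015_{layer}_{tile_label(lat, lon)}.tif"

# e.g. granule_name("lossyear", 48.2, 16.3) -> a tile covering Vienna
```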

Detecting deforestation is the first step in walking up the chain of responsibility for this global scourge, one hopes with consequences at every level.

USGS Global Land Cover Characteristics Data Base Version 2.0

Thursday, September 28th, 2017

Global Land Cover Characteristics Data Base Version 2.0

From the introduction:

The U.S. Geological Survey’s (USGS) National Center for Earth Resources Observation and Science (EROS), the University of Nebraska-Lincoln (UNL) and the Joint Research Centre of the European Commission have generated a 1-km resolution global land cover characteristics data base for use in a wide range of environmental research and modeling applications (Loveland and others, 2000). The land cover characterization effort is part of the National Aeronautics and Space Administration (NASA) Earth Observing System Pathfinder Program and the International Geosphere-Biosphere Programme-Data and Information System focus 1 activity. Funding for the project is provided by the USGS, NASA, U.S. Environmental Protection Agency, National Oceanic and Atmospheric Administration, U.S. Forest Service, and the United Nations Environment Programme.

The data set is derived from 1-km Advanced Very High Resolution Radiometer (AVHRR) data spanning a 12-month period (April 1992-March 1993) and is based on a flexible data base structure and seasonal land cover regions concepts. Seasonal land cover regions provide a framework for presenting the temporal and spatial patterns of vegetation in the database. The regions are composed of relatively homogeneous land cover associations (for example, similar floristic and physiognomic characteristics) which exhibit distinctive phenology (that is, onset, peak, and seasonal duration of greenness), and have common levels of primary production.

Rather than being based on precisely defined mapping units in a predefined land cover classification scheme, the seasonal land cover regions serve as summary units for both descriptive and quantitative attributes. The attributes may be considered as spreadsheets of region characteristics and permit updating, calculating, or transforming the entries into new parameters or classes. This provides the flexibility for using the land cover characteristics data base in a variety of models without extensive modification of model inputs.

The analytical strategy for global land cover characterization has evolved from methods initially tested during the development of a prototype 1-km land cover characteristics data base for the conterminous United States (Loveland and others, 1991, 1995; Brown and others, 1993). In the U.S. study, multitemporal AVHRR data, combined with other ancillary data sets, were used to produce a prototype land cover characteristics data base.

An older data set (April 1992–March 1993) at 1-km resolution, but still useful as training data, as a historical baseline, and for other planning uses you can imagine.

Enjoy!

FAO GeoNETWORK

Thursday, September 28th, 2017

FAO GeoNETWORK

From the about page:

The FAO GeoNetwork provides Internet access to interactive maps, satellite imagery and related spatial databases maintained by FAO and its partners.

Its purpose is to improve access to, and integrated use of, spatial data and information.

Through this website FAO facilitates multidisciplinary approaches to sustainable development and supports decision making in agriculture, forestry, fisheries and food security.

Maps, including those derived from satellite imagery, are effective communicational tools and play an important role in the work of various types of users:

  • Decision Makers: e.g. sustainable development planners and humanitarian and emergency managers in need of quick, reliable, up-to-date and user-friendly cartographic products as a basis for action, and to better plan and monitor their activities.
  • GIS Experts in need of exchanging consistent and updated geographical data.
  • Spatial Analysts in need of multidisciplinary data to perform preliminary geographical analysis and reliable forecasts to better set up appropriate interventions in vulnerable areas.

The FAO GeoNetwork allows users to easily share spatial data among different FAO Units, other UN agencies, NGOs and other institutions.

The FAO GeoNetwork site is powered by GeoNetwork opensource.

FAO and WFP, UNEP and more recently OCHA, have combined their research and mapping expertise to develop GeoNetwork opensource as a common strategy to effectively share their spatial databases including digital maps, satellite images and related statistics. The three agencies make extensive use of computer-based data visualization tools, known as Geographic Information System (GIS) and Remote Sensing (RS) software, mostly to create maps that combine various layers of information. GeoNetwork opensource provides them with the capacity to access a wide selection of maps and other spatial information stored in different databases around the world through a single entry point.

GeoNetwork opensource has been developed to connect spatial information communities and their data using a modern architecture, which is at the same time powerful and low cost, based on the principles of Free and Open Source Software (FOSS) and International and Open Standards for services and protocols (among others, from ISO/TC211 and OGC).

For more information contact us at GeoNetwork@fao.org.

Apologies for the acronym-heavy writing. Hard to say if it is meant as shorthand, as in scientific writing, or to make ordinary writing opaque.

FAO – Food and Agriculture Organization of the United Nations

OCHA – United Nations Office for the Coordination of Humanitarian Affairs

OGC – Open Geospatial Consortium

UNEP – UN Environment

WFP – World Food Programme

Extremely rich collection of resources, not to mention opensource software for its use.

A site to bookmark in hopes your dreams of regime change evolve beyond spray paint and random acts of violence.

The CIA advises on such matters but their loyalty and motivations are highly suspect. Not to mention being subject to the whim and caprice of American politics.

Trust is OK, but independent analysis and verification are much better.

Global Land Survey (GLS) [Weaponizing Data]

Tuesday, September 26th, 2017

Global Land Survey (GLS) is part of a collection I discovered at: 12 Sources to Download FREE Land Cover and Land Use Data. To use that collection you have to wade through pages of ads.

I am covering the sources separately and including their original descriptions.

From the GLS webpage:

The U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA) collaborated from 2009 to 2011 to create the Global Land Surveys (GLS) datasets. Each of these collections was created using the primary Landsat sensor in use at the time for each collection epoch. The scenes used were a pre-collection format that met strict quality and cloud cover standards at the time the GLS files were created.

Additional details about the Global Land Survey collection can be found at http://landsat.usgs.gov/global-land-surveys-gls.

The Global Land Survey collection consists of images acquired from 1972 to 2012 combined into one dataset.

All Global Land Survey datasets contain the standard Landsat bands designated for each sensor. Band Designations can be found at http://landsat.usgs.gov/what-are-band-designations-landsat-satellites.

[data notes]

Global Land Survey data are available to search and download through EarthExplorer and GloVis. The collection can be found under the Global Land Survey category in EarthExplorer.

Users can download the full resolution LandsatLook jpg images http://landsat.usgs.gov/landsatlook-images, and the Level 1 Data Products http://landsat.usgs.gov/landsat-data-access.

Fifteen meter resolution in the panchromatic band. Nearly as accurate as someone stepping across a compound to establish target coordinates.

Which do you find more amazing: 1) Free access to data to weaponize or, 2) Lack of use of data as a weapon by NGOs?

Landsat Viewer

Friday, September 15th, 2017

Landsat Viewer by rcarmichael-esristaff.

From the post:

Landsat Viewer Demonstration

The lab has just completed an experimental viewer designed to sort, filter and extract individual Landsat scenes. The viewer is a web application developed using Esri’s JavaScript API and a three.js-based external renderer.

 

Click here for the live application.

Click here for the source code.

 

The application has a wizard-like workflow. First, the user is prompted to sketch a bounding box representing the area of interest. The next step defines the imagery source and minimum selection criteria for the image scenes. For example, in the screenshot below the user is interested in any scene taken over the past 45+ years, but those scenes must have 10% or less cloud cover.
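The selection step described above amounts to a date-window and cloud-cover filter over scene metadata. A minimal sketch; the scene records and field names below are illustrative assumptions, not Esri's actual metadata schema:

```python
from datetime import date

# Illustrative scene records; the real viewer reads Landsat catalog metadata.
scenes = [
    {"id": "LC08_0420341", "acquired": date(2017, 6, 1), "cloud_cover": 4.0},
    {"id": "LT05_0420342", "acquired": date(1989, 7, 15), "cloud_cover": 35.0},
    {"id": "LM01_0420343", "acquired": date(1973, 3, 2), "cloud_cover": 8.5},
]

def select_scenes(scenes, start, end, max_cloud):
    """Keep scenes inside the date window with cloud cover at or below the cutoff."""
    return [s for s in scenes
            if start <= s["acquired"] <= end and s["cloud_cover"] <= max_cloud]

# "Any scene over the past 45+ years, 10% or less cloud cover":
picked = select_scenes(scenes, date(1972, 1, 1), date(2017, 12, 31), 10.0)
```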

 

Other Landsat resources:

Landsat homepage

Landsat FAQ

Landsat 7 Science Data Users Handbook

Landsat 8 Science Data Users Handbook

Enjoy!

I first saw this at: Landsat satellite imagery browser by Nathan Yau.

Open Source GPS Tracking System: Traccar (Super Glue + Burner Phone)

Friday, July 28th, 2017

Open Source GPS Tracking System: Traccar

From the post:

Traccar is an open source GPS tracking system for various GPS tracking devices. This Maven Project is written in Java and works on most platforms with installed Java Runtime Environment. System supports more than 80 different communication protocols from popular vendors. It includes web interface to manage tracking devices online… Traccar is the best free and open source GPS tracking system software offers self hosting real time online vehicle fleet management and personal tracking… Traccar supports more than 80 GPS communication protocols and more than 600 models of GPS tracking devices.

(image omitted)

To start using Traccar Server follow instructions below:

  • Download and install Traccar
  • Reboot system, Traccar will start automatically
  • Open web interface (http://localhost:8082)
  • Log in as administrator (user – admin, password – admin) or register a new user
  • Add new device with unique identifier (see section below)
  • Configure your device to use appropriate address and port (see section below)
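For a burner phone or script that reports positions, Traccar also accepts plain HTTP position reports (its "OsmAnd" protocol, listening on port 5055 by default). A hedged sketch of building such a report URL; the host and device id are placeholders:

```python
from urllib.parse import urlencode

def position_url(host, device_id, lat, lon, port=5055):
    """Build the URL a phone or script would request to log one GPS position."""
    params = urlencode({"id": device_id, "lat": lat, "lon": lon})
    return f"http://{host}:{port}/?{params}"

url = position_url("localhost", "123456", 38.89, -77.03)
# A real report would then be sent with urllib.request.urlopen(url).
```

The device id must match the unique identifier registered in the web interface in the step above.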

With nearly omnipresent government surveillance of citizens, citizens should return the favor by surveilling government officers.

Super Glue plus a burner phone enables GPS tracking of government vehicles.

For those with greater physical access, introducing a GPS device into vehicle wiring is also an option.

You may want to restrict access to Traccar as public access to GPS location data will alert targets to GPS tracking of their vehicles.

It’s a judgment call when the loss of future tracking data is offset by the value of accumulated tracking data for a specific purpose.

What if you tracked all county police car locations for a year and patterns emerge from that data? What forums are best for summarized (read aggregated) presentation of the data? When/where is it best to release the detailed data? How do you sign released data to verify future analysis is using the same data?

Hard questions but better hard questions than no tracking data for government agents at all. 😉

ESA Affirms Open Access Policy For Images, Videos And Data

Tuesday, February 21st, 2017

ESA Affirms Open Access Policy For Images, Videos And Data

From the post:

ESA today announced it has adopted an Open Access policy for its content such as still images, videos and selected sets of data.

For more than two decades, ESA has been sharing vast amounts of information, imagery and data with scientists, industry, media and the public at large via digital platforms such as the web and social media. ESA’s evolving information management policy increases these opportunities.

In particular, a new Open Access policy for ESA’s information and data will now facilitate broadest use and reuse of the material for the general public, media, the educational sector, partners and anybody else seeking to utilise and build upon it.

“This evolution in opening access to ESA’s images, information and knowledge is an important element of our goal to inform, innovate, interact and inspire in the Space 4.0 landscape,” said Jan Woerner, ESA Director General.

“It logically follows the free and open data policies we have already established and accounts for the increasing interest of the general public, giving more insight to the taxpayers in the member states who fund the Agency.”

A website pointing to sets of content already available under Open Access, a set of Frequently Asked Questions and further background information can be found at http://open.esa.int.

More information on the ESA Digital Agenda for Space is available at http://www.esa.int/digital.

A great trove of images and data for exploration and development of data skills.

Launched on 1 March 2002 on an Ariane-5 rocket from Europe’s spaceport in French Guiana, Envisat was the largest Earth observation spacecraft ever built. The eight-tonne satellite orbited Earth more than 50 000 times over 10 years – twice its planned lifetime. The mission delivered thousands of images and a wealth of data used to study the workings of the Earth system, including insights into factors contributing to climate change. The end of the mission was declared on 9 May 2012, but ten years of Envisat’s archived data continue to be exploited for studying our planet.

With immediate effect, all 476 public Envisat MERIS or ASAR or AATSR images are released under the Creative Commons CC BY-SA 3.0 IGO licence, hence the credit for all images is: ESA, CC BY-SA 3.0 IGO. Follow this link.

The 476 images mentioned in the news release are images prepared over the years for public release.

For additional Envisat data under the Open Access license, see: EO data distributed by ESA.

I registered for an ESA Earth Observation Single User account, quite easy as registration forms go.

I’ll wander about for a bit and report back on the resources I find.

Enjoy!

PS: Not only should you use and credit the ESA as a data source, laudatory comments about the Open Access license may encourage others to do the same.

#DisruptJ20 – 3 inch resolution aerial imagery Washington, DC @J20protests

Tuesday, January 17th, 2017

3 inch imagery resolution for Washington, DC by Jacques Tardie.

From the post:

We updated our basemap in Washington, DC with aerial imagery at 3 inch (7.5 cm) resolution. The source data is openly licensed by DC.gov, thanks to the District’s open data initiative.

If you aren’t familiar with Mapbox, there is no time like the present!

If you are interested in just the 3 inch resolution aerial imagery, see: http://opendata.dc.gov/datasets?keyword=imagery.

Enjoy!

Restricted U.S. Army Geospatial Intelligence Handbook

Friday, August 26th, 2016

Restricted U.S. Army Geospatial Intelligence Handbook

From the webpage:

This training circular provides GEOINT guidance for commanders, staffs, trainers, engineers, and military intelligence personnel at all echelons. It forms the foundation for GEOINT doctrine development. It also serves as a reference for personnel who are developing doctrine; tactics, techniques, and procedures; materiel and force structure; and institutional and unit training for intelligence operations.

1-1. Geospatial intelligence is the exploitation and analysis of imagery and geospatial information to describe, assess, and visually depict physical features and geographically referenced activities on the Earth. Geospatial intelligence consists of imagery, imagery intelligence, and geospatial information (10 USC 467).

Note. TC 2-22.7 further implements that GEOINT consists of any one or any combination of the following components: imagery, IMINT, or GI&S.

1-2. Imagery is the likeness or presentation of any natural or manmade feature or related object or activity, and the positional data acquired at the same time the likeness or representation was acquired, including: products produced by space-based national intelligence reconnaissance systems; and likenesses and presentations produced by satellites, aircraft platforms, unmanned aircraft vehicles, or other similar means (except that such term does not include handheld or clandestine photography taken by or on behalf of human intelligence collection organizations) (10 USC 467).

1-3. Imagery intelligence is the technical, geographic, and intelligence information derived through the interpretation or analysis of imagery and collateral materials (10 USC 467).

1-4. Geospatial information and services refers to information that identifies the geographic location and characteristics of natural or constructed features and boundaries on the Earth, including: statistical data and information derived from, among other things, remote sensing, mapping, and surveying technologies; and mapping, charting, geodetic data, and related products (10 USC 467).

(image omitted)

You may not have the large fixed-wing assets described in this handbook, the “value-added layers” are within your reach with open data.

(image omitted)

In localized environments, your value-added layers may be more current and useful than those produced on longer time scales.

Topic maps can support geospatial collations of information along side other views of the same data.

A great opportunity to understand how a modern military force understands and uses geospatial intelligence.

Not to mention testing your ability to recreate that geospatial intelligence without dedicated tools.

Spatial Module in OrientDB 2.2

Tuesday, August 23rd, 2016

Spatial Module in OrientDB 2.2

From the post:

In versions prior to 2.2, OrientDB had minimal support for storing and retrieving GeoSpatial data. The support was limited to a pair of coordinates (latitude, longitude) stored as double in an OrientDB class, with the possibility to create a spatial index against those 2 coordinates in order to speed up a geo spatial query. So the support was limited to Point.
In OrientDB v.2.2 we created a brand new Spatial Module with support for different types of Geometry objects stored as embedded objects in a user-defined class:

  • Point (OPoint)
  • Line (OLine)
  • Polygon (OPolygon)
  • MultiPoint (OMultiPoint)
  • MultiLine (OMultiLine)
  • MultiPolygon (OMultiPolygon)
  • Geometry Collections

Along with those data types, the module extends OrientDB SQL with a subset of SQL-MM functions in order to support spatial data. The module only supports EPSG:4326 as Spatial Reference System. This blog post is an introduction to the OrientDB spatial Module, with some examples of its new capabilities. You can find the installation guide here.

Let’s start by loading some data into OrientDB. The dataset is about points of interest in Italy, taken from here. Since the format is Shapefile, we used QGIS to export the dataset in CSV format (geometry in WKT), then imported the CSV into OrientDB with the ETL, into the class Points, with the geometry field typed as OPoint.
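The CSV-with-WKT preparation step described above can be sketched in a few lines. The point data here is invented (the post's real dataset comes from a shapefile), but the WKT formatting for the ETL import would look like:

```python
import csv
import io

def to_wkt_point(lon, lat):
    """WKT puts longitude before latitude, matching the EPSG:4326 data the module supports."""
    return f"POINT ({lon} {lat})"

# Hypothetical points of interest standing in for the exported shapefile rows.
rows = [("Colosseum", 12.4922, 41.8902), ("Duomo di Milano", 9.1916, 45.4642)]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["name", "geometry"])          # header the ETL mapping refers to
for name, lon, lat in rows:
    writer.writerow([name, to_wkt_point(lon, lat)])
csv_text = buf.getvalue()
```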

The enhanced spatial functions for OrientDB 2.2 reminded me of this passage in “Silences and Secrecy: The Hidden Agenda of Cartography in Early Modern Europe:”

Some of the most clear-cut cases of an increasing state concern with control and restriction of map knowledge are associated with military or strategic considerations. In Europe in the sixteenth and seventeenth centuries hardly a year passed without some war being fought. Maps were an object of military intelligence; statesmen and princes collected maps to plan, or, later, to commemorate battles; military textbooks advocated the use of maps. Strategic reasons for keeping map knowledge a secret included the need for confidentiality about the offensive and defensive operations of state armies, the wish to disguise the thrust of external colonization, and the need to stifle opposition within domestic populations when developing administrative and judicial systems as well as the more obvious need to conceal detailed knowledge about fortifications. (reprinted in The New Nature of Maps: Essays in the History of Cartography, by J.B. Harley, edited by Paul Laxton, Johns Hopkins University Press, 2001, page 89)

I say “reminded me,” but better to say it increased my puzzlement over the widespread access to geographic data that once upon a time had military value.

Is it the case that “ordinary maps,” maps of streets, restaurants, hotels, etc., aren’t normally imbued (merged?) with enough other information to make them “dangerous?”

If that’s true, the lack of commonly available “dangerous maps” is a disadvantage to emergency and security planners.

You can’t plan for the unknown.

Or to paraphrase Dilbert: “Ignorance is not a reliable planning guide.”

How would you cure the ignorance of “ordinary” maps?

PS: While hunting for the quote, I ran across The Power of Maps by Denis Wood, with John Fels. It has been updated: Rethinking the Power of Maps by Denis Wood, with John Fels and John Krygier. I am now re-reading the first edition while awaiting the updated version’s arrival.

Neither book is a guide to making “dangerous” maps but may awaken in you a sense of the power of maps and map making.

Searching for Geolocated Posts On YouTube

Sunday, January 3rd, 2016

Searching for Geolocated Posts On YouTube (video) by First Draft News.

Easily the most information-filled 1 minute and 18 seconds of the holiday season!

Illustrates searching for geolocated posts on YouTube, despite YouTube not offering that option!

New tool in development may help!

Visit: http://youtube.github.io/geo-search-tool/search.html
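The tool appears to lean on the YouTube Data API v3 search endpoint, which accepts `location` and `locationRadius` parameters. A hedged sketch of building such a request; the API key is a placeholder you must supply:

```python
from urllib.parse import urlencode

def geo_search_url(lat, lon, radius_km, query, api_key):
    """Build a YouTube Data API v3 search request restricted to a circle.

    The location/locationRadius parameters are what geographic search relies on;
    they only apply when the search is restricted to videos.
    """
    params = urlencode({
        "part": "snippet",
        "type": "video",
        "q": query,
        "location": f"{lat},{lon}",
        "locationRadius": f"{radius_km}km",
        "key": api_key,  # placeholder -- supply your own key
    })
    return "https://www.googleapis.com/youtube/v3/search?" + params
```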

Both the video and site are worth a visit!

Don’t forget to check out First Draft News as well!

Planet Platform Beta & Open California:…

Friday, October 16th, 2015

Planet Platform Beta & Open California: Our Data, Your Creativity by Will Marshall.

From the post:

At Planet Labs, we believe that broad coverage frequent imagery of the Earth can be a significant tool to address some of the world’s challenges. But this can only happen if we democratise access to it. Put another way, we have to make data easy to access, use, and buy. That’s why I recently announced at the United Nations that Planet Labs will provide imagery in support of projects to advance the Sustainable Development Goals.

Today I am proud to announce that we’re releasing a beta version of the Planet Platform, along with our imagery of the state of California under an open license.

The Planet Platform Beta will enable a pioneering cohort of developers, image analysts, researchers, and humanitarian organizations to get access to our data, web-based tools and APIs. The goal is to provide a “sandbox” for people to start developing and testing their apps on a stack of openly available imagery, with the goal of jump-starting a developer community; and collecting data feedback on Planet’s data, tools, and platform.

Our Open California release includes two years of archival imagery of the whole state of California from our RapidEye satellites and 2 months of data from the Dove satellite archive; and will include new data collected from both constellations on an ongoing basis, with a two-week delay. The data will be under an open license, specifically CC BY-SA 4.0. The spirit of the license is to encourage R&D and experimentation in an “open data” context. Practically, this means you can do anything you want, but you must “open” your work, just as we are opening ours. It will enable the community to discuss their experiments and applications openly, and thus, we hope, establish the early foundation of a new geospatial ecosystem.

California is our first Open Region, but shall not be the last. We will open more of our data in the future. This initial release will inform how we deliver our data set to a global community of customers.

Resolution is 3–5 meters for the Dove satellites and 5 meters for the RapidEye satellites.

Not quite goldfish bowl or Venice Beach resolution but useful for other purposes.

Now would be a good time to become familiar with managing and annotating satellite imagery. Higher resolutions, public and private are only a matter of time.

Visualising Geophylogenies in Web Maps Using GeoJSON

Monday, July 13th, 2015

Visualising Geophylogenies in Web Maps Using GeoJSON by Roderic Page.

Abstract:

This article describes a simple tool to display geophylogenies on web maps including Google Maps and OpenStreetMap. The tool reads a NEXUS format file that includes geographic information, and outputs a GeoJSON format file that can be displayed in a web map application.

From the introduction (with footnotes omitted):

The increasing number of georeferenced sequences in GenBank [ftnt omitted] and the growth of DNA barcoding [ftnt omitted] means that the raw material to create geophylogenies [ftnt omitted] is readily available. However, constructing visualisations of phylogenies and geography together can be tedious. Several early efforts at visualising geophylogenies focussed on using existing GIS software [ftnt omitted], or tools such as Google Earth [ftnt omitted]. While the 3D visualisations enabled by Google Earth are engaging, it’s not clear that they are easy to interpret. Another tool, GenGIS [ftnt omitted], supports 2D visualisations where the phylogeny is drawn flat on the map, avoiding some of the problems of Google Earth visualisations. However, like Google Earth, GenGIS requires the user to download and install additional software on their computer.

By comparison, web maps such as Google Maps [ftnt omitted] are becoming ubiquitous and work in most modern web browsers. They support displaying user-supplied data, including geometrical information encoded in formats such as GeoJSON, making them a light weight alternative to 3D geophylogeny viewers. This paper describes a tool that makes use of the GeoJSON format and the capabilities of web maps to create quick and simple visualisations of geophylogenies.
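As a sketch of the kind of GeoJSON output such a tool emits (this is not Page's actual code, and the taxa below are invented), a FeatureCollection of georeferenced tips can be built directly:

```python
import json

# Hypothetical georeferenced taxa; the real tool derives these from a NEXUS file.
taxa = [
    {"name": "Taxon A", "lat": -33.9, "lon": 151.2},
    {"name": "Taxon B", "lat": -37.8, "lon": 144.9},
]

features = [
    {
        "type": "Feature",
        # GeoJSON coordinate order is [longitude, latitude]
        "geometry": {"type": "Point", "coordinates": [t["lon"], t["lat"]]},
        "properties": {"name": t["name"]},
    }
    for t in taxa
]
collection = {"type": "FeatureCollection", "features": features}
geojson_text = json.dumps(collection)
```

The resulting file can be dropped onto Google Maps or OpenStreetMap-based viewers as a data layer.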

Whether you are interested in geophylogenies or in the use of GeoJSON, this is a post for you.

Enjoy!

Animation of Gerrymandering?

Friday, April 24th, 2015

United States Congressional District Shapefiles by Jeffrey B. Lewis, Brandon DeVine, and Lincoln Pritcher with Kenneth C. Martis.

From the description:

This site provides digital boundary definitions for every U.S. Congressional District in use between 1789 and 2012. These were produced as part of NSF grant SBE-SES-0241647 between 2009 and 2013.

The current release of these data is experimental. We have done a good deal of work to validate all of the shapes. However, it is quite likely that some irregularities remain. Please email jblewis@ucla.edu with questions or suggestions for improvement. We hope to have a ticketing system for bugs and a versioning system up soon. The district definitions currently available should be considered an initial-release version.

Many districts were formed by aggregating complete county shapes obtained from the National Historical Geographic Information System (NHGIS) project and the Newberry Library’s Atlas of Historical County Boundaries. Where Congressional district boundaries did not coincide with county boundaries, district shapes were constructed district-by-district using a wide variety of legal and cartographic resources. Detailed descriptions of how particular districts were constructed and the authorities upon which we relied are available (at the moment) by request and described below.

Every state districting plan can be viewed quickly at https://github.com/JeffreyBLewis/congressional-district-boundaries (clicking on any of the listed file names will create a map window that can be panned and zoomed). GeoJSON definitions of the districts can also be downloaded from the same URL. Congress-by-Congress district maps in ESRI Shapefile format can be downloaded below. Though providing somewhat lower resolution than the shapefiles, the GeoJSON files contain additional information about the members who served in each district that the shapefiles do not (Congress member information may be useful for creating web applications with, for example, Google Maps or Leaflet).
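Once you have a district's GeoJSON geometry in hand, centering a pannable map window starts with its bounding box. A minimal sketch, assuming standard GeoJSON Polygon/MultiPolygon coordinate nesting (the district shown is invented):

```python
def bounding_box(geometry):
    """Return (min_lon, min_lat, max_lon, max_lat) for a Polygon or MultiPolygon."""
    def points(coords):
        # Descend nested coordinate arrays until we reach [lon, lat] pairs.
        if isinstance(coords[0], (int, float)):
            yield coords
        else:
            for c in coords:
                yield from points(c)
    lons, lats = zip(*points(geometry["coordinates"]))
    return min(lons), min(lats), max(lons), max(lats)

# Hypothetical district geometry for illustration.
district = {
    "type": "Polygon",
    "coordinates": [[[-90.3, 38.5], [-90.1, 38.5], [-90.1, 38.8], [-90.3, 38.5]]],
}
```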

Project Team

The Principal Investigator on the project was Jeffrey B. Lewis. Brandon DeVine and Lincoln Pitcher researched district definitions and produced thousands of digital district boundaries. The project relied heavily on Kenneth C. Martis’ The Historical Atlas of United States Congressional Districts: 1789-1983. (New York: The Free Press, 1982). Martis also provided guidance, advice, and source materials used in the project.

How to cite

Jeffrey B. Lewis, Brandon DeVine, Lincoln Pitcher, and Kenneth C. Martis. (2013) Digital Boundary Definitions of United States Congressional Districts, 1789-2012. [Data file and code book]. Retrieved from http://cdmaps.polisci.ucla.edu on [date of download].

An impressive resource for anyone interested in the history of United States Congressional Districts and their development. An animation of gerrymandering of congressional districts was the first use case that jumped to mind. 😉

Enjoy!

I first saw this in a tweet by Larry Mullen.

Imagery Processing Pipeline Launches!

Tuesday, April 21st, 2015

Imagery Processing Pipeline Launches!

From the post:

Our imagery processing pipeline is live! You can search the Landsat 8 imagery catalog, filter by date and cloud coverage, then select any image. The image is instantly processed, assembling bands and correcting colors, and loaded into our API. Within minutes you will have an email with a link to the API end point that can be loaded into any web or mobile application.

Our goal is to make it fast for anyone to find imagery for a news story after a disaster, easy for any planner to get the most recent view of their city, and for any developer to pull in thousands of square kilometers of processed imagery for their precision agriculture app. All directly using our API.

There are two ways to get started: via the imagery browser fetch.astrodigital.com, or directly via the Search and Publish APIs. All API documentation is on astrodigital.com/api. You can either use the API to programmatically pull imagery through the pipeline or build your own UI on top of the API, just like we did.

The API provides direct access to more than 300TB of satellite imagery from Landsat 8. Early next year we’ll make our own imagery available once our own Landmapper constellation is fully commissioned.

Hit us up @astrodigitalgeo or sign up at astrodigital.com to follow as we build. Huge thanks to our partners at Development Seed, who are leading our development, and for the infinitely scalable API from Mapbox.

If you are interested in Earth images, you really need to check this out!

I haven’t tried the API but did get a link to an image of my city and surrounding area.

Definitely worth a long look!
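The search step of such a pipeline (filter by date, then by cloud cover) is easy to sketch in plain Python. The scene records and field names below are illustrative only, not the actual Astro Digital API schema:

```python
from datetime import date

# Illustrative scene records; field names are assumptions, not the real API schema.
catalog = [
    {"scene_id": "LC80010012015001LGN00", "acquired": date(2015, 1, 1), "cloud_cover": 12.5},
    {"scene_id": "LC80010012015017LGN00", "acquired": date(2015, 1, 17), "cloud_cover": 64.0},
    {"scene_id": "LC80010012015033LGN00", "acquired": date(2015, 2, 2), "cloud_cover": 3.1},
]

def search_scenes(catalog, start, end, max_cloud):
    """Return scenes acquired within [start, end] with cloud cover at or below max_cloud."""
    return [s for s in catalog
            if start <= s["acquired"] <= end and s["cloud_cover"] <= max_cloud]

hits = search_scenes(catalog, date(2015, 1, 1), date(2015, 2, 28), max_cloud=20.0)
```

The cloudy January 17th scene drops out, leaving two usable acquisitions.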

Geojournalism.org

Saturday, February 7th, 2015

Geojournalism.org

From the webpage:

Geojournalism.org provides online resources and training for journalists, designers and developers to dive into the world of data visualization using geographic data.

From the about page:

Geojournalism.org is made for:

Journalists

Reporters, editors and other professionals involved in the noble mission of producing relevant news for their audiences can use Geojournalism.org to produce multimedia stories, or simple maps and data visualizations, to help create context for complex environmental issues

Developers

Programmers and geeks using a wide variety of languages and tools can draw on the vast knowledge of our contributors. Some of our tutorials explore open source libraries for making maps and infographics, or simply for dealing with large geographical datasets

Designers

Graphic designers and data visualization experts will find a large collection of resources and tips on the Geojournalism.org platform. They can, for example, learn the right options for coloring maps, or how to set up simple charts to depict issues such as deforestation and climate change

It is one thing to have an idea or even a story and quite another to communicate it effectively to a large audience. Geojournalism is designed as a community site that will help you communicate geophysical data to a non-technical audience.

I think it is clear that most governments are shy about accurate and timely communication with their citizens. Are you going to be one of those who fills in the gaps? Geojournalism.org is definitely a site you will be needing.

Geospatial Data in Python

Thursday, November 20th, 2014

Geospatial Data in Python by Carson Farmer.

Materials for the tutorial: Geospatial Data in Python: Database, Desktop, and the Web by Carson Farmer (Associate Director of CARSI lab).

Important skills if you are concerned about projects such as the Keystone XL Pipeline:

keystone pipeline route

This is an instance where having the skills to combine geospatial, archaeological, and other data together will empower local communities to minimize the damage they will suffer from this project.

Having a background in processing geophysical data is the first step in that process.
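Overlaying a route on community boundaries ultimately comes down to point-in-polygon tests, one of the first skills such a tutorial teaches. A minimal ray-casting sketch, with made-up coordinates:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count crossings of a ray extending right from (x, y).
    An odd number of crossings means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

Real geospatial work would use shapely or PostGIS for this, but the underlying test is exactly this simple.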

Getty Thesaurus of Geographic Names (TGN)

Friday, August 22nd, 2014

Getty Thesaurus of Geographic Names Released as Linked Open Data by James Cuno.

From the post:

We’re delighted to announce that the Getty Research Institute has released the Getty Thesaurus of Geographic Names (TGN)® as Linked Open Data. This represents an important step in the Getty’s ongoing work to make our knowledge resources freely available to all.

Following the release of the Art & Architecture Thesaurus (AAT)® in February, TGN is now the second of the four Getty vocabularies to be made entirely free to download, share, and modify. Both data sets are available for download at vocab.getty.edu under an Open Data Commons Attribution License (ODC BY 1.0).

What Is TGN?

The Getty Thesaurus of Geographic Names is a resource of over 2,000,000 names of current and historical places, including cities, archaeological sites, nations, and physical features. It focuses mainly on places relevant to art, architecture, archaeology, art conservation, and related fields.

TGN is powerful for humanities research because of its linkages to the three other Getty vocabularies—the Union List of Artist Names, the Art & Architecture Thesaurus, and the Cultural Objects Name Authority. Together the vocabularies provide a suite of research resources covering a vast range of places, makers, objects, and artistic concepts. The work of three decades, the Getty vocabularies are living resources that continue to grow and improve.

Because they serve as standard references for cataloguing, the Getty vocabularies are also the conduits through which data published by museums, archives, libraries, and other cultural institutions can find and connect to each other.

A resource where you could lose some serious time!

Try this entry for London.

Or Paris.

Bear in mind the data that underlies this rich display is now available for free downloading.
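Beyond bulk download, the Getty publishes a SPARQL endpoint at vocab.getty.edu. A sketch of building a label-search query against it follows; the endpoint URL is real, but the query is a simplification that assumes the endpoint pre-declares the `skos` prefix (check the Getty LOD documentation before relying on it):

```python
from urllib.parse import urlencode

# Real endpoint; the query predicates below are a simplification of TGN's data model.
ENDPOINT = "http://vocab.getty.edu/sparql"

def tgn_name_query(name, limit=10):
    """Build a request URL for TGN subjects whose preferred label contains `name`."""
    query = f"""
    SELECT ?place ?label WHERE {{
      ?place skos:prefLabel ?label .
      FILTER(CONTAINS(LCASE(STR(?label)), "{name.lower()}"))
    }} LIMIT {limit}
    """
    return ENDPOINT + "?" + urlencode({"query": query, "format": "json"})

url = tgn_name_query("london")
```

Fetching `url` with any HTTP client returns SPARQL JSON results you can join against your own data.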

Digital Mapping + Geospatial Humanities

Monday, June 16th, 2014

Digital Mapping + Geospatial Humanities by Fred Gibbs.

From the course description:

We are in the midst of a major paradigm shift in human consciousness and society caused by our ubiquitous connectedness via the internet and smartphones. These globalizing forces have telescoped space and time to an unprecedented degree, while paradoxically heightening the importance of local places.

The course explores the technologies, tools, and workflows that can help collect, connect, and present online interpretations of the spaces around us. Throughout the week, we’ll discuss the theoretical and practical challenges of deep mapping (producing rich, interactive maps with multiple layers of information). Woven into our discussions will be numerous technical tutorials that will allow us to tell map-based stories about Albuquerque’s fascinating past.


This course combines cartography, geography, GIS, history, sociology, ethnography, computer science, and graphic design. While we cover some of the basics of each of these, the course eschews developing deep expertise in any of these in favor of exploring their intersections with each other, and formulating critical questions that span these normally disconnected disciplines. By the end, you should be able to think more critically about maps, place, and our online experiences with them.


We’ll move from creating simple maps with Google Maps/Earth to creating your own custom, interactive online maps with various open source tools like QGIS, Open Street Map, and D3 that leverage the power of open data from local and national repositories to provide new perspectives on the built environment. We’ll also use various mobile apps for data collection, online exhibit software, and (physical and digital) historical archives at the Center for Southwest Research. Along the way we’ll cover the various data formats (KML, XML, GeoJSON, TopoJSON) used by different tools and how to move between them, allowing you to craft the most efficient workflow for your mapping purposes.

Course readings that aren’t freely available online (and even some that are) can be accessed via the course Zotero Library. You’ll need to be invited to join the group since we use it to distribute course readings. If you are not familiar with Zotero, here are some instructions.

All of that in a week! This week as a matter of fact.

One of the things I miss about academia are the occasions when you can concentrate on one subject to the exclusion of all else. Of course, being unmarried at that age, unemployed, etc. may have contributed to the ability to focus. 😉

Just sampled some of the readings and this appears to be a really rocking course!
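Moving between the formats the course mentions often needs nothing heavyweight. A stdlib-only sketch converting a GeoJSON Point Feature into a KML Placemark (the feature itself is invented):

```python
import json

def geojson_point_to_kml(feature):
    """Convert a GeoJSON Point Feature to a KML Placemark string.
    Both GeoJSON and KML order coordinates lon,lat, so no swap is needed."""
    lon, lat = feature["geometry"]["coordinates"]
    name = feature.get("properties", {}).get("name", "")
    return (f"<Placemark><name>{name}</name>"
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>")

feature = json.loads('''{
  "type": "Feature",
  "geometry": {"type": "Point", "coordinates": [-106.65, 35.08]},
  "properties": {"name": "Albuquerque"}
}''')
kml = geojson_point_to_kml(feature)
```

For anything beyond points, a library like ogr2ogr or togeojson is the better path, but knowing the formats are this close demystifies the conversion step.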

Twitter User Targeting Data

Sunday, May 11th, 2014

Geotagging One Hundred Million Twitter Accounts with Total Variation Minimization by Ryan Compton, David Jurgens, and David Allen.

Abstract:

Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data.

Our method infers an unknown user’s location by examining their friend’s locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user’s ego network can be used as a per-user accuracy measure, allowing us to discard poor location inferences and control the overall error of our approach.

Leave-many-out evaluation shows that our method is able to infer location for 101,846,236 Twitter users at a median error of 6.33 km, allowing us to geotag roughly 89% of public tweets.

If 6.33 km sounds like a lot of error, check out NUKEMAP by Alex Wellerstein.
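The paper's actual method is a distributed total-variation optimization over the whole social graph. As a drastically simplified sketch of the intuition only: an L1-robust estimate such as the coordinate-wise median of friends' locations shrugs off a few far-away friends, which is the same robustness the total-variation objective buys at scale (locations below are invented):

```python
from statistics import median

def infer_location(friend_locations):
    """Crude L1-style estimate: coordinate-wise median of friends' (lat, lon).
    The median ignores a handful of distant outliers, the intuition behind
    the paper's total-variation objective (this is NOT the paper's algorithm)."""
    lats = [lat for lat, lon in friend_locations]
    lons = [lon for lat, lon in friend_locations]
    return (median(lats), median(lons))

# Three friends near London, one outlier in New York.
friends = [(51.5, -0.1), (51.6, -0.2), (51.4, -0.1), (40.7, -74.0)]
```

The estimate lands near London despite the New York friend; a mean would have been dragged across the Atlantic.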

GeoCanvas

Saturday, April 5th, 2014

Synthicity Releases 3D Spatial Data Visualization Tool, GeoCanvas by Dean Meyers.

From the post:

Synthicity has released a free public beta version of GeoCanvas, its 3D spatial data visualization tool. The software provides a streamlined toolset for exploring geographic data, lowering the barrier to learning and using geographic information systems.

GeoCanvas is not limited to visualizing parcels in cities. By supporting data formats such as the widely available shapefile for spatial geometry and text files for attribute data, it opens the possibility of rapid 3D spatial data visualization for a wide range of uses and users. The software is expected to be a great addition to the toolkits of students, researchers, and practitioners in fields as diverse as data science, geography, planning, real estate analysis, and market research. A set of video tutorials explaining the basic concepts and a range of examples have been made available to showcase the possibilities.

The public beta version of GeoCanvas is available as a free download from www.synthicity.com.

Well, rats! I haven’t installed a VM with Windows 7/8 or Mac OS X 10.8 or later.

Sounds great!

Comments from actual experience?

…Open GIS Mapping Data To The Public

Wednesday, February 12th, 2014

Esri Allows Federal Agencies To Open GIS Mapping Data To The Public by Alexander Howard.

From the post:

A debate in the technology world that’s been simmering for years, about whether mapping vendor Esri will allow public geographic information systems (GIS) to access government customers’ data, finally has an answer: The mapping software giant will take an unprecedented step, enabling thousands of government customers around the U.S. to make their data on the ArcGIS platform open to the public with a click of a mouse.

“Everyone starting to deploy ArcGIS can now deploy an open data site,” Andrew Turner, chief technology officer of Esri’s Research and Development Center in D.C., said in an interview. “We’re in a unique position here. Users can just turn it on the day it becomes public.”

Government agencies can use the new feature to turn geospatial information systems data in Esri’s format into migratable, discoverable, and accessible open formats, including CSVs, KML and GeoJSON. Esri will demonstrate the new feature in ArcGIS at the Federal Users Conference in Washington, D.C. According to Turner, the new feature will go live in March 2014.

I’m not convinced that GIS data alone is going to make government more transparent but it is a giant step in the right direction.

To have even partial transparency in government, you would not only need GIS data but also have it correlated with property sales and purchases going back decades, along with the legal ownership of property traced past shell corporations and holding companies, to say nothing of the social, political and professional relationships of those who benefited from various decisions. For a start.

Still, the public may be a better starting place to demand transparency with this type of data.

Neo4j Spatial Part 2

Tuesday, February 11th, 2014

Neo4j Spatial Part 2 by Max De Marzi.

Max finishes up part 1 with sample spatial data for restaurants and deploys his proof of concept using GrapheneDB on Heroku.

Restaurants are typical cellphone app fare but if I were in Kiev, I’d want an app with geo-locations of ingredients for a proper Molotov cocktail.

A jar filled with gasoline and a burning rag is nearly as dangerous to the thrower as to the target.

Of course, substitutions for ingredients, in what quantities, in different languages, could be added features of such an app.

Data management is a weapon within the reach of all sides.

Geospatial (distance) faceting…

Tuesday, January 21st, 2014

Geospatial (distance) faceting using Lucene’s dynamic range facets by Mike McCandless.

From the post:

There have been several recent, quiet improvements to Lucene that, taken together, have made it surprisingly simple to add geospatial distance faceting to any Lucene search application, for example:

  < 1 km (147)
  < 2 km (579)
  < 5 km (2775)

Such distance facets, which allow the user to quickly filter their search results to those that are close to their location, have become especially important lately since most searches are now from mobile smartphones.

In the past, this has been challenging to implement because it’s so dynamic and so costly: the facet counts depend on each user’s location, and so cannot be cached and shared across users, and the underlying math for spatial distance is complex.

But several recent Lucene improvements now make this surprisingly simple!

As always, Mike is right on the edge so wait for Lucene 4.7 to try his code out or download the current source.

Distance might not be the only consideration. What if you wanted the shortest distance that did not intersect a known patrol? Or a known patrol within some window of variation?

Distance is still going to be a factor, but the search required may be more complex than distance alone.
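The counting behind such facets is easy to sketch outside Lucene: compute each hit's distance from the user with the haversine formula and tally it into every cumulative bucket it falls under (the user location and hits below are invented):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def distance_facets(user, hits, cutoffs_km=(1, 2, 5)):
    """Count hits under each cutoff; buckets are cumulative, like '< 1 km', '< 2 km'."""
    counts = {c: 0 for c in cutoffs_km}
    for lat, lon in hits:
        d = haversine_km(user[0], user[1], lat, lon)
        for c in cutoffs_km:
            if d < c:
                counts[c] += 1
    return counts

user = (52.37, 4.89)  # Amsterdam
hits = [(52.371, 4.891), (52.38, 4.90), (52.39, 4.93)]
```

The dynamic part Mike describes is that these counts must be recomputed per user per query, which is why Lucene's dynamic range facets matter here.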

Mapping the open web using GeoJSON

Sunday, December 8th, 2013

Mapping the open web using GeoJSON by Sean Gillies.

From the post:

GeoJSON is an open format for encoding information about geographic features using JSON. It has much in common with older GIS formats, but also a few new twists: GeoJSON is a text format, has a flexible schema, and is specified in a single HTML page. The specification is informed by standards such as OGC Simple Features and Web Feature Service and streamlines them to suit the way web developers actually build software today.

Promoted by GitHub and used in the Twitter API, GeoJSON has become a big deal in the open web. We are huge fans of the little format that could. GeoJSON suits the web and suits us very well; it plays a major part in our libraries, services, and products.

A short but useful review of why GeoJSON is important to MapBox and why it should be important to you.

A must read if you are interested in geo-locating data of interest to your users to maps.

Sean mentions that Github promotes GeoJSON but I’m curious if the NSA uses/promotes it as well? 😉
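One reason the little format could: a spec-conformant Feature fits in a few lines of stdlib Python, with nothing GIS-specific required (the coordinates and properties here are invented):

```python
import json

def make_feature(lon, lat, **properties):
    """Build a GeoJSON Point Feature. Note GeoJSON's lon-lat coordinate order."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": properties,
    }

collection = {
    "type": "FeatureCollection",
    "features": [make_feature(-77.03, 38.89, name="Washington, D.C.")],
}
encoded = json.dumps(collection)
```

That `encoded` string can be dropped into a GitHub gist, a Mapbox layer, or any web map as-is, which is exactly the interoperability Sean is praising.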

Geocode the world…

Thursday, October 10th, 2013

Geocode the world with the new Data Science Toolkit by Pete Warden.

From the post:

I’ve published a new version of the Data Science Toolkit, which includes David Blackman’s awesome TwoFishes city-level geocoder. Largely based on data from the Geonames project, the biggest improvement is that the Google-style geocoder now handles millions of places around the world in hundreds of languages:

Who or what do you want to locate? 😉
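The core of any gazetteer-backed geocoder is a name-to-place index that handles alternate names across languages. A toy version (the entries are illustrative; TwoFishes' real index and ranking are far richer):

```python
# Toy gazetteer: each place carries alternate names in several languages.
# Entries are illustrative only; nothing here reflects TwoFishes' actual data.
GAZETTEER = [
    {"name": "Moscow", "alt": ["Москва", "Moskau", "Moscou"], "latlon": (55.75, 37.62)},
    {"name": "Vienna", "alt": ["Wien", "Vienne", "Вена"], "latlon": (48.21, 16.37)},
]

def geocode(query):
    """Return (lat, lon) for the first place whose primary or alternate name matches."""
    q = query.strip().casefold()
    for place in GAZETTEER:
        names = [place["name"]] + place["alt"]
        if any(q == n.casefold() for n in names):
            return place["latlon"]
    return None
```

Scaling that lookup to millions of places in hundreds of languages, with sensible ranking of ambiguous names, is the hard part the Data Science Toolkit packages up for you.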

ST_Geometry Aggregate Functions for Hive…

Friday, August 16th, 2013

ST_Geometry Aggregate Functions for Hive in Spatial Framework for Hadoop by Jonathan Murphy.

From the post:

We are pleased to announce that the ST_Geometry aggregate functions are now available for Hive, in the Spatial Framework for Hadoop. The aggregate functions can be used to perform a convex-hull, intersection, or union operation on geometries from multiple records of a dataset.

While the non-aggregate ST_ConvexHull function returns the convex hull of the geometries passed in a single function call, the ST_Aggr_ConvexHull function accumulates the geometries from the rows selected by a query and performs a convex hull operation over those geometries. Likewise, ST_Aggr_Intersection and ST_Aggr_Union aggregate the geometries from multiple selected rows, to perform intersection and union operations, respectively.

The example given covers earthquake data and California-county data.

I have a weakness for aggregating functions as you know. 😉

The other point these aggregate functions illustrate is that sometimes you want subjects to be treated as independent of each other and sometimes you want to treat them as a group.

Depends upon your requirements.

There really isn’t a one size fits all granularity of subject identity for all situations.
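The aggregate-over-rows idea is easy to see outside Hive: accumulate points from many "rows" and compute one hull over the lot. A sketch using Andrew's monotone chain algorithm (the rows are invented; this is not the Esri implementation):

```python
def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Geometries from three "rows", aggregated into one hull, as ST_Aggr_ConvexHull does.
rows = [[(0, 0), (1, 2)], [(4, 0), (2, 1)], [(4, 4), (0, 4)]]
hull = convex_hull([p for row in rows for p in row])
```

The non-aggregate case would call `convex_hull` on each row's points separately; the aggregate case flattens all rows first, which is precisely the independent-vs-group distinction above.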

Server-side clustering of geo-points…

Sunday, August 4th, 2013

Server-side clustering of geo-points on a map using Elasticsearch by Gianluca Ortelli.

From the post:

Plotting markers on a map is easy using the tooling that is readily available. However, what if you want to add a large number of markers to a map when building a search interface? The problem is that things start to clutter and it’s hard to view the results. The solution is to group results together into one marker. You can do that on the client using client-side scripting, but as the number of results grows, this might not be the best option from a performance perspective.

This blog post describes how to do server-side clustering of those markers, combining them into one marker (preferably with a counter indicating the number of grouped results). It provides a solution to the “too many markers” problem with an Elasticsearch facet.

The Problem

The image below renders quite well the problem we were facing in a project:

clustering

The mass of markers is so dense that it replicates the shape of the Netherlands! These items represent monuments and other things of general interest in the Netherlands; for an application we developed for a customer we need to manage about 200,000 of them and they are especially concentrated in the cities, as you can see in this case in Amsterdam. The “draw everything” strategy doesn’t help much here.

Server-side clustering of geo-points will be useful for representing dense geo-points.

Such as an Interactive Surveillance Map.

Or if you were building a map of police and security force sightings over multiple days to build up a pattern database.
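The essence of the server-side approach is snapping points to grid cells and returning one marker per cell with a count. A minimal grid-bucketing sketch (not Elasticsearch's facet implementation; coordinates invented):

```python
from collections import Counter

def cluster_markers(points, cell_deg=0.1):
    """Snap each (lat, lon) to a grid cell and count members, so the client
    draws one marker per cell with a count instead of every single point."""
    cells = Counter(
        (round(lat // cell_deg * cell_deg, 6), round(lon // cell_deg * cell_deg, 6))
        for lat, lon in points
    )
    # One marker per occupied cell: the cell center plus the clustered count.
    return [
        {"lat": cell_lat + cell_deg / 2, "lon": cell_lon + cell_deg / 2, "count": n}
        for (cell_lat, cell_lon), n in cells.items()
    ]

# Two points share an Amsterdam cell; the others fall in separate cells.
points = [(52.37, 4.89), (52.38, 4.88), (52.31, 4.95), (51.92, 4.47)]
markers = cluster_markers(points)
```

Doing this tally server-side means only a handful of markers cross the wire, no matter how many of the 200,000 monuments match the query.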

Visualizing Web Scale Geographic Data…

Wednesday, July 10th, 2013

Visualizing Web Scale Geographic Data in the Browser in Real Time: A Meta Tutorial by Sean Murphy.

From the post:

Visualizing geographic data is a task many of us face in our jobs as data scientists. Often, we must visualize vast amounts of data (tens of thousands to millions of data points) and we need to do so in the browser in real time to ensure the widest-possible audience for our efforts and we often want to do this leveraging free and/or open software.

Luckily for us, Google offered a series of fascinating talks at this year’s (2013) IO that show one particular way of solving this problem. Even better, Google discusses all aspects of this problem: from cleaning the data at scale using legacy C++ code to providing low latency yet web-scale data storage and, finally, to rendering efficiently in the browser. Not surprisingly, Google’s approach heavily leverages a lot of Google’s technology stack, but we won’t hold that against them.

(…)

Sean sets the background for two presentations:

All the Ships in the World: Visualizing Data with Google Cloud and Maps (36 minutes)

and,

Google Maps + HTML5 + Spatial Data Visualization: A Love Story (60 minutes) (source code: https://github.com/brendankenny)

Both are well worth your time.

JQVMAP

Saturday, June 8th, 2013

JQVMAP

From the webpage:

JQVMap is a jQuery plugin that renders Vector Maps. It uses resizable Scalable Vector Graphics (SVG) for modern browsers like Firefox, Safari, Chrome, Opera and Internet Explorer 9. Legacy support for older versions of Internet Explorer 6-8 is provided via VML.

Whatever your source of data (cellphone location data, user observations, etc.), rendering it to a geographic display may be useful.