Archive for the ‘GPS’ Category

Another Betrayal By Cellphone – Personal Identity

Sunday, June 26th, 2016

Normal operation of the cell phone in your pocket betrays your physical location. Your location is calculated by a process commonly called cell phone tower triangulation (strictly speaking, multilateration from signal timing or strength measured at several towers). In addition to giving away your location, research shows your cell phone can betray your personal identity as well.
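For readers who haven't seen the mechanics, here is a minimal sketch of the geometry involved: given range estimates from a handset to three towers, the position falls out of a small least-squares problem. The tower coordinates and distances below are hypothetical.

```python
# A minimal sketch of trilateration, the idea behind "cell tower
# triangulation": estimate a handset's position from its distances to
# three towers. Tower coordinates and distances are hypothetical.
import numpy as np

def trilaterate(towers, distances):
    """Solve for (x, y) given tower positions and range estimates.

    Subtracting the circle equation of the first tower from the others
    yields a linear system A @ p = b, solved here by least squares.
    """
    (x1, y1), d1 = towers[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(towers[1:], distances[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

# Three towers (km) and noisy range estimates to a phone near (3, 4).
towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
distances = [5.0, 8.06, 6.71]
print(trilaterate(towers, distances))  # ~[3. 4.]
```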

The abstract of Person Identification Based on Hand Tremor Characteristics by Oana Miu, Adrian Zamfir, and Corneliu Florea reads:

A plethora of biometric measures have been proposed in the past. In this paper we introduce a new potential biometric measure: the human tremor. We present a new method for identifying the user of a handheld device using characteristics of the hand tremor measured with a smartphone's built-in inertial sensors (accelerometers and gyroscopes). The main challenge of the proposed method is related to the fact that human normal tremor is very subtle while we aim to address real-life scenarios. To properly address the issue, we have relied on weighted Fourier linear combiner for retrieving only the tremor data from the hand movement and random forest for actual recognition. We have evaluated our method on a database with 10 000 samples from 17 persons reaching an accuracy of 76%.

The authors emphasize the limited size of their dataset and unexplored issues, but with an accuracy of 76% in identification mode and 98% in authentication (matching tremor to user in the database) mode, this approach merits further investigation.

Recording tremor data required no physical modification of the cell phones, only installation of an application that captured gyroscope and accelerometer data.
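To make the pipeline concrete, here is a minimal sketch of the classification idea, not the authors' code. The paper isolates tremor with a weighted Fourier linear combiner; a simple Butterworth band-pass stands in here (physiological tremor sits roughly in the 8–12 Hz band), followed by a random forest as in the paper. The sampling rate, window size, and feature choices are all assumptions.

```python
# A hedged sketch of tremor-based identification, not the authors'
# pipeline: band-pass filter raw accelerometer windows to isolate the
# tremor band, extract simple per-axis statistics, classify by person.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import RandomForestClassifier

FS = 100  # assumed sensor sampling rate, Hz

def tremor_features(accel, fs=FS):
    """Band-pass one window of 3-axis accelerometer data, return stats."""
    b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
    tremor = filtfilt(b, a, accel, axis=0)          # (samples, 3)
    return np.concatenate([tremor.std(axis=0),
                           np.abs(tremor).mean(axis=0)])

# Hypothetical training data: windows of raw accelerometer samples,
# each labeled with the person holding the phone.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 2 * FS, 3))   # 200 two-second windows
labels = rng.integers(0, 17, size=200)        # 17 persons, as in the paper

X = np.array([tremor_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(X[:5]))
```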

Before the targeting community gets too excited about having cell phone location and personal identity via tremor data, the authors do point out that personal tremor data can be recorded and used to defeat identification.

It may be that hand tremor isn't the killer identification mechanism, but what if it were treated as one factor of identification?

That is, hand tremor, plus location (say, a root terminal), plus a password would all be required for a successful login.
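A hedged sketch of what such a check might look like; the threshold, the tremor scorer, and the terminal whitelist are purely illustrative:

```python
# All three factors must pass for a login to succeed. The cutoff,
# terminal whitelist, and tremor score source are assumptions.
from hashlib import sha256

TREMOR_THRESHOLD = 0.9           # assumed match-confidence cutoff
ROOT_TERMINALS = {"console-01"}  # hypothetical allowed locations

def authenticate(user, password, terminal, tremor_score, pw_db):
    """Require password, location, and tremor factors to all match."""
    password_ok = pw_db.get(user) == sha256(password.encode()).hexdigest()
    location_ok = terminal in ROOT_TERMINALS
    tremor_ok = tremor_score >= TREMOR_THRESHOLD  # e.g. from a classifier
    return password_ok and location_ok and tremor_ok

pw_db = {"alice": sha256(b"hunter2").hexdigest()}
print(authenticate("alice", "hunter2", "console-01", 0.95, pw_db))  # True
print(authenticate("alice", "hunter2", "laptop-07", 0.95, pw_db))   # False
```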

That builds on our understanding from topic maps: identification is never a single factor, but can be multiple factors viewed from different perspectives.

In that sense, two-factor identification demonstrates how lame our typical understanding of identity really is.

Superhuman Neural Network – Urban War Fighters Take Note

Wednesday, February 24th, 2016

Google Unveils Neural Network with “Superhuman” Ability to Determine the Location of Almost Any Image

From the post:

Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks specific location cues or is taken indoors or shows a pet or food or some other detail.

Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.

So it’s easy to think that machines would struggle with this task. And indeed, they have.

Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.

Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.

The full paper: PlaNet—Photo Geolocation with Convolutional Neural Networks.

Abstract:

Is it possible to build a system to determine the location where a photo was taken using just its pixels? In general, the problem seems exceptionally difficult: it is trivial to construct situations where no location can be inferred. Yet images often contain informative cues such as landmarks, weather patterns, vegetation, road markings, and architectural details, which in combination may allow one to determine an approximate location and occasionally an exact location. Websites such as GeoGuessr and View from your Window suggest that humans are relatively good at integrating these cues to geolocate images, especially en-masse. In computer vision, the photo geolocation problem is usually approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. While previous approaches only recognize landmarks or perform approximate matching using global image descriptors, our model is able to use and integrate multiple visible cues. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman levels of accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, we demonstrate that this model achieves a 50% performance improvement over the single-image model.
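The core trick deserves emphasis: PlaNet treats geolocation as classification over geographic cells rather than as image retrieval. Here is a minimal sketch of that framing, substituting a fixed lat/lon grid for the paper's adaptive multi-scale cells (the cell size is an arbitrary assumption):

```python
# Geolocation as classification: every coordinate maps to a cell label,
# and a predicted label maps back to a coordinate. PlaNet uses adaptive
# S2 cells; this fixed 5-degree grid is a simplifying stand-in.
CELL_DEG = 5.0  # grid resolution in degrees (an arbitrary assumption)

def cell_id(lat, lon, cell_deg=CELL_DEG):
    """Map a coordinate to the integer label of its grid cell."""
    rows = int(180 / cell_deg)
    cols = int(360 / cell_deg)
    row = min(int((lat + 90) / cell_deg), rows - 1)
    col = min(int((lon + 180) / cell_deg), cols - 1)
    return row * cols + col

def cell_center(cid, cell_deg=CELL_DEG):
    """Invert cell_id: a predicted class maps back to a coordinate."""
    cols = int(360 / cell_deg)
    row, col = divmod(cid, cols)
    return ((row + 0.5) * cell_deg - 90, (col + 0.5) * cell_deg - 180)

# Training pairs would be (image, cell_id(lat, lon)); at inference the
# argmax over cells becomes the predicted location.
paris = cell_id(48.8566, 2.3522)
print(paris, cell_center(paris))   # 1980 (47.5, 2.5)
```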

You might think that with GPS engaged, the location of images is a done deal.

Not really. You can be facing in any direction from a particular GPS location, and in a dynamic environment, analysts and others don't have the time to sort out which images are relevant and which are just noise.

Urban warfare does not occur on a global scale, which brings home the lesson that it isn't the biggest data set that matters but the most relevant and timely one.

Relevantly oriented images and feeds are a natural outgrowth of this work, not to mention pairing those images with other relevant data.

PS: Before I forget, enjoy playing the game at: www.geoguessr.com.

Open Street Map GPS users mapped

Wednesday, April 11th, 2012

Open Street Map GPS users mapped

From the post:

Open Street Map is the data source that keeps on giving. Most recently, the latest release has been a dump of GPS data from its contributors. These are the track files from Sat Nav systems which its users have sourced for the raw data behind OSM.

It’s a huge dataset: 55GB and 2.8bn items. And Guardian Datastore Flickr group user Steven Kay decided to try to visualise it.

This is the result, and it's only a random sample of the whole. The heatmap shows a random sample of 1% of the points and their distribution, to show where GPS is used to upload data to OSM.

There are just short of 2.8 billion points, so the sample is nearly 28 million points. Red cells have the most points, blue cells have the fewest.
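The visualization itself is easy to reproduce in outline. Here is a small sketch, assuming a hypothetical export with one "lat,lon" pair per line: sample roughly 1% of the points and bin them into one-degree cells.

```python
# A hedged sketch of the heatmap idea: random ~1% sample of GPS points,
# binned into a lat/lon grid. The file name and "lat,lon" per-line
# format are assumptions, not the actual OSM dump format.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
lats, lons = [], []
with open("gps_points.csv") as f:    # hypothetical export of the OSM dump
    for line in f:
        if rng.random() < 0.01:      # keep roughly 1% of the points
            lat, lon = map(float, line.split(","))
            lats.append(lat)
            lons.append(lon)

# Bin the sample into one-degree cells; counts drive the color scale.
heat, lat_edges, lon_edges = np.histogram2d(
    lats, lons, bins=[180, 360], range=[[-90, 90], [-180, 180]])

plt.imshow(np.log1p(heat), origin="lower", cmap="coolwarm",
           extent=[-180, 180, -90, 90])
plt.colorbar(label="log(1 + points per cell)")
plt.savefig("osm_gps_heatmap.png")
```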

Great data set on its own but possibly the foundation for something even more interesting.

The intelligence types, who can’t analyze a small haystack effectively, want to build a bigger one: Building a Bigger Haystack.

Why not use GPS data such as this to create an “Intelligence Big Data Mining Test”? That is, we assign significance to patterns in the data and see if the intelligence side can come up with the same answers. We can even tell them what the answers are, because they must still demonstrate how they got there, not just the answer.