Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

August 8, 2017

Radio Navigation, Dodging Government GPS

Filed under: Government,GPS,Privacy — Patrick Durusau @ 4:26 pm

Radio navigation set to make global return as GPS backup, because cyber by Sean Gallagher.

From the post:

Way back in the 1980s, when I was a young naval officer, the Global Positioning System was still in its experimental stage. If you were in the middle of the ocean on a cloudy night, there was pretty much only one reliable way to know where you were: Loran-C, the hyperbolic low-frequency radio navigation system. Using a global network of terrestrial radio beacons, Loran-C gave navigators aboard ships and aircraft the ability to get a fix on their location within a few hundred feet by using the difference in the timing of two or more beacon signals.

An evolution of World War II technology (LORAN was an acronym for long-range navigation), Loran-C was considered obsolete by many once GPS was widely available. In 2010, after the US Coast Guard declared that it was no longer required, the US and Canada shut down their Loran-C beacons. Between 2010 and 2015, nearly everyone else shut down their radio beacons, too. The trial of an enhanced Loran service called eLoran that was accurate within 20 meters (65 feet) also wrapped up during this time.

But now there’s increasing concern about over-reliance in the navigational realm on GPS. Since GPS signals from satellites are relatively weak, they are prone to interference, accidental or deliberate. And GPS can be jammed or spoofed—portable equipment can easily drown them out or broadcast fake signals that can make GPS receivers give incorrect position data. The same is true of the Russian-built GLONASS system.
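Loran's hyperbolic fix is worth pausing on: each pair of beacons constrains the receiver to a hyperbola of constant range difference, and intersecting those curves yields a position. A toy brute-force sketch of that time-difference-of-arrival (TDOA) idea, with made-up beacon coordinates and a free-space propagation speed:

```python
import math

C = 299_792_458.0  # signal propagation speed, m/s (free-space approximation)

def tdoa_fix(beacons, time_diffs, search, step):
    """Brute-force a 2-D position fix from timing differences.

    beacons:    list of (x, y) beacon coordinates in metres
    time_diffs: measured arrival-time differences (seconds) between
                beacon i and beacon 0, for i = 1..n-1
    search:     ((xmin, xmax), (ymin, ymax)) search area
    step:       grid resolution in metres
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    best, best_err = None, float("inf")
    (xmin, xmax), (ymin, ymax) = search
    y = ymin
    while y <= ymax:
        x = xmin
        while x <= xmax:
            p = (x, y)
            # squared error between predicted and measured time differences
            err = 0.0
            for i, td in enumerate(time_diffs, start=1):
                pred = (dist(p, beacons[i]) - dist(p, beacons[0])) / C
                err += (pred - td) ** 2
            if err < best_err:
                best, best_err = p, err
            x += step
        y += step
    return best

# Three beacons and a true position; simulate the timing differences.
beacons = [(0.0, 0.0), (100_000.0, 0.0), (0.0, 100_000.0)]
true = (30_000.0, 40_000.0)
d = [math.hypot(true[0] - bx, true[1] - by) for bx, by in beacons]
tds = [(d[i] - d[0]) / C for i in range(1, 3)]

fix = tdoa_fix(beacons, tds, ((0, 100_000), (0, 100_000)), 1_000)
```

A real receiver solves this with least squares rather than a grid search, and must correct for signal propagation over ground and terrain rather than free space.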

Sean focuses on the “national security” needs for a backup to GPS but it isn’t North Koreans, Chinese or Russians who are using Stingray devices against US citizens.

No, those are all in use by agents of the federal and/or state governments. Ditto for anyone spoofing your GPS in the United States.

You need a GPS backup, but your adversary is quite close to home.

The new protocol is called eLoran, and Sean gives a non-technical overview of it.

You would need unusual requirements to justify a private eLoran installation, but to give you an idea of what is possible:


eLoran technology has been available since the mid-1990s and is still available today. In fact, the state-of-the-art of eLoran continues to advance along with other 21st-century technology. eLoran system technology can be broken down into a few simple components: transmitting site, control and monitor site, differential reference station site and user equipment.

Modern transmitting site equipment consists of a high-power, modular, fully redundant, hot-swappable and software configurable transmitter, and sophisticated timing and control equipment. Standard transmitter configurations are available in power ranges from 125 kilowatts to 1.5 megawatts. The timing and control equipment includes a variety of external timing inputs to a remote time scale, and a local time scale consisting of three ensembled cesium-based primary reference standards. The local time scale is not directly coupled to the remote time scale. Having a robust local time scale while still monitoring many types of external time sources provides a unique ability to provide proof-of-position and proof-of-time. Modern eLoran transmitting site equipment is smaller, lighter, requires less input power, and generates significantly less waste heat than previously used Loran-C equipment.

The core technology at a differential eLoran reference station site consists of three differential eLoran reference station or integrity monitors (RSIMs) configurable as reference station (RS) or integrity monitor (IM) or hot standby (RS or IM). The site includes electric field (E-field) antennas for each of the three RSIMs.

Modern eLoran receivers are really software-defined radios, and are backward compatible with Loran-C and forward compatible, through firmware or software changes. ASF tables are included in the receivers, and can be updated via the Loran data channel. eLoran receivers can be standalone or integrated with GNSS, inertial navigation systems, chip-scale atomic clocks, barometric altimeters, sensors for signals-of-opportunity, and so on. Basically, any technology that can be integrated with GPS can also be integrated with eLoran.
Innovation: Enhanced Loran, GPS World (May, 2015)

Some people are happy with government controlled services. Other people, not so much.

Who is determining your location?

July 28, 2017

Open Source GPS Tracking System: Traccar (Super Glue + Burner Phone)

Filed under: Geographic Data,GPS — Patrick Durusau @ 10:33 am

Open Source GPS Tracking System: Traccar

From the post:

Traccar is an open source GPS tracking system for various GPS tracking devices. This Maven project is written in Java and works on most platforms with an installed Java Runtime Environment. The system supports more than 80 communication protocols from popular vendors and includes a web interface to manage tracking devices online… Traccar is the best free and open source GPS tracking system, offering self-hosted, real-time online vehicle fleet management and personal tracking… Traccar supports more than 80 GPS communication protocols and more than 600 models of GPS tracking devices.

(image omitted)

To start using Traccar Server follow instructions below:

  • Download and install Traccar
  • Reboot system, Traccar will start automatically
  • Open web interface (http://localhost:8082)
  • Log in as administrator (user – admin, password – admin) or register a new user
  • Add new device with unique identifier (see section below)
  • Configure your device to use appropriate address and port (see section below)
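For the burner-phone scenario, a device that can't run a vendor app can often just hit Traccar's plain-HTTP position endpoint (the "OsmAnd" protocol, which by default listens on port 5055 and takes simple GET parameters). A minimal sketch, where the server address and the device identifier `burner-01` are placeholders:

```python
from urllib.parse import urlencode

def position_report_url(server, device_id, lat, lon, timestamp=None):
    """Build a position-report URL for Traccar's OsmAnd-style HTTP
    protocol (plain GET parameters; default port 5055).  Fetching the
    URL, e.g. with urllib.request.urlopen, updates the device's
    position on the server."""
    params = {"id": device_id, "lat": lat, "lon": lon}
    if timestamp is not None:
        params["timestamp"] = timestamp
    return f"http://{server}:5055/?{urlencode(params)}"

url = position_report_url("localhost", "burner-01", 38.8977, -77.0365)
```

Any client that can issue an HTTP GET, including a shell script on the phone, can report positions this way; check the Traccar documentation for the exact parameters your server version accepts.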

With nearly omnipresent government surveillance of citizens, citizens should return the favor by surveilling government officers.

Super Glue plus a burner phone enables GPS tracking of government vehicles.

For those with greater physical access, introducing a GPS device into vehicle wiring is also an option.

You may want to restrict access to Traccar as public access to GPS location data will alert targets to GPS tracking of their vehicles.

It’s a judgment call when the value of accumulated tracking data for a specific purpose offsets the loss of future tracking data.

What if you tracked all county police car locations for a year and patterns emerge from that data? What forums are best for summarized (read aggregated) presentation of the data? When/where is it best to release the detailed data? How do you sign released data to verify future analysis is using the same data?
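On the last question, the standard answer is to publish a cryptographic digest of the released file (and, if you also need to prove who released it, a detached signature over that digest). A sketch using Python's standard library, with a throwaway demo file standing in for the real data release:

```python
import hashlib

def fingerprint(path, algo="sha256", chunk=1 << 20):
    """Compute a stable digest of a released data file.  Publishing the
    digest alongside the data lets anyone verify that later analyses
    ran against the identical bytes."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

# Demo on a throwaway file (a real release would hash the actual CSV).
import os
import tempfile

fd, demo = tempfile.mkstemp()
os.write(fd, b"2017-01-01T00:00:00,35.7,-78.6\n")
os.close(fd)
digest = fingerprint(demo)
os.remove(demo)
```

Anyone re-running the analysis hashes their copy and compares it against the published digest; a mismatch means the data sets differ.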

Hard questions but better hard questions than no tracking data for government agents at all. 😉

June 26, 2016

Another Betrayal By Cellphone – Personal Identity

Filed under: Cybersecurity,Government,GPS,Identification — Patrick Durusau @ 3:15 pm

Normal operation of the cell phone in your pocket betrays your physical location. Your location is calculated by a process known as cell phone tower triangulation. In addition to giving away your location, research shows your cell phone can betray your personal identity as well.

The abstract from: Person Identification Based on Hand Tremor Characteristics by Oana Miu, Adrian Zamfir, Corneliu Florea, reads:

A plethora of biometric measures have been proposed in the past. In this paper we introduce a new potential biometric measure: the human tremor. We present a new method for identifying the user of a handheld device using characteristics of the hand tremor measured with a smartphone built-in inertial sensors (accelerometers and gyroscopes). The main challenge of the proposed method is related to the fact that human normal tremor is very subtle while we aim to address real-life scenarios. To properly address the issue, we have relied on weighted Fourier linear combiner for retrieving only the tremor data from the hand movement and random forest for actual recognition. We have evaluated our method on a database with 10 000 samples from 17 persons reaching an accuracy of 76%.

The authors emphasize the limited size of their dataset and unexplored issues, but with an accuracy of 76% in identification mode and 98% in authentication (matching tremor to user in the database) mode, this approach merits further investigation.
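The paper's pipeline (a weighted Fourier linear combiner feeding a random forest) is more sophisticated, but the core intuition, that physiological tremor lives in a narrow frequency band of the inertial signal, can be sketched with a naive DFT. The 8-12 Hz band, sample rate, and synthetic trace below are illustrative, not the paper's:

```python
import math

def band_energy(samples, rate, lo=8.0, hi=12.0):
    """Rough energy of an accelerometer trace in the physiological
    tremor band (roughly 8-12 Hz), via a naive DFT.  A feature like
    this could feed a classifier such as a random forest."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]  # remove gravity/DC offset
    energy = 0.0
    for k in range(1, n // 2):
        freq = k * rate / n
        if lo <= freq <= hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            energy += (re * re + im * im) / n
    return energy

# Synthetic trace: a 10 Hz "tremor" sampled at 100 Hz for 2 seconds.
rate = 100.0
trace = [math.sin(2 * math.pi * 10.0 * t / rate) for t in range(200)]
```

A hand held nearly still should show far more energy in this band than one whose motion is dominated by slow, deliberate movement, which is what makes the signal usable as a biometric feature.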

Recording tremor data required no physical modification of the cell phones, only installation of an application that captured gyroscope and accelerometer data.

Before the targeting community gets too excited about having cell phone location and personal identity via tremor data, the authors do point out that personal tremor data can be recorded and replayed to defeat identification.

It may be that hand tremor isn’t the killer identification mechanism, but what if it were treated as one factor of identification?

That is, hand tremor, plus location (say, at a root terminal), plus a password would all be required for a successful login.
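As a toy policy, that might look like the following (the threshold and factor names are invented for illustration):

```python
def multi_factor_login(password_ok, location_ok, tremor_score,
                       tremor_threshold=0.75):
    """Toy login policy: every factor must pass.  No single factor,
    not even the biometric, is trusted as identity on its own."""
    return password_ok and location_ok and tremor_score >= tremor_threshold
```

The point is the conjunction: spoofing recorded tremor data gets an attacker nothing without the other factors.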

That builds on our understanding from topic maps: identification is never a single factor, but a combination of factors seen from different perspectives.

In that sense, two-factor identification demonstrates how lame our typical understanding of identity is in fact.

February 24, 2016

Superhuman Neural Network – Urban War Fighters Take Note

Filed under: GPS,Machine Learning,Mapping,Maps,Neural Networks — Patrick Durusau @ 9:38 pm

Google Unveils Neural Network with “Superhuman” Ability to Determine the Location of Almost Any Image

From the post:

Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks specific location cues or is taken indoors or shows a pet or food or some other detail.

Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.

So it’s easy to think that machines would struggle with this task. And indeed, they have.

Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.

Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.

The full paper: PlaNet—Photo Geolocation with Convolutional Neural Networks.

Abstract:

Is it possible to build a system to determine the location where a photo was taken using just its pixels? In general, the problem seems exceptionally difficult: it is trivial to construct situations where no location can be inferred. Yet images often contain informative cues such as landmarks, weather patterns, vegetation, road markings, and architectural details, which in combination may allow one to determine an approximate location and occasionally an exact location. Websites such as GeoGuessr and View from your Window suggest that humans are relatively good at integrating these cues to geolocate images, especially en-masse. In computer vision, the photo geolocation problem is usually approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. While previous approaches only recognize landmarks or perform approximate matching using global image descriptors, our model is able to use and integrate multiple visible cues. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman levels of accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, we demonstrate that this model achieves a 50% performance improvement over the single-image model.
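The classification framing is the interesting move: instead of regressing latitude and longitude, PlaNet predicts one of thousands of geographic cells. A deliberately simplified version using a uniform lat/lon grid (PlaNet actually uses adaptive multi-scale cells, so photo-dense regions get finer resolution):

```python
def cell_id(lat, lon, deg=5.0):
    """Map a coordinate to a fixed lat/lon grid cell id.  A network
    trained on such ids treats geolocation as classification: it
    outputs a probability distribution over cells rather than a
    single coordinate."""
    row = int((lat + 90.0) // deg)    # 0 at the south pole
    col = int((lon + 180.0) // deg)   # 0 at the antimeridian
    cols = int(360.0 / deg)
    return row * cols + col
```

The distribution over cells also expresses uncertainty: a generic beach photo can spread its probability mass over many coastal cells instead of committing to one wrong point.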

You might think that with GPS enabled, the location of images is a done deal.

Not really. You can be facing in any direction from a particular GPS location, and in a dynamic environment, analysts and others don’t have the time to sort out which images are relevant and which are just noise.

Urban warfare does not occur on a global scale, which brings home the lesson that it isn’t the biggest data set that matters, but the most relevant and timely one.

Relevantly oriented images and feeds are a natural outgrowth of this work. Not to mention pairing those images with other relevant data.

PS: Before I forget, enjoy playing the game at: www.geoguessr.com.

April 11, 2012

Open Street Map GPS users mapped

Filed under: GPS,Mapping,Maps,Open Street Map — Patrick Durusau @ 6:15 pm

Open Street Map GPS users mapped

From the post:

Open Street Map is the data source that keeps on giving. Most recently, the latest release has been a dump of GPS data from its contributors. These are the track files from sat nav systems which users have contributed as the raw data behind OSM.

It’s a huge dataset: 55GB and 2.8bn items. And Guardian Datastore Flickr group user Steven Kay decided to try to visualise it.

This is the result – and it’s only a random sample of the whole. The heatmap shows a random sample of 1% of the points and their distribution, to show where GPS is used to upload data to OSM.

There are just short of 2.8 billion points, so the sample is nearly 28 million points. Red cells have the most points, blue cells have the fewest.
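The sample-then-bin step behind such a heatmap is simple enough to sketch. The cell size is a guess and the points are synthetic; only the 1% sampling rate comes from the description above:

```python
import random
from collections import Counter

def heatmap(points, sample_rate=0.01, cell_deg=1.0, seed=42):
    """Randomly sample GPS points and bin the keepers into lat/lon
    cells; the per-cell counts drive the red (many) to blue (few)
    colouring of a rendered heatmap."""
    rng = random.Random(seed)
    counts = Counter()
    for lat, lon in points:
        if rng.random() < sample_rate:
            cell = (int(lat // cell_deg), int(lon // cell_deg))
            counts[cell] += 1
    return counts

# 10,000 synthetic points clustered on one city; expect ~100 sampled.
pts = [(51.5, -0.1)] * 10_000
hm = heatmap(pts)
```

At 2.8 billion points the real job would stream the file rather than hold it in memory, but the sampling and binning logic is the same.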

Great data set on its own but possibly the foundation for something even more interesting.

The intelligence types, who can’t analyze a small haystack effectively, want to build a bigger one: Building a Bigger Haystack.

Why not use GPS data such as this to create an “Intelligence Big Data Mining Test?” That is, we assign significance to patterns in the data and see if the intelligence side can come up with the same answers. We can even tell them the answers in advance, since they must still demonstrate how they got there, not just produce the answer.

Powered by WordPress