Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

October 11, 2012

Conflict History: All Human Conflicts on a Single Map [Battle of Jericho -1399-04-20?]

Filed under: Geography,History,Mapping,Maps — Patrick Durusau @ 3:44 pm

Conflict History: All Human Conflicts on a Single Map

From the post:

Conflict History [conflicthistory.com], developed by TecToys, summarizes all major human conflicts onto a single world map – from the historical wars way before the birth of Christ, until the drone attacks in Pakistan that are still happening today. The whole interactive map is built upon data retrieved from Google and Freebase open data services.

The world map is controlled by an interactive timeline. An additional search box allows more focused exploration by names or events, while individual conflict titles or icons can be selected to reveal more detailed information, all geographically mapped.

I had to run it back a good ways before I could judge its coverage.

I am not sure about the Battle of Jericho occurring on 04-20 in 1399 BCE. That seems a tad precise.

Still, it is an interesting demonstration of mapping technology.

For Eurocentric points, can you name the longest continuous period of peace (according to European historians)?

…[A] Common Operational Picture with Google Earth (webcast)

Filed under: Geographic Data,Geographic Information Retrieval,Google Earth,Mapping,Maps — Patrick Durusau @ 10:01 am

Joint Task Force – Homeland Defense Builds a Common Operational Picture with Google Earth

October 25, 2012 at 02:00 PM Eastern Daylight Time

The security for the Asia-Pacific Economic Cooperation summit in 2011 in Honolulu, Hawaii involved many federal, state & local agencies. The complex task of coordinating information sharing among agencies was the responsibility of Joint Task Force – Homeland Defense (JTF-HD). JTF-HD turned to Google Earth technology to build a visualization capability that enabled all agencies to share information easily & ensure a safe and secure summit.

What you will learn:

  • Best practices for sharing geospatial information among federal, state & local agencies
  • How to incorporate data from many sources into your own Google Earth globe
  • How to get accurate maps with limited bandwidth or no connection at all

Speaker: Marie Kennedy, Joint Task Force – Homeland Defense

Sponsored by Google.

In addition to the techniques demonstrated, I suspect the main lesson will be leveraging information/services that already exist.

Or information integration if you prefer a simpler description.

Information can be integrated by conversion or mapping.

Which one you choose depends upon your requirements and the information.

Reusable information integration (RI2), where you leverage your own investment, well, that’s another topic altogether. 😉

Ask: Are you spending money to be effective or spending money to maintain your budget relative to other departments?

If the former, consider topic maps. If the latter, carry on.

October 7, 2012

The Forgotten Mapmaker: Nokia… [Lessons for Semantic Map Making?]

Filed under: Mapping,Maps,Semantics — Patrick Durusau @ 7:57 pm

The Forgotten Mapmaker: Nokia Has Better Maps Than Apple and Maybe Even Google by Alexis C. Madrigal.

What’s Nokia’s secret? Twelve billion probe data points a month, including data from FedEx and other logistic companies.

Notice that the logistic companies are not collecting mapping data per se, they are delivering goods.

Nokia is building maps based on data collected for another purpose, one completely routine and unrelated to map making.

Does that suggest something to you about semantic map making?

That we need to capture semantics as users travel through data for other purposes?

If I knew what those opportunities were I would have put them at the top of this post. Suggestions?

PS: Sam Hunting pointed me towards this article.

October 4, 2012

Google Maps: A Prelude to Broader Predictive Search

Filed under: Interface Research/Design,Mapping,Maps — Patrick Durusau @ 2:01 pm

Google Maps: A Prelude to Broader Predictive Search by Stephen E. Arnold.

From the post:

Short honk. Google’s MoreThanaMap subsite signals an escalation in the map wars. You will want to review the information at www.morethanamap.com. The subsite presents the new look of Google’s more important features and services. The demonstrations are front and center. The focus is on visualization of mashed up data; that is, compound displays. The real time emphasis is clear as well. The links point to developers and another “challenge.” It is clear that Google wants to make it desirable for programmers and other technically savvy individuals to take advantage of Google’s mapping capabilities. After a few clicks, Google has done a good job of making clear that findability and information access shift a map from a location service to a new interface.

You really need to see the demos to appreciate what can be done with the Google Map API.

Although, I remember the flight from Atlanta to Gatwick (London) as being longer than it seems in the demo. 😉

October 1, 2012

Google Maps Goes Deep-Sea Diving to Chart the World’s Ocean Floors

Filed under: Mapping,Maps — Patrick Durusau @ 2:47 pm

Google Maps Goes Deep-Sea Diving to Chart the World’s Ocean Floors by David Gianatasio.

A quick blurb about Google Maps adding select sea beds to its map collection.

Suggestions on what other ocean floor data is commonly available?

And with data in hand, what other data would you merge it with?

Apple Maps: By the “big data” short hairs

Filed under: BigData,Mapping,Maps — Patrick Durusau @ 10:48 am

Mike Loukides in Apple’s maps: Apple’s maps problem isn’t about software or design. It’s about data nails the problem with Apple Maps. It’s the data, stupid!

Here’s the difficulty. As Stephen O’Grady has pointed out, the problem with maps is really a data problem, not a software or design problem. If Apple’s maps app was ugly or had a poor user interface, it would be fixed within a month. But Apple is really looking at a data problem: bad data, incomplete data, conflicting data, poor quality data, incorrectly formatted data. Anyone who works with data understands that 80% of the work in any data product is getting your data into good enough shape so that it’s useable. Google is a data company, and they understand this; hence the reports of more than 7,000 people working on Google Maps. And even Google Maps has its errors; I just reported a “road” that is really just a poorly maintained trail.

Mike’s post is amusing and informative so be sure to read it.

But remember these two points:

  1. Data is always dirty, syntactically and/or semantically. “Big data” is “big dirty data.”
  2. Google has 7,000 people, not servers, clusters, algorithms, etc., working on Google Maps. (Is that evidence that “big dirty data” requires human correction?)

The bigger the data, the more dirt you will encounter.

Is your data application going to be the next “Apple Maps?”

September 24, 2012

Foundation grants $575,000 for new OpenStreetMap tools

Filed under: Geographic Data,Mapping,Maps,Open Street Map — Patrick Durusau @ 5:22 pm

Foundation grants $575,000 for new OpenStreetMap tools

From the post:

The Knight Foundation has awarded a $575,000 grant to Washington-DC-based data visualisation and mapping firm Development Seed to work on new tools for OpenStreetMap (OSM). The Knight Foundation is a non-profit organisation dedicated to supporting quality journalism, media innovation and engaging communities. The award is one of six made by the Knight Foundation as part of Knight News Challenge: Data.

The funding will be used by developers from MapBox, part of Development Seed that designs maps using OSM data, to create three new open source tools for the OSM project to “lower the threshold for first time contributors”, while also making data “easier to consume by providing a bandwidth optimised data delivery system”.

Topic maps with geographic data are a subset of topic maps overall, but it’s an important use case. And it is easy for people to relate to a “map” that looks like a “map.” Takes less mental effort. (One of those “slow” thinking things.) 😉

Looking forward to more good things to come from OpenStreetMap!

September 21, 2012

Easy and customizable maps with TileMill

Filed under: Mapping,Maps — Patrick Durusau @ 7:18 pm

Easy and customizable maps with TileMill by Nathan Yau.

From the post:

I’m late to this party. TileMill, by mapping platform MapBox, is open source software that lets you quickly and easily create and edit maps. It’s available for OS X, Windows, and Ubuntu. Just download and install the program, and then load a shapefile for your point of interest.

For those unfamiliar with shapefiles, it’s a file format that describes geospatial data, such as polygons (e.g. countries), lines (e.g. roads), and points (e.g. landmarks), and they’re pretty easy to find these days. For example, you can download detailed shapefiles for roads, bodies of water, and blocks in the United States from the Census Bureau in just a few clicks.
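For readers who have never peeked inside one: the .shp component of a shapefile begins with a fixed 100-byte header carrying a magic number, the file length, a shape type code, and a bounding box. A minimal, stdlib-only sketch of parsing that header (the synthetic header bytes below are invented for the demonstration; real work should use a maintained reader like pyshp or OGR):

```python
import struct

# Shape type codes from the ESRI shapefile specification (a few of them)
SHAPE_TYPES = {0: "null", 1: "point", 3: "polyline", 5: "polygon"}

def parse_shp_header(data: bytes) -> dict:
    """Parse the fixed 100-byte header of an ESRI shapefile (.shp)."""
    if len(data) < 100:
        raise ValueError("a shapefile header is 100 bytes")
    file_code, = struct.unpack(">i", data[0:4])      # always 9994, big-endian
    file_length, = struct.unpack(">i", data[24:28])  # length in 16-bit words
    version, shape_type = struct.unpack("<ii", data[28:36])  # little-endian
    xmin, ymin, xmax, ymax = struct.unpack("<4d", data[36:68])
    return {
        "file_code": file_code,
        "version": version,
        "shape_type": SHAPE_TYPES.get(shape_type, shape_type),
        "bbox": (xmin, ymin, xmax, ymax),
    }

# Synthetic header for a polygon file roughly covering the continental US
header = struct.pack(">i", 9994) + b"\x00" * 20 + struct.pack(">i", 50)
header += struct.pack("<ii", 1000, 5)
header += struct.pack("<8d", -125.0, 24.0, -66.0, 49.0, 0, 0, 0, 0)
print(parse_shp_header(header))
```

The point is how little structure stands between you and the geometry: after the header, the file is just a sequence of typed shape records.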

Very cool!

Makes me wonder about shapefiles and relating information to them as information products.

You can download a road shapefile but does it include the road-blocking accidents for the last five (5) years?

September 16, 2012

In Defense of the Power of Paper [Geography of Arguments/Information]

Filed under: Geography,Mapping,Maps,Marketing — Patrick Durusau @ 10:33 am

In her recent editorial, In Defense of the Power of Paper, Phyllis Korkki quotes Richard H. R. Harper saying:

Reading a long document on paper rather than on a computer screen helps people “better understand the geography of the argument contained within,” said Richard H. R. Harper, a principal researcher for Microsoft in Cambridge, England, and co-author with Abigail J. Sellen of “The Myth of the Paperless Office,” published in 2001.

Today’s workers are often navigating through multiple objects in complex ways and creating new documents as well, Mr. Harper said. Using more than one computer screen can be helpful for all this cognitive juggling. But when workers are going back and forth between points in a longer document, it can be more efficient to read on paper, he said. (emphasis added)

To “…understand the geography of the argument….”

I rather like that.

For all the debates about pointing, response codes, locators, identifiers, etc., on the web, all that was ever at stake was the document as blob.

Our “document as blob” schemes missed:

  • Complex relationships between documents
  • Tracking influences on both authors and readers
  • Their continuing but changing roles in the social life of information, and
  • The geography of arguments they contain (with at least as much complexity as documents as blobs).

Others may not be interested in the geography of arguments/information in your documents.

What about you?

Topic maps can help you break the “document as blob” barrier.

With topic maps you can plot the geography of/in your documents.

Interested?

September 8, 2012

10 Productivity Tips for Working with Large Mind Maps

Filed under: Mapping,Maps,Mind Maps,Visualization — Patrick Durusau @ 1:22 pm

10 Productivity Tips for Working with Large Mind Maps by Roger C. Parker.

From the post:

A while ago, I wrote a series of posts helping individuals get the most out of their mapping efforts. Today, I’d like to share 10 productivity tips and best practices for working with large mind maps.

[Image: a large mind map]

As illustrated by the image above, mind maps can become substantially difficult to work with when the number of topics exceeds 60. At this size, should you try to use MindManager’s Fit Map view, the type size decreases so much that it becomes difficult to read. If you Zoom In to increase the type size, however, you lose context, or the “big picture” ability to view each topic in relation to all the other topics. So, what do you do?

A number of useful tips while constructing graphical views of topic maps. Or even for construction of topic maps per se.

Except for suggestion #7:

7. Search for duplicates before entering new topics

Inserting a duplicate topic is always a problem. Instead of manually searching through various topics looking for duplicates try using MindManager’s Search In All Open Maps command – it will certainly save you some time.

You should not need that one with good topic map software. 😉

September 7, 2012

HTML5: Render urban population growth on a 3D world globe with Three.js and canvas

Filed under: HTML5,Maps,Marketing,Three.js — Patrick Durusau @ 2:47 pm

HTML5: Render urban population growth on a 3D world globe with Three.js and canvas By jos.dirksen.

From the post:

In this article I’ll once again look at data / geo visualization with Three.js. This time I’ll show you how you can plot the urban population growth over the years 1950 to 2050 on a 3D globe using Three.js. The resulting visualization animates the growth of the world’s largest cities on a rotating 3D world. The result we’re aiming for looks like this (for a working example look here.):
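The article's demo is Three.js/JavaScript, but the heart of any plot-cities-on-a-globe visualization is the same spherical-to-Cartesian conversion. A Python sketch of that step, using the y-up axis convention common in graphics (the sign choices are one convention among several, not taken from the article):

```python
import math

def latlon_to_xyz(lat_deg, lon_deg, radius=1.0):
    """Convert geographic coordinates to a Cartesian point on a sphere.

    Convention: y is the polar axis (north up). A point is placed by
    rotating down from the north pole by its colatitude, then around
    the polar axis by its longitude.
    """
    phi = math.radians(90.0 - lat_deg)     # colatitude
    theta = math.radians(lon_deg + 180.0)  # rotation around the polar axis
    x = -radius * math.sin(phi) * math.cos(theta)
    y = radius * math.cos(phi)
    z = radius * math.sin(phi) * math.sin(theta)
    return (x, y, z)

# The north pole lands on top of the globe regardless of longitude:
print(latlon_to_xyz(90.0, 0.0))  # y component is 1.0, x and z vanish
```

Each city then becomes a small marker (or a bar scaled by population) positioned at `latlon_to_xyz(lat, lon)` on the globe mesh.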

Possible contender for the topic map graphic? A 3D globe?

If you think of topic maps as representing a user’s world view?

Perhaps, perhaps, but then you will need a flat earth version for some users as well. 😉

August 30, 2012

Physics as a geographic map

Filed under: Mapping,Maps,Science — Patrick Durusau @ 3:04 pm

Physics as a geographic map

Nathan Yau of Flowing Data points to a rendering of the subject area physics as a geographic map.

Somewhat dated (1939) but shows a lot of creativity and no small amount of cartographic skill.

Rather than calling it a “fictional” map I would prefer to say it is an intellectual map of physics.

Like all maps, the objects appear in explicit relationships to each other and there are no doubt as many implicit relationships as there are viewers of the map.

What continuum or dimensions would you use to create a map of modern ontologies?

That could make a very interesting exercise for the topic maps class. To have students create maps and then attempt to draw out what unspoken dimensions were driving the layout between parts of the map.

Suggestions of mapping software anyone?

August 29, 2012

ACLU maps cost of marijuana enforcement [Comparison]

Filed under: Mapping,Maps,Mashups — Patrick Durusau @ 3:36 pm

ACLU maps cost of marijuana enforcement

From the article:

Washington spent more than $200 million on enforcing and prosecuting marijuana laws and incarcerating the folks that violated them, the American Civil Liberties Union of Washington estimates.

The organization released an interactive map today of what it estimates each county spent on marijuana law enforcement. Although not specifically tied to Initiative 502, which gives voters a chance to legalize marijuana use for adults under some circumstances, ACLU is a supporter of the ballot measure.

I have always wondered what motivation, other than fear of others having a good time, could drive something as inane as an anti-marijuana policy.

I think I may have a partial answer.

That old American standby – keeping down competition.

In describing the $425.7 million dollars taken in by the Washington State Liquor Control Board, a map was given to show where the money went:

In Fiscal Year 2011, $345 million was sent to the General Fund, $71 million to cities and counties, $8.2 million to education and prevention, and $1.5 million to research. To see how much revenue your city or county received from the WSLCB in Fiscal Year 2011, visit www.liq.wa.gov/about/where-your-liquor-dollars-go [All the “where-your-liquor-dollars-go” links appear to be broken. They point to an FAQ and not the documentation.].

Consider Pierce County: spent on anti-marijuana enforcement, $21,138,797.

If you can guess the direct URL to the county by county liquor proceeds: http://liq.wa.gov/publications/releases/2011CountiesRevenue/fy2011-PIERCE.pdf (for Pierce county), you will find in 2011, the entire county got $7,489,073.

I’m just a standards editor and semantic integration enthusiast and by no means a captain of industry.

But, spending three times the revenue from competitors to marijuana on anti-marijuana activities makes no business sense.

If you can find the liquor revenue numbers for 2011, what other comparisons would you draw?

August 25, 2012

The State of Hawai’i Demands a New Search Engine

Filed under: GIS,Maps,Searching — Patrick Durusau @ 4:06 pm

The State of Hawai’i Demands a New Search Engine by Matthew Hurst.

Matthew writes:

We will soon be embarking on a short trip to Hawai’i. Naturally, I’m turning to search engines to find out about the best beaches to go to. However, it turns out that this simple problem – where to go on vacation – is terribly under supported by today’s search engines.

Firstly, there is the problem with the Web Proposition. The web proposition – the reason for traditional web search engines to exist at all – states that there is a page containing the information you seek somewhere online. While there are many pages that list the ‘best beaches in Hawai’i’ as the analysis below demonstrates these are just sets of opinions – often very different in nature. An additional problem with the Web Proposition is that information and monetization don’t always align. Many of the ‘best’ beaches pages are really channels through which hotel and real estate commerce is done. Thus a balance is needed between objective information and commercial interests.

Secondly, beaches are not considered local entities by search engines. While the query {beaches in kauai} is very similar in form to the query {restaurants in kauai} the latter generates results of entities of type while the former generates results of entities of type . While local search sounds like search over entities which have location, it is largely limited to local entities with commercial intent.

Finally, there is general confusion due to the fact that the state of Hawai’i contains a sub-region (an island) called Hawai’i.

As you may have guessed, had Matthew’s searches been successful, there would be no blog post.

How would you use topic maps to solve the shortfalls that Matthew identifies?

What other content would you aggregate with beaches?

August 17, 2012

In Maps We Trust

Filed under: Mapping,Maps — Patrick Durusau @ 3:29 pm

In Maps We Trust by James Cheshire.

From the post:

Of all the different types of data visualisation, maps* seem to have the best reputation. I think people are much less likely to trust a pie chart, for example, than a map. In a sense, this is amazing given that all maps are abstractions from reality. They can never tell the whole truth and are nearly all based on data with some degree of uncertainty that will vary over large geographic areas. An extreme interpretation of this view is that all maps are wrong, in which case we shouldn’t bother making them. A more moderate view (and the one I take) is that maps are never perfect so we need to create and use them responsibly – not making them at all would make us worse off. This responsibility criterion is incredibly important because of the high levels of belief people have in maps. You have to ask: What are the consequences of the map you have made? Now that maps are easier than ever to produce, they risk losing their lofty status as some of the most trusted data visualisations if those making them stop asking themselves this tough question.

*here I mean maps that display non-navigational data.

I posted a response over at James’ blog:

How do you identify “non-navigational data” in a map?

Your comment made me think of convention and some unconventional maps.

Any data rendered in relationship to other data can be used for “navigation.” Whether I intend to “navigate” as “boots on the ground” or between ideas.

Or to put it another way, who is to say what is or is not “non-navigational data?” The map maker or the reader/user of the map? Or what use is “better” for a map?

Great post!

Patrick

Curious, would you ask: “What are the consequences of the map you have made?”

July 29, 2012

National Cooperative Geologic Mapping Program

Filed under: Geologic Maps,Mapping,Maps — Patrick Durusau @ 4:56 am

National Cooperative Geologic Mapping Program

From this week’s Scout Report:

The National Cooperative Geologic Mapping Program (NCGMP) is “the primary source of funds for the production of geologic maps in the United States.” The NCGMP was created by the National Geologic Mapping Act of 1992 and its work includes producing surficial and bedrock geologic map coverage for the entire country. The program has partnered with a range of educational institutions, and this site provides access to many of the fruits of this partnership, along with educational materials. The place to start here is the What’s a Geologic Map? area. Here visitors can read a helpful article on this subject, authored by David R. Soller of the U.S. Geological Survey. Moving on, visitors can click on the National Geologic Map Database link. The database contains over 88,000 maps, along with a lexicon of geologic names, and material on the NCGMP’s upcoming mapping initiatives. Those persons with an interest in the organization of the NCGMP should look at the Program Components area. Finally, the Products-Standards area contains basic information on the technical standards and expectations for the mapping work.

More grist for your topic map mill!

July 21, 2012

Mapping Public Opinion: A Tutorial

Filed under: Mapping,Maps,R — Patrick Durusau @ 8:00 pm

Mapping Public Opinion: A Tutorial by David Sparks.

From the post:

At the upcoming 2012 summer meeting of the Society of Political Methodology, I will be presenting a poster on Isarithmic Maps of Public Opinion. Since last posting on the topic, I have made major improvements to the code and robustness of the modeling approach, and written a tutorial that illustrates the production of such maps.

This tutorial, in a very rough draft form, can be downloaded here [PDF]. I would welcome any and all comments on clarity, readability, and the method itself. Please feel free to use this code for your own projects, but I would be very interested in seeing any results, and hope you would be willing to share them.

An interesting mapping exercise, even though I find political opinion mapping just a tad tedious. Hasn’t changed significantly in years, which explains “safe” seats for both Republicans and Democrats in the United States.

Still, the techniques are valid and can be useful in other contexts.

July 18, 2012

Three.js: render real world terrain from heightmap using open data

Filed under: Mapping,Maps,Three.js,Visualization — Patrick Durusau @ 7:11 pm

Three.js: render real world terrain from heightmap using open data by Jos Dirksen.

From the post:

Three.js is a great library for creating 3D objects and animations. In a couple of previous articles I explored this library a bit and in one of those examples I showed you how you can take GIS information (in geoJSON format) and use D3.js and three.js to convert it to a 3D mesh you can render in the browser using javascript. This is great for infographics, but it doesn’t really show a real map, a real terrain. Three.js, luckily, also has helper classes to render a terrain as you can see in this demo: http://mrdoob.github.com/three.js/examples/webgl_terrain_dynamic.html

This demo uses a noise generator to generate a random terrain, and adds a whole lot of extra functionality, but we can use this concept to also render maps of real terrain. In this article I’ll show you how you can use freely available open geo data containing elevation info to render a simple 3D terrain using three.js. In this example we’ll use elevation data that visualizes the data for the island of Corsica.
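The article's code is Three.js, but the core move of turning a heightmap into renderable geometry is language-neutral: every grid sample becomes a vertex, and every square of four neighbouring samples becomes two triangles. A minimal Python sketch of that step (the function name and scaling parameter are illustrative, not from the article):

```python
def heightmap_to_mesh(heights, scale=1.0):
    """Turn a 2D grid of elevations into vertex and triangle index lists.

    Grid cell (i, j) becomes vertex (x, height, z); each square of four
    neighbouring cells is split into two triangles, which is the layout
    a PlaneGeometry-style terrain mesh expects.
    """
    rows, cols = len(heights), len(heights[0])
    vertices = [(j * scale, heights[i][j], i * scale)
                for i in range(rows) for j in range(cols)]
    triangles = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j   # top-left vertex index
            b = a + 1          # top-right
            c = a + cols       # bottom-left
            d = c + 1          # bottom-right
            triangles.append((a, c, b))
            triangles.append((b, c, d))
    return vertices, triangles

# A 3x3 grid yields 9 vertices and 8 triangles (2 per cell, 4 cells)
grid = [[0, 1, 0], [1, 2, 1], [0, 1, 0]]
verts, tris = heightmap_to_mesh(grid)
print(len(verts), len(tris))  # → 9 8
```

With real elevation data, `heights` would come from the open dataset the article uses; the mesh-building loop is unchanged.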

Rendering real world terrain, supplemented by a topic map for annotation, sounds quite interesting.

Assuming you could render any real world terrain, what would it be? For what purpose? What annotations would you supply?

July 17, 2012

Making maps, part 1: Less interactivity

Filed under: Mapping,Maps — Patrick Durusau @ 6:37 pm

Making maps, part 1: Less interactivity

A six part series on making maps from the Chicago Tribune that has this gem in the first post:

Back to the beer-fueled map talk… so, how can we do this better? The answer quickly became obvious: borrow from paper. What’s great about paper maps?

  • Paper maps are BIG
  • Paper maps are high resolution (measured by DPI *and* information-density)
  • Paper maps are general at a distance and specific up close

What if most things on your page design didn’t jump, spin or flop on mouse-over?

Could you still deliver your content effectively?

Or have you mistaken interactivity for being effective?

On the other hand, are paper maps non-interactive?

I ask because I saw a book this past weekend that had no moving parts, popups, etc., but reading it you would swear it was interactive.

More on that in a future post.

I first saw this at PeteSearch.

July 12, 2012

Real-time Twitter heat map with MongoDB

Filed under: Mapping,Maps,MongoDB,Tweets — Patrick Durusau @ 1:54 pm

Real-time Twitter heat map with MongoDB

From the post:

Over the last few weeks I got in touch with the fascinating field of data visualisation which offers great ways to play around with the perception of information.

In a more formal approach data visualisation denotes “The representation and presentation of data that exploits our visual perception abilities in order to amplify cognition.”

Nowadays there is a huge flood of information that hits us every day. Enormous amounts of data collected from various sources are freely available on the internet. One of these data gargoyles is Twitter, producing around 400 million (400 000 000!) tweets per day!

Tweets basically offer two “layers” of information. The obvious direct information within the text of the Tweet itself and also a second layer that is not directly perceived which is the Tweets’ metadata. In this case Twitter offers a large number of additional information like user data, retweet count, hashtags, etc. This metadata can be leveraged to experience data from Twitter in a lot of exciting new ways!

So as a little weekend project I have decided to build a small piece of software that generates real-time heat maps of certain keywords from Twitter data.
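The post's implementation streams tweets into MongoDB and queries it when rendering; the essential step, binning geotagged points into grid cells whose counts drive the heat colours, can be sketched in a few lines of Python (the coordinates below are invented for the example):

```python
from collections import Counter

def heatmap_bins(points, cell=1.0):
    """Bucket (lat, lon) points into grid cells; counts drive heat colour.

    `cell` is the bin width in degrees, so cell=1.0 gives cells roughly
    110 km tall (and narrower east-west away from the equator).
    """
    bins = Counter()
    for lat, lon in points:
        # Floor division maps each coordinate onto its grid cell
        bins[(int(lat // cell), int(lon // cell))] += 1
    return bins

# Tweets hypothetically geocoded around Berlin (~52.5 N, 13.4 E), one outlier
tweets = [(52.52, 13.40), (52.51, 13.38), (52.49, 13.45), (40.71, -74.01)]
print(heatmap_bins(tweets).most_common(1))  # → [((52, 13), 3)]
```

In the real-time version the same counters would be kept in MongoDB and incremented as tweets arrive, with the renderer reading them back on each refresh.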

Yes, “…in a lot of exciting new ways!” +1!

What about maintenance issues on such a heat map? The capture of terms to the map is fairly obvious, but a subsequent user may be left in the dark as to why this term and not some other term was captured. Or some then-current synonym for a term that is being captured?

Or imposing semantics on tweets or terms that are unexpected or non-obvious to a casual or not so casual observer.

You and I can agree red means go and green means stop in a tweet. That’s difficult to maintain as the number of participants and terms go up.

A great starting place to experiment with topic maps to address such issues.

I first saw this in the NoSQL Weekly Newsletter.

July 10, 2012

Visualization Tools for Understanding Big Data

Filed under: BigData,Mapping,Maps,Visualization — Patrick Durusau @ 10:01 am

Visualization Tools for Understanding Big Data by James Cheshire.

From the post:

I recently co-wrote an editorial (download the full version here) with Mike Batty (UCL CASA) in which we explored some of the current issues surrounding the visualisation of large urban datasets. We were inspired to write it following the CASA Smart Cities conference and we included a couple of visualisations I have blogged here. Much of the day was devoted to demonstrating the potential of data visualisation to help us better understand our cities. Such visualisations would not have been possible a few years ago using desktop computers; their production has ballooned as a result of recent technological (and in the case of OpenData, political) advances.

In the editorial we argue that the many new visualisations, such as the map of London bus trips above, share much in common with the work of early geographers and explorers whose interests were in the description of often-unknown processes. In this context, the unknown has been the ability to produce a large-scale impression of the dynamics of London’s bus network. The pace of exploration is largely determined by technological advancement and handling big data is no different. However, unlike early geographic research, mere description is no longer a sufficient benchmark to constitute advanced scientific enquiry into the complexities of urban life. This point, perhaps, marks a distinguishing feature between the science of cities and the thousands of rapidly produced big data visualisations and infographics designed for online consumption. We are now in a position to deploy the analytical methods developed since geography’s quantitative revolution, which began half a century ago, to large datasets to garner insights into the process. Yet, many of these methods are yet to be harnessed for the latest datasets due to the rapidity and frequency of data releases and the technological limitations that remain in place (especially in the context of network visualisation). That said, the path from description to analysis is clearly marked and, within this framework, visualisation plays an important role in the conceptualisation of the system(s) of interest, thus offering a route into more sophisticated kinds of analysis.

Curious if you would say that topic maps as navigation artifacts are “descriptive” as opposed to “explorative?”

What would you suggest as a basis for “interactive” topic maps that present the opportunity for dynamic subject identification, associations and merging?

July 4, 2012

JQVMap

Filed under: JQuery,Mapping,Maps,SVG — Patrick Durusau @ 7:35 pm

JQVMap

From the post:

JQVMap is a jQuery plugin that renders Vector Maps. It uses resizable Scalable Vector Graphics (SVG) for modern browsers like Firefox, Safari, Chrome, Opera and Internet Explorer 9. Legacy support for older versions of Internet Explorer 6-8 is provided via VML.

I saw this at Pete Warden’s Five Links, along with the Plane Networks.

July 3, 2012

Mapping Research With WikiMaps

Filed under: Mapping,Maps,WikiMaps,Wikipedia — Patrick Durusau @ 5:12 am

Mapping Research With WikiMaps

From the post:

An international research team has developed a dynamic tool that allows you to see a map of what is “important” on Wikipedia and the connections between different entries. The tool, which is currently in the “alpha” phase of development, displays classic musicians, bands, people born in the 1980s, and selected celebrities, including Lady Gaga, Barack Obama, and Justin Bieber. A slider control, or play button, lets you move through time to see how a particular topic or group has evolved over the last 3 or 4 years. The desktop version allows you to select any article or topic.

Wikimaps builds on the fact that Wikipedia contains a vast amount of high-quality information, despite the very occasional spot of vandalism and the rare instances of deliberate disinformation or inadvertent misinformation. It also carries with each article meta data about the page’s authors and the detailed information about every single contribution, edit, update and change. This, Reto Kleeb, of the MIT Center for Collective Intelligence, and colleagues say, “…opens new opportunities to investigate the processes that lie behind the creation of the content as well as the relations between knowledge domains.” They suggest that because Wikipedia has such a great amount of underlying information in the metadata it is possible to create a dynamic picture of the evolution of a page, topic or collection of connections.

See the demo version: http://www.ickn.org/wikimaps/.

For some very cutting edge thinking, see: Intelligent Collaborative Knowledge Networks (MIT) which has a download link to “Condor,” a local version of the wikimaps software.

Wikimaps builds upon a premise similar to the original premise of the WWW. Links break, deal with it. Hypertext systems prior to the WWW had tremendous overhead to make sure links remained viable. So much overhead that none of them could scale. The WWW allowed links to break and to be easily created. That scales. (The failure of the Semantic Web can be traced to the requirement that links not fail. Just the opposite of what made the WWW workable.)

Wikimaps builds upon the premise that the facts we have may be "incomplete, incorrect, partial or even contradictory." All things that most semantic systems posit as verboten. An odd requirement, since our information is always incomplete, possibly incorrect, partial or even contradictory. We have set requirements for our information systems that we can't meet working by hand. Not surprising that our systems fail and fail to scale.

How much information failure can you tolerate?

A question that should be asked of every information system at the design stage. If the answer is none, move onto a project with some chance of success.

I was surprised at the journal reference, not one I would usually scan. Recent origin, expensive, not in library collections I access.

Journal reference:

Reto Kleeb et al. Wikimaps: dynamic maps of knowledge. Int. J. Organisational Design and Engineering, 2012, 2, 204-224

Abstract:

We introduce Wikimaps, a tool to create a dynamic map of knowledge from Wikipedia contents. Wikimaps visualise the evolution of links over time between articles in different subject areas. This visualisation allows users to learn about the context a subject is embedded in, and offers them the opportunity to explore related topics that might not have been obvious. Watching a Wikimap movie permits users to observe the evolution of a topic over time. We also introduce two static variants of Wikimaps that focus on particular aspects of Wikipedia: latest news and people pages. ‘Who-works-with-whom-on-Wikipedia’ (W5) links between two articles are constructed if the same editor has worked on both articles. W5 links are an excellent way to create maps of the most recent news. PeopleMaps only include links between Wikipedia pages about ‘living people’. PeopleMaps in different-language Wikipedias illustrate the difference in emphasis on politics, entertainment, arts and sports in different cultures.

Just in case you are interested: International Journal of Organisational Design and Engineering, Editor in Chief: Prof. Rodrigo Magalhaes, ISSN online: 1758-9800, ISSN print: 1758-9797.
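The W5 ("who-works-with-whom") construction from the abstract is easy to sketch. A minimal Python version, using made-up edit records rather than Wikipedia's actual revision metadata:

```python
from collections import defaultdict
from itertools import combinations

def w5_links(edits):
    """W5 links: two articles are linked when the same editor
    has contributed to both.  edits: (editor, article) pairs."""
    articles_by_editor = defaultdict(set)
    for editor, article in edits:
        articles_by_editor[editor].add(article)
    links = set()
    for articles in articles_by_editor.values():
        # every pair of articles sharing this editor gets a link
        links.update(combinations(sorted(articles), 2))
    return links

edits = [("alice", "Lady Gaga"), ("alice", "Barack Obama"),
         ("bob", "Barack Obama"), ("bob", "Justin Bieber")]
print(w5_links(edits))
# two links, both through shared editors:
# (Barack Obama, Lady Gaga) and (Barack Obama, Justin Bieber)
```

The real tool of course works over the full edit history and renders the result as a map; this only shows why W5 links are cheap to compute from the metadata alone.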

June 30, 2012

Station Maps: Browser-Based 3D Maps of the London Underground

Filed under: Mapping,Maps,Merging,Topic Maps — Patrick Durusau @ 6:47 pm

Station Maps: Browser-Based 3D Maps of the London Underground

From Information Asthetics:

Station Maps [aeracode.org] by programmer Andrew Godwin contains a large collection of browser-based (HTML5) 3D maps depicting different London Underground/DLR stations.

Most of the stations are modelled from memory in combination with a few diagrams found online. This means that the models are not totally accurate, but they should represent the right layout, shape and layering of the stations.

Every map has some underlying structure/ontology onto which other information is added.

Real-time merging of train, security camera, security forces, event, etc., information onto such maps is one aspect of merging based on location/interest. Not all information is equally useful to all parties.

June 22, 2012

Hacking and Trailblazing

Filed under: Mapping,Maps,Security — Patrick Durusau @ 4:22 pm

Ajay Ohri has written two “introduction” posts on hacking:

How to learn to be a hacker easily

How to learn Hacking Part 2

I thought “hacker/hacking” would be popular search terms.

“Hot” search terms this week: “Lebron James” 500,000+ searches (US), “Kate Upton” 50,000+ searches (US). (Shows what I know about “hot” search terms.)

What Ajay has created, as we all have at one time or another, is a collection of resources on a particular subject.

If you think of the infoverse as a navigable body of information, Ajay has blazed a trail to particular locations that have information on a specific subject. More importantly, we can all follow that trail, which saves us time and effort.

Like a research/survey article in a technical journal, Ajay’s trail blazing suffers from two critical and related shortcomings:

First, we as human readers are the only ones who can take advantage of the branches and pointers in his trail. For example, when Ajay says:

The website 4chan is considered a meeting place to meet other hackers. The site can be visually shocking http://boards.4chan.org/b/
(http://www.decisionstats.com/how-to-learn-to-be-a-hacker-easily/)

Written as prose narrative, there is no way for a machine to discover 4chan and other hacker "meeting" sites from it. Not difficult for us, but then each one of us has to read the entire article for that pointer. I suppose this must be what Lars Marius means by "unstructured." I stand corrected. ("Visually shocking?" Only if you are really sensitive. Soft porn, profanity, juvenile humor.)

Second, where Ajay says:

Lena’s Reverse Engineering Tutorial-”Use Google.com for finding the Tutorial” (http://www.decisionstats.com/how-to-learn-hacking-part-2/)

I can't add an extension, such as Reverse Engineering, a five-day course on reverse engineering.

Or, a warning that http://www.megaupload.com/?d=BDNJK4J8 displays:

seizure banner

Ajay’s trail stops where Ajay stopped.

I can write a separate document as a trail, but have no way to tie that trail to Ajay’s.

At least today, I would ask the design questions as:

  1. How do we blaze trails subject to machine-assisted navigation?
  2. How do we enable machine-assisted navigation across trails?

There are unspoken assumptions and questions in both of those formulations but it is the best I can do today.
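As a thought experiment, here is one minimal sketch (in Python, with entirely hypothetical structures) of a trail recorded as data rather than prose, so that trails from different authors can be joined on shared subjects:

```python
def merge_trails(*trails):
    """Index trail entries by subject, so one author's trail
    can extend and cross-link another's."""
    merged = {}
    for trail in trails:
        for entry in trail:
            merged.setdefault(entry["subject"], []).append(entry)
    return merged

# Ajay's trail, re-expressed as data (resource from his post):
ajay = [{"author": "Ajay", "subject": "hacker meeting places",
         "resource": "http://boards.4chan.org/b/",
         "note": "visually shocking"}]
# An extension by a second author, tied to the same subject:
mine = [{"author": "PD", "subject": "hacker meeting places",
         "resource": "IRC channels", "note": "another meeting place"},
        {"author": "PD", "subject": "reverse engineering",
         "resource": "five-day reverse engineering course",
         "note": "extends Lena's tutorial"}]

merged = merge_trails(ajay, mine)
print(len(merged["hacker meeting places"]))  # 2: both authors' entries
```

A toy, obviously: the hard part is agreeing on (or mapping between) the subjects, which is where topic maps come in.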

Suggestions/comments?


PS: Someone may be watching the link that leads to the Megaupload warning. Just so you know.

PPS: Topic maps need a jingoistic logo for promotion.

Like a barracuda, wearing only a black beret, a proxy drawing from the TMRM as a tattoo, a hint that its "target" is just in front of it.

Top: Topic Maps. Reading under the barracuda: “If you can map it, you can hit it….”

Research Data Australia down to Earth

Filed under: Geographic Data,Geographic Information Retrieval,Mapping,Maps — Patrick Durusau @ 2:47 pm

Research Data Australia down to Earth

From the post:

Context: free cloud servers, a workshop and an idea

In this post I look at some work we’ve been doing at the University of Western Sydney eResearch group on visualizing metadata about research data, in a geographical context. The goal is to build a data discovery service; one interface we’re exploring is the ability to ‘fly’ around Google Earth looking for data, from Research Data Australia (RDA). For example, a researcher could follow a major river and see what data collections there are along its course that might be of (re-)use. True, you can search the RDA site by dragging a marker on a map but this experiment is a more immersive approach to exploring the same data.

The post is a quick update on a work in progress, with some not very original reflections on the use of cloud servers. I am putting it here on my own blog first, will do a human-readable summary over at UWS soon, any suggestions or questions welcome.

You can try this out if you have Google Earth by downloading a KML file. This is a demo service only – let us know how you go.

This work was inspired by a workshop on cloud computing: this week Andrew (Alf) Leahy and I attended a NeCTAR and Australian National Data Service (ANDS) one day event, along with several UWS staff. The unstoppable David Flanders from ANDS asked us to run a ‘dojo’, giving technically proficient researchers and eResearch collaborators a hands-on experience with the NeCTAR research cloud, where all Australian University researchers with access to the Australian Access Federation login system are entitled to run free cloud-hosted virtual servers. Free servers! Not to mention post-workshop beer[i]. So senseis Alf and PT worked with a small group of ‘black belts’ in a workshop loosely focused on geo-spatial data. Our idea was “Visualizing the distribution of data collections in Research Data Australia using Google Earth”[ii]. We’d been working on a demo of how this might be done for a few days, which we more-or-less got running on the train from the Blue Mountains in to Sydney Uni in the morning.

When you read about "exploring" the data, bear in mind the question of how to record that "exploration." Explorers used to keep journals, ships' logs, etc. to record their explorations.

How do you record (if you do), your explorations of data? How do you share them if you do?

Given the ease of recording our explorations, no more longhand with a quill pen, is it odd that we don't record our intellectual explorations?

Or do we want others to see a result that makes us look more clever than we are?
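The KML file mentioned in the post is just XML, so generating the Google Earth layer reduces to emitting one Placemark per data collection. A minimal sketch (the field names here are my assumptions, not the actual Research Data Australia schema):

```python
from xml.sax.saxutils import escape

def record_to_placemark(record):
    """One KML Placemark per data collection: a named, clickable
    dot that Google Earth renders at the given lon/lat."""
    return ("<Placemark>"
            f"<name>{escape(record['title'])}</name>"
            f"<description>{escape(record['url'])}</description>"
            # KML coordinate order is longitude,latitude
            f"<Point><coordinates>{record['lon']},{record['lat']}</coordinates></Point>"
            "</Placemark>")

record = {"title": "Murray River water quality",
          "url": "https://example.org/collections/42",
          "lon": 139.3, "lat": -34.1}
print(record_to_placemark(record))
```

Wrap the placemarks in a `<kml xmlns="http://www.opengis.net/kml/2.2"><Document>...</Document></kml>` envelope and you have a file Google Earth will load, which is presumably the shape of the demo KML the post links to.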

June 15, 2012

DataArt with BBC Backstage

Filed under: Graphics,Maps,News,Visualization — Patrick Durusau @ 3:22 pm

DataArt with BBC Backstage

From the post:

Locus is a news archive visualisation that maps Guardian articles to places over time – a spatial & temporal mapping of events and media attention in the last decade. We’re using the Guardian Open Platform because it provides an API that can be queried by date, and an archive going back over 10 years.

Each place is represented as a geo-located dot that changes scale in proportion to that place's appearance in news articles over time. As the time slider selection changes the circles grow and shrink giving a picture of which locations are in the news at any given time. To see all the news articles mapped, you can extend the time slider to the full search period. You can click on the places to see the news headlines for that place and time period. The headlines link through to the online articles at the Guardian.

There are two versions of the project: Locus Afghanistan, and Locus Iraq.

Very cool!

Now just imagine that time were your scope for a location you selected on the map, so that choosing a location + time merged and returned a set of results.

That may or may not help to answer the question of who knew what when? But it is a place to start.
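The aggregation underneath a visualization like Locus is simple to sketch: bucket articles by place and month, and let the count drive each dot's scale as the slider moves. A hypothetical reduction in Python, not the project's actual code:

```python
from collections import Counter
from datetime import date

def place_time_counts(articles, bucket="%Y-%m"):
    """Count articles per (place, month).  Dots on the map would
    scale with exactly this count for the selected time window."""
    counts = Counter()
    for place, published in articles:
        counts[(place, published.strftime(bucket))] += 1
    return counts

articles = [("Kabul", date(2010, 1, 5)),
            ("Kabul", date(2010, 1, 20)),
            ("Baghdad", date(2010, 2, 1))]
counts = place_time_counts(articles)
print(counts[("Kabul", "2010-01")])  # 2
```

The "who knew what when" question above is then a query over these (place, time) keys, with merging deciding which results count as being about the same place.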

(I first saw this at: Is it Data or Art? Check out these Newsworthy Visualizations from the BBC)

May 25, 2012

Image compositing in TileMill

Filed under: Geographic Data,GIS,Maps — Patrick Durusau @ 10:40 am

Image compositing in TileMill by Kim Rees.

From the post:

TileMill is a tool that makes it easy to create interactive maps. Soon they will be adding some new features that will treat maps more like images in terms of modifying the look and feel. This will allow you to apply blending to polygons and GIS data.

BTW, a direct link for TileMill.

On brief glance, the TileMill site is very impressive.

Are you tying topic maps to GIS or other types of maps?

April 30, 2012

See California kills by Wildlife Services

Filed under: Mapping,Maps — Patrick Durusau @ 3:17 pm

See California kills by Wildlife Services

From the post:

Wildlife Services is a little-known federal agency of the Department of Agriculture charged with managing wildlife, particularly the intersection between humans — ranchers and farmers — and animals.

This map shows where Wildlife Services made the most kills of three commonly-killed animals — beavers, coyotes and bears. The charts below show the type of method used to kill those animals.

You can select beavers, coyotes, or bears, with other display options.

There appears to be no merging on other names for beavers, coyotes or bears, as well as the means of their, ah, control.

A good illustration that sometimes a minimal amount of merging is sufficient for the task at hand.

Mapping locations of control activities onto a map with changeable views is sufficient.

Readers aren’t expecting links into scientific/foreign literature where mapping of identifiers would be an issue.
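For contrast, here is what even minimal merging on other names would look like: fold alternative names onto a canonical animal before counting. (The scientific names are real; the records and counts are made up.)

```python
# Canonical name table -- common and scientific names map to one key.
CANONICAL = {
    "beaver": "beaver", "castor canadensis": "beaver",
    "coyote": "coyote", "canis latrans": "coyote",
    "bear": "bear", "black bear": "bear", "ursus americanus": "bear",
}

def merge_kill_counts(records):
    """Sum counts after mapping each reported name to canonical form."""
    totals = {}
    for name, count in records:
        key = CANONICAL.get(name.lower(), name.lower())
        totals[key] = totals.get(key, 0) + count
    return totals

records = [("Coyote", 3), ("Canis latrans", 2), ("Beaver", 4)]
print(merge_kill_counts(records))  # {'coyote': 5, 'beaver': 4}
```

For this map's audience the table is overkill, which is the point: the amount of merging should match the task.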

Good illustrations, including maps, have a purpose.

So should your topic map and its merging.

April 23, 2012

Fahrenheit 118

Filed under: Graphics,Maps,Visualization — Patrick Durusau @ 5:56 pm

Sounds like a scifi knock-off doesn’t it?

Junkcharts tells a different tale: The importance of explaining your chart: the case of the red 118

Great review of a temperature map for March, 2012.

Two takeaways:

1) Even expert map makers (NOAA/NCDC) screw up and/or have a very difficult time communicating clearly.

2) Get someone outside your group to review maps/charts without any explanation from you. If they have to ask questions or you feel explanation is necessary, revise the map/chart and try again.
