Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

June 22, 2015

LuxRender

Filed under: Graphics,Visualization — Patrick Durusau @ 4:04 pm

LuxRender – Physically Based Renderer.

From the webpage:

LuxRender is a physically based and unbiased rendering engine. Based on state of the art algorithms, LuxRender simulates the flow of light according to physical equations, thus producing realistic images of photographic quality.

LuxRender is now a member project of the Software Freedom Conservancy which provides administrative and financial support to FOSS projects. This allows us to receive donations, which can be tax deductible in the US.

Physically based spectral rendering

LuxRender is built on physically based equations that model the transportation of light. This allows it to accurately capture a wide range of phenomena which most other rendering programs are simply unable to reproduce. This also means that it fully supports high-dynamic range (HDR) rendering.

Materials

LuxRender features a variety of material types. Apart from generic materials such as matte and glossy, physically accurate representations of metal, glass, and car paint are present. Complex properties such as absorption, dispersive refraction and thin film coating are available.

Fleximage (virtual film)

The virtual film allows you to pause and continue a rendering at any time. The current state of the rendering can even be written to a file, so that the computer (or even another computer) can continue rendering at a later moment.

Free for everyone

LuxRender is and will always be free software, both for private and commercial use. It is being developed by people with a passion for programming and for computer graphics who like sharing their work. We encourage you to download LuxRender and use it to express your artistic ideas. (learn more)

Too advanced for my graphic skills but I thought some of you might find this useful in populating your topic maps with high-end visualizations.

I first saw this in a tweet by David Bucciarelli that announced the LuxRender v1.5RC1 release.

June 18, 2015

Otherworldly CAD Software…

Filed under: Graphics,Visualization — Patrick Durusau @ 7:39 pm

Otherworldly CAD Software Hails From A Parallel Universe by Joshua Vasquez.

From the post:

The world of free 3D-modeling software tends to be grim when compared to the expensive professional packages. Furthermore, 3D CAD modeling software suggestions seem to throw an uproar when new users seek open-source or inexpensive alternatives. Taking a step apart from the rest, [Matt] has developed his own open-source CAD package with a spin that inverts the typical way we do CAD.

Antimony is a fresh perspective on 3D modeling. In contrast to Blender’s “free-form sculpting” and Solidworks’ sequential extrudes and cuts, Antimony invites you to break down your model into a network of both primitive geometry and operations that interact with that geometry.

Functionally, Antimony represents objects as a graphical collection of nodes that encode both primitives and operations. Want a cylinder? Start with a circle node and pipe it into an extrude node. Need to cut out some part geometry? Try defining it with one or more primitives, and then perform a boolean intersection operation. Users can even write their own nodes with custom scripts written in Python. Overall, Antimony boasts the power of parametric design similar to OpenSCAD while it also boosts readability with a graphical, rather than text-based, part description. Finally, because part geometry is essentially stored as a series of instructions, the process of modeling the part does not limit the resolution of the output .STL mesh. (Think: vector-based images, versus pixel-based images).

Current versions of the software are available for both Mac and Linux, and the entire project is open-source and available on the Githubs. (For the shrewd-eyed software developers, most of the project is written with Python that interacts with lower-level routines handled in C++ and exposed through Boost.Python.) Take a video tour of an Antimony workflow with [Matt] after the break. All-in-all, despite that the software is still in its alpha stages, it’s highly functional and (for the block-diagram fans) intuitive. We’re thrilled to put our programming hats on and try CAD from, as [Matt] coins it “a parallel universe.”

For all you graph lovers, parts are linked as a graph.
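The node idea is easy to sketch in plain Python: primitives and operations become composable functions, and a part is just a graph of instructions whose resolution is chosen only when you export a mesh. A toy sketch (illustrative only; this is not Antimony's actual API):

```python
# Toy CSG node graph: primitives and operations as composable functions.
# Illustrative only; Antimony's real node types and API differ.

def circle(cx, cy, r):
    """Primitive node: point-membership test for a circle."""
    return lambda x, y: (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

def intersect(a, b):
    """Operation node: boolean intersection of two shapes."""
    return lambda x, y: a(x, y) and b(x, y)

def union(a, b):
    """Operation node: boolean union of two shapes."""
    return lambda x, y: a(x, y) or b(x, y)

# A lens: the intersection of two unit circles. Because the part is stored
# as instructions, you sample it at whatever resolution you like at export,
# just as a vector image rasterizes at any size.
lens = intersect(circle(0, 0, 1), circle(1, 0, 1))
print(lens(0.5, 0.0), lens(-0.9, 0.0))  # True False
```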

If you are looking for a project, try modeling historical caltrops. They remain as effective as they were in the time of Alexander the Great.

June 16, 2015

Vintage Infodesign [122] Naval Yards

Filed under: Graphics,Maps,Visualization — Patrick Durusau @ 4:42 pm

Vintage Infodesign [122] by Tiago Veloso.

From the post:

Published in October, 1940, the set of maps from Fortune magazine that open today’s Vintage Infodesign was part of a special about the industrial resources committed to the war effort by the United States. It used data compiled by the Bureau of the Census and Agricultural Commission, with the financial support by the Defense Commission. The maps within the four page report are signed by Philip Ragan Associates.

It’s just another great gem archived over at Fulltable, followed by the usual selection of ancient maps, graphics and charts from before 1960.

Hope you enjoy, and have a great week!

One original image (1940) and its modern counterpart to tempt you into visiting this edition of Vintage Infodesign.

shipyards-1940a

US shipyards and arsenals in 1940.

shipyards-now

Modern map of shipyards. I couldn’t find an image quickly that had arsenals as well.

Notice the contrast in the amount of information given by the 1940 map versus that of the latest map from the Navy.

With the 1940 map, along with a state map I could get within walking distance of any of the arsenals or shipyards listed.

With the modern map, I know that shipyards need to be near water but it is only narrowed down to the coastline of any of the states with shipyards.

That may not seem like a major advantage, knowing the location of a shipyard from a map, but collating that information with a stream of other bits and pieces could be an advantage.

Such as watching wedding announcements near Navy yards for sailors getting married. Which means the happy couple will be on their honeymoon and any vehicle at their home with credentials to enter a Navy yard will be available. Of course, that information has to be co-located for the opportunity to present itself. For that I recommend topic maps.

June 15, 2015

htmlwidgets for Rich Data Visualization in R

Filed under: Graphics,R,Visualization — Patrick Durusau @ 6:04 pm

htmlwidgets for Rich Data Visualization in R

From the webpage:

With the booming popularity of big data and data science, nice visualizations are getting a lot of attention. Sure, R and Python have built-in support for basic graphs and charts, but what if you want more? What if you want interaction, so you can mouse over or rotate a visualization? What if you want to explore more than a static image? Enter Rich Visualizations.

And, creating them is not as hard as you might think!

Four compelling examples of interactive graphics using htmlwidgets to bring interactivity to R code.

At first I thought this might be useful for an interactive map of cybersecurity incompetence inside the DC beltway but quickly realized that a map with only one uniform feature isn’t all that useful.

I am sure htmlwidgets will be useful for many other visualizations!

Enjoy!

June 13, 2015

Python Mode for Processing

Filed under: Processing,Python,Visualization — Patrick Durusau @ 3:20 pm

Python Mode for Processing

From the webpage:

You write Processing code. In Python.

Processing is a programming language, development environment, and online community. Since 2001, Processing has promoted software literacy within the visual arts and visual literacy within technology. Today, there are tens of thousands of students, artists, designers, researchers, and hobbyists who use Processing for learning, prototyping, and production.

Processing was initially released with a Java-based syntax, and with a lexicon of graphical primitives that took inspiration from OpenGL, Postscript, Design by Numbers, and other sources. With the gradual addition of alternative programming interfaces — including JavaScript, Python, and Ruby — it has become increasingly clear that Processing is not a single language, but rather, an arts-oriented approach to learning, teaching, and making things with code.

We are thrilled to make available this public release of the Python Mode for Processing, and its associated documentation. More is on the way! If you’d like to help us improve the implementation of Python Mode and its documentation, please find us on Github!

A screen shot of part of one image from Dextro.org will give you a glimpse of the power of Processing:

processing-example

BTW, this screen shot pales in comparison to the original image.

Enough said?

June 1, 2015

Portfolio of the Week – Josemi Benítez

Filed under: Graphics,Journalism,Visualization — Patrick Durusau @ 2:50 pm

Portfolio of the Week – Josemi Benítez by Tiago Veloso.

From the post:

It’s an indisputable fact that Spain has produced some of the most inspiring visual journalists of the last two decades, and we are quite happy to present you today the work of another one of those talented designers: Josemi Benítez, who has been responsible for the graphics section of the newspaper El Correo (Bilbao) for seven years.

Josemi began in the world of storytelling through images and texts working as a freelance artist for several advertising agencies, while getting degrees in Journalism and Advertising at the Universidad del País Vasco. In 1999 he began working as a Web designer in the Bilbao newspaper, helping to create elcorreo.com.

In 2002, Josemi returned to the paper version of the newspaper, coinciding with a key moment of the graphic evolution of the El Correo. Infographics gained space and a new style of graphics was buzzing. Since then, his work has been awarded a number of times by the Society for News Design and Malofiej News Design awards. In addition to his work at El Correo, he also taught infographic design at the University of Navarra and the Master in Multimedia El Correo-UPV / EHU.

Here are the works Josemi sent us:

I hesitate to call the examples infographics because they are more nearly works of communicative art. Select several for full size viewing and see if you agree.

Time spent with these images to incorporate these techniques in your own work would be time well spent.

May 26, 2015

JavaScript Graph Comparison

Filed under: Javascript,Visualization — Patrick Durusau @ 3:05 pm

JavaScript Graph Comparison

Bookmark this site if you need to select one of the many JavaScript graph libraries by features. Select by graph type, pricing scheme, options and dependencies.

The thought does occur to me that a daily chart of deaths by cause and expenditures by the U.S. government on that cause could make an interesting graphic. Particularly on the many days that there are no deaths from terrorism but the money keeps on pouring out.

April 24, 2015

Ordinary Least Squares Regression: Explained Visually

Filed under: Mathematics,Visualization — Patrick Durusau @ 2:55 pm

Ordinary Least Squares Regression: Explained Visually by Victor Powell and Lewis Lehe.

From the post:

Statistical regression is basically a way to predict unknown quantities from a batch of existing data. For example, suppose we start out knowing the height and hand size of a bunch of individuals in a “sample population,” and that we want to figure out a way to predict hand size from height for individuals not in the sample. By applying OLS, we’ll get an equation that takes hand size—the ‘independent’ variable—as an input, and gives height—the ‘dependent’ variable—as an output.

Below, OLS is done behind-the-scenes to produce the regression equation. The constants in the regression—called ‘betas’—are what OLS spits out. Here, beta_1 is an intercept; it tells what height would be even for a hand size of zero. And beta_2 is the coefficient on hand size; it tells how much taller we should expect someone to be for a given increment in their hand size. Drag the sample data to see the betas change.

[interactive graphic omitted]

At some point, you probably asked your parents, “Where do betas come from?” Let’s raise the curtain on how OLS finds its betas.

Error is the difference between prediction and reality: the vertical distance between a real data point and the regression line. OLS is concerned with the squares of the errors. It tries to find the line going through the sample data that minimizes the sum of the squared errors. Below, the squared errors are represented as squares, and your job is to choose betas (the slope and intercept of the regression line) so that the total area of all the squares (the sum of the squared errors) is as small as possible. That’s OLS!

The post includes a visual explanation of ordinary least squares regression up to 2 independent variables (3-D).
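The betas are easy to compute yourself with numpy's least-squares solver; the sample numbers below are made up for illustration:

```python
import numpy as np

# Made-up sample: hand sizes (cm) and heights (cm).
hand = np.array([17.0, 18.5, 19.0, 20.5, 21.0, 22.5])
height = np.array([160.0, 166.0, 169.0, 174.0, 179.0, 186.0])

# Design matrix with a column of ones, so beta_1 is the intercept
# and beta_2 is the coefficient on hand size.
X = np.column_stack([np.ones_like(hand), hand])

# OLS picks the betas that minimize the sum of squared vertical errors.
betas, *_ = np.linalg.lstsq(X, height, rcond=None)
beta_1, beta_2 = betas

sse = ((height - X @ betas) ** 2).sum()
print(beta_1, beta_2, sse)
```

Nudging the betas away from the least-squares solution can only increase the sum of squared errors, which is exactly the property the interactive graphic lets you feel by hand.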

Height wasn’t the correlation I heard with hand size, but Visually Explained is a family-friendly blog. And to be honest, I got my information from another teenager (at the time), so my information source is suspect.

April 16, 2015

Methods for visualizing dynamic networks (Parts 1 and 2)

Filed under: Dynamic Graphs,Networks,Visualization — Patrick Durusau @ 6:16 pm

Methods for visualizing dynamic networks Part 1

Methods for visualizing dynamic networks Part 2

From part 1:

The challenge of visualizing the evolution of connected data through time has kept academics and data scientists busy for years. Finding a way to convey the added complexity of a temporal element without overwhelming the end user with it is not easy.

Whilst building the KeyLines Time Bar – our component for visualizing dynamic networks – we spent a great deal of time appraising the existing temporal visualization options available.

In this blog post, we’ve collated some of the most popular ways of visualizing dynamic graphs through time. Next week, we’ll share some of the more creative and unusual options.

Not a comprehensive survey but eight (8) ways to visualize dynamic networks that you will find interesting.
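Whatever the visual treatment, most of these methods reduce to slicing a time-stamped edge list at a point in time. A minimal sketch, with a hypothetical edge layout of (source, target, start, end):

```python
def edges_at(edges, t):
    """Edges of a dynamic graph visible at time t, where each edge is
    (source, target, start, end) with an inclusive lifespan."""
    return [(s, d) for s, d, start, end in edges if start <= t <= end]

# Hypothetical dynamic graph: three edges with different lifespans.
timeline = [("a", "b", 0, 5), ("b", "c", 3, 9), ("a", "c", 7, 8)]
print(edges_at(timeline, 4))  # [('a', 'b'), ('b', 'c')]
```

A time bar control then just re-renders the graph returned for each value of t.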

Others that you would add to this list?

Eye Candy: Spiral Triangle

Filed under: D3,SVG,Visualization — Patrick Durusau @ 5:41 pm

spiral-triangle

Mike Bostock unpacked this impossible gif.

See Spiral Triangle for all its moving glory and code.

Understanding Data Visualisations

Filed under: Graphics,Visualization — Patrick Durusau @ 3:21 pm

Understanding Data Visualisations by Andy Kirk.

From the webpage:

Regular readers will be somewhat aware of my involvement in a research project called ‘Seeing Data’, a 15 month study funded by the UK Arts and Humanities Research Council and led by Professor Helen Kennedy from the University of Sheffield.

The aim of ‘Seeing Data’ was to further our understanding about how people make sense of data visualisations. Through learning about the ways in which people engage with data visualisations our aim was to provide some key resources for the general public, to help them develop the skills they need to interact with visualisations, and also for visualisation designers/producers, to help them understand what matters to the people who view and engage with their visualisations.

We are now concluding our findings and beginning our dissemination of a range of outputs to fulfil our aims.

This looks very promising! Each section leads to a fuller presentation with an opportunity to test yourself at the end of each section.

Will results on visualization in the UK hold true for subjects in other locations? If there are differences, what are they and how are those variances understood?

Looking forward to more details on the project!

I first saw this in a tweet by Amanda Hobbs.

April 6, 2015

Combining the power of R and D3.js

Filed under: D3,R,Visualization — Patrick Durusau @ 6:10 pm

Combining the power of R and D3.js by Andries Van Humbeeck.

From the post:

According to wikipedia, the amount of unstructured data might account for more than 70%-80% of all data in organisations. Because everyone wants to find hidden treasures in these mountains of information, new tools for processing, analyzing and visualizing data are being developed continually. This post focuses on data processing with R and visualization with the D3 JavaScript library.

Great post with fully worked examples of using R with D3.js to create interactive graphics.

Unfortunate that it uses the phrase “immutable images.” A more useful dichotomy is static versus interactive, which would also lower the number of false positives for anyone searching on “immutable.”

Enjoy!

I first saw this in a tweet by Christophe Lalanne.

March 31, 2015

Does Your Visualization Frighten or Inform?

Filed under: Graphics,Visualization — Patrick Durusau @ 8:56 am

Telling your data’s story: How storytelling can enhance the effectiveness of your visualizations by Michael Freeman.

From the post:

Visualizing complex relationships in big data often requires involved graphical displays that can be intimidating to users. As the volume and complexity of data collection and storage scale exponentially, creating clear, communicative, and approachable visual representations of that data is an increasing challenge. As a data visualization specialist, I frightened one of my first sets of collaborators when I suggested using this display:

freeman

What I had failed to communicate was that we would use a story structure to introduce audiences to the complex layout (you can see how I did it here).

Michael tackles big data visualizations that are unclear, present too much information and too many variables.

Most of us can produce visualizations that frighten and confuse, but how many of us can construct visualizations that inform and persuade?

There isn’t a cookie cutter solution to the problem of effectively visualizing data but this post will gently move you in the direction of better visualizations.

Enjoy!

PS: Not that anyone has ever seen a topic map visualization that frightened rather than informed. 😉

March 29, 2015

Collected, Vetted, Forty Visualization Blogs

Filed under: Graphics,Visualization — Patrick Durusau @ 3:07 pm

Blog Radar at VisuaLoop.

You can run a search with your favorite search engine on “visualization blogs” + “data-visualization blogs” and get about 6,500 “hits,” including duplicates. This is the weed-it-yourself option.

Or you can choose Blog Radar and get forty (40) blogs, without duplicates, with three (3) posts from each one. This is the pre-weeded option.

Whether you want to expand your blog reading or are looking for a good starting point for a crawler on visualization, you will be hard pressed to find a better resource.

Enjoy!

New Spatial Aggregation Tutorial for GIS Tools for Hadoop

Filed under: Aggregation,GIS,Hadoop,Spatial Data,Visualization — Patrick Durusau @ 10:51 am

New Spatial Aggregation Tutorial for GIS Tools for Hadoop by Sarah Ambrose.

Sarah motivates you to learn about spatial aggregation, aka spatial binning, with two visualizations of New York taxi data:

No aggregation:

taxi-1

Aggregation:

taxi-2

Now that I have your attention, ;-), from the post:

This spatial analysis ability is available using the GIS Tools for Hadoop. There is a new tutorial posted that takes you through the steps of aggregating taxi data. You can find the tutorial here.
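The core of spatial binning is simple even without Hadoop: map each point to a grid cell and count. A toy sketch with made-up coordinates (real taxi data would use lon/lat pairs and hexagonal rather than square bins):

```python
from collections import Counter

def bin_points(points, cell_size):
    """Assign each (x, y) point to a square grid cell and count per cell."""
    counts = Counter()
    for x, y in points:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

# Made-up pickup coordinates.
pickups = [(0.2, 0.3), (0.4, 0.1), (1.7, 0.2), (1.9, 1.8)]
print(bin_points(pickups, 1.0))  # Counter({(0, 0): 2, (1, 0): 1, (1, 1): 1})
```

The per-cell counts are what the "aggregation" map above shades; the "no aggregation" map plots every raw point.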

Enjoy!

March 24, 2015

Sorting [Visualization]

Filed under: Sorting,Visualization — Patrick Durusau @ 10:23 am

Carlo Zapponi created http://sorting.at/, a sorting visualization resource that steps through different sorting algorithms. You can choose from four different initial states, six (6) different sizes (5, 10, 20, 50, 75, 100), and six (6) different colors.

The page defaults to Quick Sort and Heap Sort, but under add algorithms you will find:

I added Wikipedia links for the algorithms. For a larger list see:
Sorting algorithm.
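Under the hood, a visualization like this just animates the intermediate states a sort passes through. A minimal sketch that records a frame after every swap (insertion sort here, purely for illustration):

```python
def sort_with_steps(values):
    """Insertion sort that snapshots the list after every swap, producing
    the frame-by-frame states a sorting animation steps through."""
    a = list(values)
    frames = [list(a)]  # frame 0: the initial state
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            frames.append(list(a))  # one frame per swap
            j -= 1
    return frames

print(sort_with_steps([3, 1, 2]))  # [[3, 1, 2], [1, 3, 2], [1, 2, 3]]
```

The number of frames also doubles as a crude measure of how much work the algorithm did on that input.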

I first saw this in a tweet by Eric Christensen.

March 21, 2015

Memantic: A Medical Knowledge Discovery Engine

Filed under: Bioinformatics,Knowledge Discovery,Synonymy,Visualization — Patrick Durusau @ 2:37 pm

Memantic: A Medical Knowledge Discovery Engine by Alexei Yavlinsky.

Abstract:

We present a system that constructs and maintains an up-to-date co-occurrence network of medical concepts based on continuously mining the latest biomedical literature. Users can explore this network visually via a concise online interface to quickly discover important and novel relationships between medical entities. This enables users to rapidly gain contextual understanding of their medical topics of interest, and we believe this constitutes a significant user experience improvement over contemporary search engines operating in the biomedical literature domain.

Alexei takes advantage of prior work on medical literature to index and display searches of medical literature in an “economical” way that can enable researchers to discover new relationships in the literature without being overwhelmed by bibliographic detail.

You will need to check my summary against the article but here is how I would describe Memantic:

Memantic indexes medical literature and records the co-occurrences of terms in every text. Those terms are mapped into a standard medical ontology (which reduces screen clutter). When a search is performed, the results are displayed as nodes based on the medical ontology, including relationships established by the co-occurrences found during indexing. This enables users to find relationships without having to search through multiple articles or dedupe their search results manually.

As I understand it, Memantic is as much an effort at efficient visualization as it is an improvement in search technique.
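The co-occurrence network itself can be sketched in a few lines: count how often each pair of terms shares a document. The "abstracts" below are invented and assumed to be already reduced to ontology terms:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(documents):
    """Count how often each (alphabetized) pair of terms appears in the
    same document; the pairs are the network's weighted edges."""
    edges = Counter()
    for terms in documents:
        for pair in combinations(sorted(set(terms)), 2):
            edges[pair] += 1
    return edges

# Invented documents, each a set of ontology terms.
abstracts = [
    {"aspirin", "headache", "inflammation"},
    {"aspirin", "headache"},
    {"arthritis", "inflammation"},
]
print(cooccurrence_network(abstracts)[("aspirin", "headache")])  # 2
```

Rendering those weighted edges around a query term gives the kind of concise relationship view the paper describes.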

Very much worth a slow read over the weekend!

I first saw this in a tweet by Sami Ghazali.

PS: I tried viewing the videos listed in the paper but wasn’t able to get any sound. Maybe you will have better luck.

March 16, 2015

A Compendium of Clean Graphs in R

Filed under: R,Visualization — Patrick Durusau @ 9:17 am

A Compendium of Clean Graphs in R by Eric-Jan Wagenmakers and Quentin Gronau.

From the post:

Every data analyst knows that a good graph is worth a thousand words, and perhaps a hundred tables. But how should one create a good, clean graph? In R, this task is anything but easy. Many users find it almost impossible to resist the siren song of adding grid lines, including grey backgrounds, using elaborate color schemes, and applying default font sizes that make the text much too small in relation to the graphical elements. As a result, many R graphs are an aesthetic disaster; they are difficult to parse and unfit for publication.

In contrast, a good graph obeys the golden rule: “create graphs unto others as you want them to create graphs unto you”. This means that a good graph is a simple graph, in the Einsteinian sense that a graph should be made as simple as possible, but not simpler. A good graph communicates the main message effectively, without fuss and distraction. In addition, a good graph balances its graphical and textual elements – large symbols demand an increase in line width, and these together require an increase in font size.

In order to reduce the time needed to find relevant R code, we have constructed a compendium of clean graphs in R. This compendium, available at http://shinyapps.org/apps/RGraphCompendium/index.html, can also be used for teaching or as inspiration for improving one’s own graphs. In addition, the compendium provides a selective overview of the kind of graphs that researchers often use; the graphs cover a range of statistical scenarios and feature contributions of different data analysts. We do not wish to presume the graphs in the compendium are in any way perfect; some are better than others, and overall much remains to be improved. The compendium is undergoing continual refinement. Nevertheless, we hope the graphs are useful in their current state.

This rocks! A tribute to the authors, R and graphics!

A couple samples to whet your appetite:

r-graph-1

r-graph-2

BTW, the images in the compendium have Show R-Code buttons!

Enjoy!

March 15, 2015

Teaching and Learning Data Visualization: Ideas and Assignments

Filed under: Graphics,Statistics,Visualization — Patrick Durusau @ 7:32 pm

Teaching and Learning Data Visualization: Ideas and Assignments by Deborah Nolan, Jamis Perrett.

Abstract:

This article discusses how to make statistical graphics a more prominent element of the undergraduate statistics curricula. The focus is on several different types of assignments that exemplify how to incorporate graphics into a course in a pedagogically meaningful way. These assignments include having students deconstruct and reconstruct plots, copy masterful graphs, create one-minute visual revelations, convert tables into ‘pictures’, and develop interactive visualizations with, e.g., the virtual earth as a plotting canvas. In addition to describing the goals and details of each assignment, we also discuss the broader topic of graphics and key concepts that we think warrant inclusion in the statistics curricula. We advocate that more attention needs to be paid to this fundamental field of statistics at all levels, from introductory undergraduate through graduate level courses. With the rapid rise of tools to visualize data, e.g., Google trends, GapMinder, ManyEyes, and Tableau, and the increased use of graphics in the media, understanding the principles of good statistical graphics, and having the ability to create informative visualizations is an ever more important aspect of statistics education.

You will find a number of ideas in this paper to use in teaching and learning visualization.

I understand that visualizing a table can, with the proper techniques, display relationships that are otherwise difficult to notice.

On the other hand, due to our limited abilities to distinguish colors, graphs can conceal information that would otherwise be apparent from a table.

Not an objection to visualizing tables but a caution that details can get lost in visualization as well as being highlighted for the viewer.

March 14, 2015

The Data Engineering Ecosystem: An Interactive Map

Filed under: BigData,Data Pipelines,Visualization — Patrick Durusau @ 6:58 pm

The Data Engineering Ecosystem: An Interactive Map by David Drummond and John Joo.

From the post:

Companies, non-profit organizations, and governments are all starting to realize the huge value that data can provide to customers, decision makers, and concerned citizens. What is often neglected is the amount of engineering required to make that data accessible. Simply using SQL is no longer an option for large, unstructured, or real-time data. Building a system that makes data usable becomes a monumental challenge for data engineers.

There is no plug and play solution that solves every use case. A data pipeline meant for serving ads will look very different from a data pipeline meant for retail analytics. Since there are unlimited permutations of open-source technologies that can be cobbled together, it can be overwhelming when you first encounter them. What do all these tools do and how do they fit into the ecosystem?

Insight Data Engineering Fellows face these same questions when they begin working on their data pipelines. Fortunately, after several iterations of the Insight Data Engineering Program, we have developed this framework for visualizing a typical pipeline and the various data engineering tools. Along with the framework, we have included a set of tools for each category in the interactive map.

This looks quite handy if you are studying for a certification test and need to know the components and a brief bit about each one.

For engineering purposes, it would be even better if you could connect your pieces together and then map the data flows through the pipelines. That is, where did the data previously held in table X go during each step, and what operations were performed on it? Not to mention being able to track an individual datum through the process.

Is there a tool that I haven’t seen or overlooked that allows that type of insight into a data pipeline? With subject identities of course for the various subjects along the way.
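A toy version of that per-datum tracking is easy to sketch, though real pipelines are far messier; the stage names below are mine, not any particular tool's:

```python
def run_pipeline(value, stages):
    """Thread a value through named stages, recording the result of each
    operation: a toy version of per-datum provenance."""
    history = []
    for name, fn in stages:
        value = fn(value)
        history.append((name, value))
    return value, history

# Hypothetical two-stage pipeline: parse a string, then double it.
stages = [("parse", int), ("double", lambda x: x * 2)]
result, trace = run_pipeline("21", stages)
print(result, trace)  # 42 [('parse', 21), ('double', 42)]
```

The trace is the interesting part: it answers "what happened to this datum at each step," which is exactly the question the interactive map leaves open.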

Mapping Your Music Collection [Seeing What You Expect To See]

Filed under: Audio,Machine Learning,Music,Python,Visualization — Patrick Durusau @ 4:11 pm

Mapping Your Music Collection by Christian Peccei.

From the post:

In this article we’ll explore a neat way of visualizing your MP3 music collection. The end result will be a hexagonal map of all your songs, with similar sounding tracks located next to each other. The color of different regions corresponds to different genres of music (e.g. classical, hip hop, hard rock). As an example, here’s a map of three albums from my music collection: Paganini’s Violin Caprices, Eminem’s The Eminem Show, and Coldplay’s X&Y.

smallmap

To make things more interesting (and in some cases simpler), I imposed some constraints. First, the solution should not rely on any pre-existing ID3 tags (e.g. Artist, Genre) in the MP3 files—only the statistical properties of the sound should be used to calculate the similarity of songs. A lot of my MP3 files are poorly tagged anyways, and I wanted to keep the solution applicable to any music collection no matter how bad its metadata. Second, no other external information should be used to create the visualization—the only required inputs are the user’s set of MP3 files. It is possible to improve the quality of the solution by leveraging a large database of songs which have already been tagged with a specific genre, but for simplicity I wanted to keep this solution completely standalone. And lastly, although digital music comes in many formats (MP3, WMA, M4A, OGG, etc.) to keep things simple I just focused on MP3 files. The algorithm developed here should work fine for any other format as long as it can be extracted into a WAV file.

Creating the music map is an interesting exercise. It involves audio processing, machine learning, and visualization techniques.

It would take longer than a weekend to complete this project with a sizable music collection but it would be a great deal of fun!

Great way to become familiar with several Python libraries.
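The feature-extraction step can be caricatured in plain numpy: summarize each signal by chunked means and standard deviations, then compare feature vectors by distance. This is a crude stand-in for the spectral statistics the article actually computes:

```python
import numpy as np

def audio_features(samples, n_chunks=4):
    """Summarize a 1-D signal by per-chunk mean and standard deviation,
    a crude stand-in for real spectral features."""
    chunks = np.array_split(np.asarray(samples, dtype=float), n_chunks)
    return np.array([s for c in chunks for s in (c.mean(), c.std())])

def dissimilarity(a, b):
    """Euclidean distance between feature vectors; smaller means the two
    signals are statistically more alike."""
    return float(np.linalg.norm(audio_features(a) - audio_features(b)))

# Synthetic signals: the same waveform at two amplitudes.
quiet = np.sin(np.linspace(0, 20, 1000))
loud = 5 * quiet
print(dissimilarity(quiet, quiet), dissimilarity(quiet, loud) > 0)
```

Feed pairwise distances like these into a clustering or self-organizing-map step and you have the skeleton of the hexagonal music map.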

BTW, when I saw Coldplay, I thought of Coal Chamber by mistake. Not exactly the same subject. 😉

I first saw this in a tweet by Kirk Borne.

Google Chrome (Version 41.0.2272.89 (64-bit)) WARNING!

Filed under: Browsers,Visualization,WWW — Patrick Durusau @ 11:22 am

An update of Google Chrome on Ubuntu this morning took my normal bookmark manager list of small icons and text to:

google-bookmarks

What do the kids say these days?

That sucks!

Some of you may prefer the new display. Good for you.

As far as I can tell, Chrome does not offer an option to revert to the previous display.

I keep quite a few bookmarks for an active blog, so the graphic images are a waste of screen space and force me to scroll far more often than before. I often work with the bookmark manager open in a separate screen.

For people who like this style, great. My objection is to it being forced on users who may prefer the prior style of bookmarks.

Here’s your design tip for the day: Don’t help users without giving them the ability to decline the help. Especially with display features.

March 13, 2015

NYC 311 with Turf

Filed under: Graphics,Visualization — Patrick Durusau @ 9:35 am

NYC 311 with Turf by Morgan Herlocker.

From the post:

In this example, Turf is processing and visualizing the largest historical point dataset on Socrata and data.gov. The interface below visualizes every 311 call in NYC since 2010 from a dataset that weighs in at nearly 8 GB.

An excellent visualization from Mapbox to start a Friday morning!

The Turf homepage reports: “Advanced geospatial analysis for browsers and node.”

Wikipedia on 311 calls.

The animation is interactive and can lead to some interesting questions. For example, when I slow down the animation (icons on top), I can see a pattern that develops in Brooklyn, with a large number of calls from November to January of each year (roughly; there are other patterns as well). Zooming in, I located one hot spot at the intersection of Woodruff Ave. and Ocean Ave.

The “hotspot” appears to be an artifact of summarizing the 311 data, because the 311 records contain individual addresses for the majority of reports. I say “majority” because I didn’t download the data set to verify that statement; I just scanned the first 6,000 or so records.

Deeper drilling into the data could narrow the 311 hotspots to block or smaller locations.
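That drilling can be sketched as a simple grid-binning pass: snap each call’s coordinates to a cell and count calls per cell. The cell size, the sample coordinates (loosely placed near Woodruff Ave. & Ocean Ave.), and the counts below are all invented for illustration, not taken from the actual 311 data.

```python
from collections import Counter

def bin_calls(calls, cell=0.002):
    # Snap each (lat, lon) call location to a grid cell roughly a
    # block wide (~0.002 degrees) and count calls per cell.
    counts = Counter()
    for lat, lon in calls:
        key = (round(lat / cell) * cell, round(lon / cell) * cell)
        counts[key] += 1
    return counts

def hotspots(calls, cell=0.002, top=3):
    # The most call-dense cells, largest first.
    return bin_calls(calls, cell).most_common(top)

# Invented sample: three calls clustered near one intersection,
# plus a couple of scattered calls elsewhere in the city.
cluster = [(40.6561, -73.9601), (40.6562, -73.9602), (40.6563, -73.9603)]
scattered = [(40.70, -73.99), (40.61, -73.95)]
top_cells = hotspots(cluster + scattered, top=1)
```

Shrinking `cell` narrows the hotspots from neighborhood scale toward block scale, at the cost of more (and sparser) cells.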

As you have come to expect, Mapbox has a tutorial on using Turf analysis.

If this hasn’t captured your interest yet, perhaps the keywords composable and scaling will:

Unlike a traditional GIS database, Turf’s flexibility allows for composable algorithms that scale well past what fits into memory or even on a single machine.

Morgan discusses a similar project and the use of streamgraphs. Great way to start a Friday!

March 8, 2015

Data Viz News March 2 – 7, 2015 (Delivery Format Challenge)

Filed under: Graphics,Visualization — Patrick Durusau @ 6:19 pm

Tiago Veloso has posted hundreds of links to visualizations and resources that have never appeared on Data Viz News, with one post per day between March 2 – 7, 2015.

Which highlights a problem Tiago needs your assistance to solve. From the first post:

At long last, we return to our weekly round ups of the best links about data visualization. Well, it hasn’t been that long, but when you look at what has already taken place since our last post, well, it does seem like an eternity. So much has happened in the first two months of 2015!

This means, of course, that we have a lot of catching up to do! Yes, we could just bring you the most recent articles, interviews and resources. But we’ll try to mix in some of the amazing content already published during this past 60 days, so that we may continue to feature the very best content related to visualization, infographic design, visual journalism, cartography, and much more.

That said, we have been also thinking hardly about alternatives to these long, many times overwhelming, gigantic posts. When we created Data Viz News, we were sure that there was enough content to make an appealing, interesting weekly round up just with links about the fields closer to our interests. Now, almost two years later, the question is sort of if we have content for such a post… every day!

So, while today’s post – and the upcoming ones, all to be posted this week – are still in that very same format, we are intensively looking for alternatives, and your help would be very much appreciated: just let us know on Twitter (@visualoop) what you think would be the best way to deliver this much amount of articles. Looking forward for your ideas.

Our cup runs over with data visualization content.

Taking those six days as a data set, how would you organize the same material?

March 7, 2015

RawTherapee

Filed under: Image Processing,Topic Maps,Visualization — Patrick Durusau @ 4:41 pm

RawTherapee

From the RawPedia (Getting Started)

RawTherapee is a cross-platform raw image processing program, released under the GNU General Public License Version 3. It was originally written by Gábor Horváth of Budapest. Rather than being a raster graphics editor such as Photoshop or GIMP, it is specifically aimed at raw photo post-production. And it does it very well – at a minimum, RawTherapee is one of the most powerful raw processing programs available. Many of us would make bigger claims…

At intervals of roughly one to two months, there is a Play Raw competition with an image and voting (plus commentary along the way).

Very impressive!

Thoughts on topic map competitions?

I first saw this in a tweet by Neil Saunders.

March 6, 2015

Data Visualization as a Communication Tool

Filed under: Graphics,Library,Visualization — Patrick Durusau @ 3:53 pm

Data Visualization as a Communication Tool by Susan [Gardner] Archambault, Joanne Helouvry, Bonnie Strohl, and Ginger Williams.

Abstract:

This paper provides a framework for thinking about meaningful data visualization in ways that can be applied to routine statistics collected by libraries. An overview of common data display methods is provided, with an emphasis on tables, scatter plots, line charts, bar charts, histograms, pie charts, and infographics. Research on “best practices” in data visualization design is presented as well as a comparison of free online data visualization tools. Different data display methods are best suited for different quantitative relationships. There are rules to follow for optimal data visualization design. Ten free online data visualization tools are recommended by the authors.

Good review of basic visualization techniques with an emphasis on library data. You don’t have to be in Tufte‘s league in order to make effective data visualizations.

February 27, 2015

Have You Tried DRAKON Comrade? (Russian Space Program Specification Language)

Filed under: Flowchart,Graphics,Visualization — Patrick Durusau @ 5:04 pm

DRAKON

From the webpage:

DRAKON is a visual language for specifications from the Russian space program. DRAKON is used for capturing requirements and building software that controls spacecraft.

The rules of DRAKON are optimized to ensure easy understanding by human beings.

DRAKON is gaining popularity in other areas beyond software, such as medical textbooks. The purpose of DRAKON is to represent any knowledge that explains how to accomplish a goal.


DRAKON Editor is a free tool for authoring DRAKON flowcharts. It also supports sequence diagrams, entity-relationship and class diagrams.

With DRAKON Editor, you can quickly draw diagrams for:

  • software requirements and specifications;
  • documenting existing software systems;
  • business processes;
  • procedures and rules;
  • any other information that tells “how to do something”.

DRAKON Editor runs on Windows, Mac and Linux.

The user interface of DRAKON Editor is extremely simple and straightforward.

Software developers can build real programs with DRAKON Editor. Source code can be generated in several programming languages, including Java, Processing.org, D, C#, C/C++ (with Qt support), Python, Tcl, Javascript, Lua, Erlang, AutoHotkey and Verilog.

I note with amusement that DRAKON Editor has no “save” button. Rest easy! It saves all input automatically, removing the need for one. About time!

Download DRAKON editor.

I am in the middle of an upgrade so look for sample images next week.

February 26, 2015

Gregor Aisch – Information Visualization, Data Journalism and Interactive Graphics

Filed under: Journalism,News,Reporting,Visualization — Patrick Durusau @ 8:04 pm

Gregor has two sites that I wanted to bring to your attention on information visualization, data journalism and interactive graphics.

The first one, driven-by-data.net, collects graphics from New York Times stories created by Gregor and others. Impressive graphics. If you are looking for visualization ideas, it’s not a bad place to stop.

The second one, Vis4.net, is a blog that features Gregor’s work. It is more than a blog, though. Choose the navigation links at the top of the page:

Color – Posts on color.

Code – Posts focused on code.

Cartography – Posts on cartography.

Advice – Advice (not for the lovelorn).

Archive – Archive of his posts.

Rather than a long list of categories (ahem), Gregor has divided his material into easy-to-recognize, easy-to-use divisions.

Always nice when you see a professional at work!

Enjoy!

Data Visualization with JavaScript

Filed under: Javascript,Visualization — Patrick Durusau @ 7:49 pm

Data Visualization with JavaScript by Stephen A. Thomas.

From the webpage:

It’s getting hard to ignore the importance of data in our lives. Data is critical to the largest social organizations in human history. It can affect even the least consequential of our everyday decisions. And its collection has widespread geopolitical implications. Yet it also seems to be getting easier to ignore the data itself. One estimate suggests that 99.5% of the data our systems collect goes to waste. No one ever analyzes it effectively.

Data visualization is a tool that addresses this gap.

Effective visualizations clarify; they transform collections of abstract artifacts (otherwise known as numbers) into shapes and forms that viewers quickly grasp and understand. The best visualizations, in fact, impart this understanding intuitively. Viewers comprehend the data immediately—without thinking. Such presentations free the viewer to more fully consider the implications of the data: the stories it tells, the insights it reveals, or even the warnings it offers. That, of course, defines the best kind of communication.

If you’re developing web sites or web applications today, there’s a good chance you have data to communicate, and that data may be begging for a good visualization. But how do you know what kind of visualization is appropriate? And, even more importantly, how do you actually create one? Answers to those very questions are the core of this book. In the chapters that follow, we explore dozens of different visualizations, techniques, and tool kits. Each example discusses the appropriateness of the visualization (and suggests possible alternatives) and provides step-by-step instructions for including the visualization in your own web pages.

With a publication date of March 2015, it’s hard to get more current information on data visualization and JavaScript!

You can view the text online or buy a proper ebook/hard copy.

Enjoy!

February 25, 2015

Learning Data Visualization using Processing

Filed under: Processing,Visualization — Patrick Durusau @ 5:31 pm

Learning Data Visualization using Processing by C.P. O’Neill.

From the post:

Learning data visualization techniques using the Processing programming language has always been a skill that has been on my list of things to learn really well and I finally got around to get started. I’ve used other technologies and methods before for data visualization, most notably R and RStudio, so when I got the opportunity to learn how to take that skill to the next level I jumped at it. Here is a visualization of all the meteor strikes that have been collected around the world. The bigger the circles, the larger the impact. I’m not going to go into a hugh analysis since I’m sure it’s been done many times before, but I am excited to get cracking on other data sets in the near future.

GitHub: repo

Skillshare Class: Data Visualization: Designing Maps with Processing and Illustrator

A nice reminder about Processing.

I have seen the usual visualization of arms exporters (U.S. is #1, by the way) but wonder about a visualization of the deaths attributable to world leaders during their terms in office (20th/21st century). Some of the counts are iffy, and how do you allocate Russian deaths between Germany and the Allies (for not supporting Russia)? Still, it could be an interesting exercise.

I first saw this in a tweet by Stéphane Fréchette.

