Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

July 28, 2016

Entropy Explained, With Sheep

Filed under: Cryptography,Encryption,Information Theory,Shannon — Patrick Durusau @ 2:34 pm

Entropy Explained, With Sheep by Aatish Bhatia.

Entropy is relevant to information theory, encryption and Shannon's work, but I mention it here because of the cleverness of the explanation.

Aatish sets a very high bar for taking a difficult concept and creating a compelling explanation that does not involve hand-waving and/or leaps of faith on the part of the reader.
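
Since this post is filed under information theory and Shannon, here is a minimal Python sketch of the related quantity, the Shannon entropy of a discrete distribution. The coin examples are mine, not Aatish's:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution given as probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # a fair coin: 1.0 bit, maximal uncertainty
print(shannon_entropy([0.9, 0.1]))   # a heavily biased coin: ~0.47 bits
```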

Highly recommended as a model for explanation!

Enjoy!

April 28, 2016

Quantum Shannon Theory (Review Request)

Filed under: Information Theory,Quantum,Shannon — Patrick Durusau @ 2:51 pm

Quantum Shannon Theory by John Preskill.

Abstract:

This is the 10th and final chapter of my book on Quantum Information, based on the course I have been teaching at Caltech since 1997. An early version of this chapter (originally Chapter 5) has been available on the course website since 1998, but this version is substantially revised and expanded. The level of detail is uneven, as I’ve aimed to provide a gentle introduction, but I’ve also tried to avoid statements that are incorrect or obscure. Generally speaking, I chose to include topics that are both useful to know and relatively easy to explain; I had to leave out a lot of good stuff, but on the other hand the chapter is already quite long. This is a working draft of Chapter 10, which I will continue to update. See the URL on the title page for further updates and drafts of other chapters, and please send me an email if you notice errors. Eventually, the complete book will be published by Cambridge University Press.

Preskill tweeted a request for reviews of and comments on this 112-page “chapter” from Quantum Information (forthcoming, appropriately enough with no projected date).

Be forewarned that Preskill compresses classical information theory into 14 pages or so. 😉

You can find more chapters at: Quantum Computation.

Previous problem sets with solutions are also available.

Quantum computing is coming. Are you going to be the first quantum hacker?

Enjoy!

October 23, 2015

Information Cartography

Filed under: Cartography,Information Theory — Patrick Durusau @ 8:29 pm

Information Cartography by Carlos Guestrin and Eric Horvitz. (cacm.acm.org/magazines/2015/11/193323)

A brief discussion of a CACM paper that I think will capture your interest.

From the introduction:

We demonstrate that metro maps can help people understand information in many areas, including news stories, research areas, legal cases, even works of literature. Metro maps can help them cope with information overload, framing a direction for research on automated extraction of information, as well as on new representations for summarizing and presenting complex sets of interrelated concepts.

Spend some time this weekend with this article and its references.

More to follow next week!

October 15, 2015

Visual Information Theory

Filed under: Information Theory,Shannon,Visualization — Patrick Durusau @ 2:47 pm

Visual Information Theory by Christopher Olah.

From the post:

I love the feeling of having a new way to think about the world. I especially love when there’s some vague idea that gets formalized into a concrete concept. Information theory is a prime example of this.

Information theory gives us precise language for describing a lot of things. How uncertain am I? How much does knowing the answer to question A tell me about the answer to question B? How similar is one set of beliefs to another? I’ve had informal versions of these ideas since I was a young child, but information theory crystallizes them into precise, powerful ideas. These ideas have an enormous variety of applications, from the compression of data, to quantum physics, to machine learning, and vast fields in between.

Unfortunately, information theory can seem kind of intimidating. I don’t think there’s any reason it should be. In fact, many core ideas can be explained completely visually!

Great visualization of the central themes of information theory!
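
For readers who want to poke at the quantities Olah visualizes, here is a minimal sketch of entropy and mutual information in Python. The weather/umbrella joint distribution is an invented example in the spirit of the post, not Olah's numbers:

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a list of probabilities."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint distribution given as a 2-D table."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    pxy = [p for row in joint for p in row]
    return entropy(px) + entropy(py) - entropy(pxy)

# Rows: raining / clear.  Columns: umbrella / no umbrella.
joint = [[0.19, 0.01],
         [0.06, 0.74]]
print(mutual_information(joint))   # ~0.45 bits: knowing one tells you a lot about the other
```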

Plus an interesting aside at the end of the post:

Claude Shannon’s original paper on information theory, A Mathematical Theory of Communication, is remarkably accessible. (This seems to be a recurring pattern in early information theory papers. Was it the era? A lack of page limits? A culture emanating from Bell Labs?)

Cover & Thomas’ Elements of Information Theory seems to be the standard reference. I found it helpful.

I don’t find Shannon’s “accessibility” all that remarkable: he was trying to be understood. Once a field matures and develops an insider jargon, trying to be understood is no longer “professional.” Witness the lack of academic credit for textbooks and other explanatory material as opposed to jargon-laden articles that may or may not be read by anyone other than proofreaders.

October 10, 2015

Information theory and Coding

Filed under: Cryptography,Encryption,Information Theory — Patrick Durusau @ 5:57 am

Information theory and Coding by Mathematicalmonk.

From the introduction video:

Overview of central topics in Information theory and Coding.

Compression (source coding) theory: Source coding theorem, Kraft-McMillan inequality, Rate-distortion theorem

Error-correction (channel coding) theory: Channel coding theorem, Channel capacity, Typicality and the AEP

Compression algorithms: Huffman codes, Arithmetic coding, Lempel-Ziv

Error-correction algorithms: Hamming codes, Reed-Solomon codes, Turbo codes, Gallager (LDPC) codes

There is a great deal of cross-over between information theory and coding, cryptography, statistics, machine learning and other topics. A grounding in information theory and coding will enable you to spot and capitalize on those commonalities.
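
Of the compression algorithms listed above, Huffman coding is the easiest to try at home. A minimal sketch, assuming a small symbol-frequency table (the frequencies are my example, not taken from the lectures):

```python
import heapq

def huffman_code(freqs):
    """Return a {symbol: bitstring} prefix code for a {symbol: frequency} table."""
    # Each heap entry: [total weight, tie-breaker, [(symbol, code-so-far), ...]]
    heap = [[w, i, [(sym, "")]] for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # lightest subtree gets a leading 0
        hi = heapq.heappop(heap)   # next lightest gets a leading 1
        merged = [(s, "0" + c) for s, c in lo[2]] + [(s, "1" + c) for s, c in hi[2]]
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return dict(heap[0][2])

print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
# e.g. {'a': '0', 'c': '100', 'b': '101', 'f': '1100', 'e': '1101', 'd': '111'}
```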

November 6, 2014

Deeper Than Quantum Mechanics—David Deutsch’s New Theory of Reality

Filed under: Information Theory,Philosophy,Quantum — Patrick Durusau @ 8:07 pm

Deeper Than Quantum Mechanics—David Deutsch’s New Theory of Reality

From the post:


Their new idea is called constructor theory and it is both simpler and deeper than quantum mechanics, or indeed any other laws of physics. In fact, Deutsch claims that constructor theory forms a kind of bedrock of reality from which all the laws of physics emerge.

Constructor theory is a radically different way of thinking about the universe that Deutsch has been developing for some time. He points out that physicists currently ply their trade by explaining the world in terms of initial conditions and laws of motion. This leads to a distinction between what happens and what does not happen.

Constructor theory turns this approach on its head. Deutsch’s new fundamental principle is that all laws of physics are expressible entirely in terms of the physical transformations that are possible and those that are impossible.

In other words, the laws of physics do not tell you what is possible and impossible, they are the result of what is possible and impossible. So reasoning about the physical transformations that are possible and impossible leads to the laws of physics.

That’s why constructor theory is deeper than anything that has gone before it. In fact, Deutsch does not think about it as a law of physics but as a principle, or set of principles, that the laws of physics must obey.

If that sounds like heavy sledding, see Constructor Theory of Information: arxiv.org/abs/1405.5563.

Abstract:

We present a theory of information expressed solely in terms of which transformations of physical systems are possible and which are impossible – i.e. in constructor-theoretic terms. Although it includes conjectured laws of physics that are directly about information, independently of the details of particular physical instantiations, it does not regard information as an a priori mathematical or logical concept, but as something whose nature and properties are determined by the laws of physics alone. It does not suffer from the circularity at the foundations of existing information theory (namely that information and distinguishability are each defined in terms of the other). It explains the relationship between classical and quantum information, and reveals the single, constructor-theoretic property underlying the most distinctive phenomena associated with the latter, including the lack of in-principle distinguishability of some states, the impossibility of cloning, the existence of pairs of variables that cannot simultaneously have sharp values, the fact that measurement processes can be both deterministic and unpredictable, the irreducible perturbation caused by measurement, and entanglement (locally inaccessible information).

The paper runs thirty (30) pages, so it should give you a good workout before the weekend. 😉

I first saw this in a tweet by Steven Pinker.

October 27, 2014

Extended Artificial Memory:…

Filed under: Information Retrieval,Information Theory,IT,Memory — Patrick Durusau @ 2:46 pm

Extended Artificial Memory: Toward an Integral Cognitive Theory of Memory and Technology by Lars Ludwig. (PDF) (Or you can contribute to the cause by purchasing a printed or Kindle copy of: Information Technology Rethought as Memory Extension: Toward an integral cognitive theory of memory and technology.)

Conventional bookselling wisdom is that a title should provoke people to pick up the book. First step towards a sale. Must be the thinking behind this title. Just screams “Read ME!”

😉

Seriously, I have read some of the PDF version and this is going on my holiday wish list as a hard copy request.

Abstract:

This thesis introduces extended artificial memory, an integral cognitive theory of memory and technology. It combines cross-scientific analysis and synthesis for the design of a general system of essential knowledge-technological processes on a sound theoretical basis. The elaboration of this theory was accompanied by a long-term experiment for understanding [Erkenntnisexperiment]. This experiment included the agile development of a software prototype (Artificial Memory) for personal knowledge management.

In the introductory chapter 1.1 (Scientific Challenges of Memory Research), the negative effects of terminological ambiguity and isolated theorizing to memory research are discussed.

Chapter 2 focuses on technology. The traditional idea of technology is questioned. Technology is reinterpreted as a cognitive actuation process structured in correspondence with a substitution process. The origin of technological capacities is found in the evolution of eusociality. In chapter 2.2, a cognitive-technological model is sketched. In this thesis, the focus is on content technology rather than functional technology. Chapter 2.3 deals with different types of media. Chapter 2.4 introduces the technological role of language-artifacts from different perspectives, combining numerous philosophical and historical considerations. The ideas of chapter 2.5 go beyond traditional linguistics and knowledge management, stressing individual constraints of language and limits of artificial intelligence. Chapter 2.6 develops an improved semantic network model, considering closely associated theories.

Chapter 3 gives a detailed description of the universal memory process enabling all cognitive technological processes. The memory theory of Richard Semon is revitalized, elaborated and revised, taking into account important newer results of memory research.

Chapter 4 combines the insights on the technology process and the memory process into a coherent theoretical framework. Chapter 4.3.5 describes four fundamental computer-assisted memory technologies for personally and socially extended artificial memory. They all tackle basic problems of the memory-process (4.3.3). In chapter 4.3.7, the findings are summarized and, in chapter 4.4, extended into a philosophical consideration of knowledge.

Chapter 5 provides insight into the relevant system landscape (5.1) and the software prototype (5.2). After an introduction into basic system functionality, three exemplary, closely interrelated technological innovations are introduced: virtual synsets, semantic tagging, and Linear Unit tagging.

The common memory capture (of two or more speakers) imagery is quite powerful. It highlights a critical aspect of topic maps.

Be forewarned that this is European-style scholarship, where the reader is assumed to be comfortable with philosophy, linguistics, etc., in addition to the narrower aspects of computer science.

To see these ideas in practice: http://www.artificialmemory.net/.

Slides on What is Artificial Memory.

I first saw this in a note from Jack Park, the source of many interesting and useful links, papers and projects.

June 14, 2014

Access To Information Is Power

Filed under: Information Theory,NSA — Patrick Durusau @ 4:52 pm

No Place to Hide Freed

From the post:

After reading No Place to Hide on day of release and whipping out a review, now these second thoughts:

We screen shot the Kindle edition, plugged the double-paged images into Word, and printed five PDFs of the Introduction, Chapter 1 through 5, and Epilogue. Then put the 7Z package online at Cryptome.

This was done to make more of Edward Snowden’s NSA material available to readers than will be done by the various books about it — NPTH among a half-dozen — hundreds of news and opinion articles, TV appearances and awards ceremonies by Snowden, Greenwald, Poitras, MacAskill, Gellman, Alexander, Clapper, national leaders and gaggles of journalist hobos of the Snowden Intercept of NSA runaway metadata traffic.

The copying and unlimited distribution of No Place to Hide is to compensate in a small way for the failure to release 95% of the Snowden material to the public.

After Snowden dumped the full material on Greenwald, Poitras and Gellman, about 97% of it has been withheld. This book provides a minuscule amount, 106 images, of the 1500 pages released so far out of between 59,000 and 1.7 million allegedly taken by Snowden.

Interesting that the post concludes:

Read No Place to Hide and wonder why it does not foster accelerated, full release of the Snowden material, to instead for secretkeepers of all stripes profit from limited releases and inadequate, under-informed public debate.

I would think the answer to the concluding question is self-evident.

The NSA kept the programs and documents about the programs secret in order to avoid public debate and the potential, however unlikely, of being held responsible for violation of various laws. There is no exception in the United States Constitution that reads: “the executive branch is freed from the restrictions of this constitution when, at its option, it decides that freedom is necessary.”

I have read the United States Constitution rather carefully and at least in my reading, there is no such language.

The answer for Glenn Greenwald is even easier. What should be the basis for a public debate over privacy, and over what government measures, if any, are appropriate in defending against a smallish band of malcontents, has instead been turned into a cash cow for Glenn Greenwald. Because Greenwald has copies of the documents stolen by Snowden, he can expect to sell news stories, to be courted and feted by news organizations, etc., for the rest of his life.

Neither the NSA nor Greenwald are interested in a full distribution of all the documents taken by Snowden. Nor are they interested in a fully informed public debate.

Their differences over the release of some documents are a question of whose interest is being served rather than a question of public interest.

Leaking documents to the press is a good way to make someone’s career. Not a good way to get secrets out for public debate.

Leak to the press if you have to but also post full copies to as many public repositories as possible.

Access to information is power. The NSA and Greenwald have it and they are not going to share it with you, at least voluntarily.

December 8, 2013

Advances in Neural Information Processing Systems 26

Advances in Neural Information Processing Systems 26

The NIPS 2013 conference ended today.

All of the NIPS 2013 papers were posted today.

I count three hundred and sixty (360) papers.

From the NIPS Foundation homepage:

The Foundation: The Neural Information Processing Systems (NIPS) Foundation is a non-profit corporation whose purpose is to foster the exchange of research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects. Neural information processing is a field which benefits from a combined view of biological, physical, mathematical, and computational sciences.

The primary focus of the NIPS Foundation is the presentation of a continuing series of professional meetings known as the Neural Information Processing Systems Conference, held over the years at various locations in the United States, Canada and Spain.

Enjoy the proceedings collection!

I first saw this in a tweet by Benoit Maison.

August 9, 2013

Complex Adaptive Dynamical Systems, a Primer

Filed under: Cellular Automata,Complexity,Game Theory,Information Theory,Self-Organizing — Patrick Durusau @ 3:47 pm

Complex Adaptive Dynamical Systems, a Primer by Claudius Gros. (PDF)

The high level table of contents should capture your interest:

  1. Graph Theory and Small-World Networks
  2. Chaos, Bifurcations and Diffusion
  3. Complexity and Information Theory
  4. Random Boolean Networks
  5. Cellular Automata and Self-Organized Criticality
  6. Darwinian Evolution, Hypercycles and Game Theory
  7. Synchronization Phenomena
  8. Elements of Cognitive Systems Theory

If not, you can always try the video lectures by the author.
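
If you want a quick, hands-on taste of the material in chapter 5 before committing to the full text, here is a minimal sketch of an elementary cellular automaton (Rule 110). It is a toy example of mine, not code from the book:

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton with periodic boundaries."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 30 + [1] + [0] * 30            # a single live cell in the middle
for _ in range(10):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```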

While big data is a crude approximation of some part of the world as we experience it, it is less coarse than prior representations.

Curious how much less coarse representations will need to become in order to exhibit the complex behavior of what they represent?

I first saw this at Complex Adaptive Dynamical Systems, a Primer (Claudius Gros) by Charles Iliya Krempeaux.

July 29, 2013

Majority voting and information theory: What am I missing?

Filed under: Information Theory,Merging — Patrick Durusau @ 3:40 pm

Majority voting and information theory: What am I missing? by Panos Ipeirotis.

From the post:

In crowdsourcing, redundancy is a common approach to ensure quality. One of the questions that arises in this setting is the question of equivalence. Let’s assume that a worker has a known probability q of giving a correct answer, when presented with a choice of n possible answers. If I want to simulate one high-quality worker of quality q_high, how many workers of lower quality q_low < q_high do we need?

If you step away from match / no match type merging tests for topics, the question that Panos poses comes into play.

There has been prior work in the area where the question was the impact of quality (q) being less than or greater than 0.5: Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers by Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis.

Panos’ question is why he can’t achieve a theoretical quality of 1.0 if he uses two workers with q = 0.85.

I agree that using high-quality workers in series can improve overall results. However, as I responded to his blog post, probabilities are not additive.

They remain probabilities. Two 0.85 workers in series could, on occasion, transmit an answer perfectly, but that is only one possible outcome among many.
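
A quick way to see the point: assume binary answers, independent errors and a uniform prior (my assumptions for illustration, not necessarily Panos’ exact setup) and ask what two agreeing 0.85 workers actually buy you:

```python
def agreement_posterior(q1, q2):
    """P(answer correct | two independent binary-choice workers give the same answer), via Bayes."""
    both_right = q1 * q2
    both_wrong = (1 - q1) * (1 - q2)
    return both_right / (both_right + both_wrong)

print(agreement_posterior(0.85, 0.85))   # ~0.970: better than 0.85, nowhere near 1.0
```

Agreement pushes the posterior to roughly 0.97, not 1.0, and the two workers will disagree about a quarter of the time, in which case you have learned very little.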

What would your response be?

December 14, 2012

Structure and Dynamics of Information Pathways in Online Media

Filed under: Information Flow,Information Theory,Networks,News,Social Networks — Patrick Durusau @ 6:16 am

Structure and Dynamics of Information Pathways in Online Media by Manuel Gomez Rodriguez, Jure Leskovec, Bernhard Schölkopf.

Abstract:

Diffusion of information, spread of rumors and infectious diseases are all instances of stochastic processes that occur over the edges of an underlying network. Many times networks over which contagions spread are unobserved, and such networks are often dynamic and change over time. In this paper, we investigate the problem of inferring dynamic networks based on information diffusion data. We assume there is an unobserved dynamic network that changes over time, while we observe the results of a dynamic process spreading over the edges of the network. The task then is to infer the edges and the dynamics of the underlying network.

We develop an on-line algorithm that relies on stochastic convex optimization to efficiently solve the dynamic network inference problem. We apply our algorithm to information diffusion among 3.3 million mainstream media and blog sites and experiment with more than 179 million different pieces of information spreading over the network in a one year period. We study the evolution of information pathways in the online media space and find interesting insights. Information pathways for general recurrent topics are more stable across time than for on-going news events. Clusters of news media sites and blogs often emerge and vanish in matter of days for on-going news events. Major social movements and events involving civil population, such as the Libyan’s civil war or Syria’s uprise, lead to an increased amount of information pathways among blogs as well as in the overall increase in the network centrality of blogs and social media sites.

A close reading of this paper will have to wait for the holidays but it will be very near the top of the stack!

Transient subjects anyone?

November 23, 2012

Course on Information Theory, Pattern Recognition, and Neural Networks

Filed under: CS Lectures,Information Theory,Neural Networks,Pattern Recognition — Patrick Durusau @ 11:27 am

Course on Information Theory, Pattern Recognition, and Neural Networks by David MacKay.

From the description:

A series of sixteen lectures covering the core of the book “Information Theory, Inference, and Learning Algorithms (Cambridge University Press, 2003)” which can be bought at Amazon, and is available free online. A subset of these lectures used to constitute a Part III Physics course at the University of Cambridge. The high-resolution videos and all other course material can be downloaded from the Cambridge course website.

Excellent lectures on information theory, which, among other things, quantifies the probability that the message sent is the message received.

Makes me wonder if there is a similar probability theory for the semantics of a message sent being the semantics of the message as received?
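
On the classical side of that question, the standard toy model is the binary symmetric channel. A minimal simulation sketch, with a flip probability I picked for illustration (nothing from MacKay’s course):

```python
import random

def transmit(bits, flip_prob, rng):
    """Pass bits through a binary symmetric channel that flips each bit with probability flip_prob."""
    return [b ^ (rng.random() < flip_prob) for b in bits]

rng = random.Random(0)
sent = [rng.randint(0, 1) for _ in range(10_000)]
received = transmit(sent, flip_prob=0.1, rng=rng)
print(sum(s == r for s, r in zip(sent, received)) / len(sent))   # ~0.9
```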

July 27, 2012

Information Theory, Pattern Recognition, and Neural Networks

Filed under: Inference,Information Theory,Neural Networks,Pattern Recognition — Patrick Durusau @ 11:13 am

Information Theory, Pattern Recognition, and Neural Networks by David MacKay.

David MacKay’s lectures with slides on information theory, inference and neural networks. Spring/Summer of 2012.

Just in time for the weekend!

I saw this in Christophe Lalanne’s Bag of Tweets for July 2012.

July 9, 2012

Stability as Illusion

Filed under: Information Theory,Visualization,Wikipedia — Patrick Durusau @ 5:28 am

In A Visual Way to See What is Changing Within Wikipedia, Jennifer Shockley writes:

Wikipedia is a go to source for quick answers outside the classroom, but many don’t realize Wiki is an ever evolving information source. Geekosystem’s article “Wikistats Show You What Parts Of Wikipedia Are Changing” provides a visual way to see what is changing within Wikipedia.

Is there any doubt that all of our information sources are constantly evolving?

Whether by edits to the sources or in our reading of those sources?

I wonder, have there been recall/precision studies done chronologically?

That is to say, studies of user evaluation of precision/recall on a given data set that repeat the evaluation with users at five (5) year intervals?

To learn if user evaluations of precision/recall change over time for the same queries on the same body of material?

My suspicion, without attributing a cause, is yes.
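
For anyone who wants to run such a comparison, the two measures themselves are simple to compute. A minimal sketch, assuming each run yields a set of retrieved document ids and a set of ids the user judged relevant (the ids below are made up):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query, given sets of document ids."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# The same result list judged against two hypothetical sets of relevance judgments.
print(precision_recall({1, 2, 3, 4}, {2, 3, 5}))       # (0.5, 0.666...)
print(precision_recall({1, 2, 3, 4}, {2, 3, 4, 5}))    # (0.75, 0.75)
```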

Suggestions or pointers welcome!

June 22, 2012

From Classical to Quantum Shannon Theory

Filed under: Communication,Information Theory,Shannon — Patrick Durusau @ 2:38 pm

From Classical to Quantum Shannon Theory by Mark M. Wilde

Abstract:

The aim of this book is to develop “from the ground up” many of the major, exciting, pre- and post-millenium developments in the general area of study known as quantum Shannon theory. As such, we spend a significant amount of time on quantum mechanics for quantum information theory (Part II), we give a careful study of the important unit protocols of teleportation, super-dense coding, and entanglement distribution (Part III), and we develop many of the tools necessary for understanding information transmission or compression (Part IV). Parts V and VI are the culmination of this book, where all of the tools developed come into play for understanding many of the important results in quantum Shannon theory.

From Chapter 1:

You may be wondering, what is quantum Shannon theory and why do we name this area of study as such? In short, quantum Shannon theory is the study of the ultimate capability of noisy physical systems, governed by the laws of quantum mechanics, to preserve information and correlations. Quantum information theorists have chosen the name quantum Shannon theory to honor Claude Shannon, who single-handedly founded the field of classical information theory, with a groundbreaking 1948 paper [222]. In particular, the name refers to the asymptotic theory of quantum information, which is the main topic of study in this book. Information theorists since Shannon have dubbed him the “Einstein of the information age.” The name quantum Shannon theory is fit to capture this area of study because we use quantum versions of Shannon’s ideas to prove some of the main theorems in quantum Shannon theory.

This is of immediate importance if you are interested in current research in information theory. Of near-term importance if you are interested in practical design of algorithms for quantum information systems.

March 31, 2012

Automated science, deep data and the paradox of information – Data As Story

Filed under: BigData,Epistemology,Information Theory,Modeling,Statistics — Patrick Durusau @ 4:09 pm

Automated science, deep data and the paradox of information…

Bradley Voytek writes:

A lot of great pieces have been written about the relatively recent surge in interest in big data and data science, but in this piece I want to address the importance of deep data analysis: what we can learn from the statistical outliers by drilling down and asking, “What’s different here? What’s special about these outliers and what do they tell us about our models and assumptions?”

The reason that big data proponents are so excited about the burgeoning data revolution isn’t just because of the math. Don’t get me wrong, the math is fun, but we’re excited because we can begin to distill patterns that were previously invisible to us due to a lack of information.

That’s big data.

Of course, data are just a collection of facts; bits of information that are only given context — assigned meaning and importance — by human minds. It’s not until we do something with the data that any of it matters. You can have the best machine learning algorithms, the tightest statistics, and the smartest people working on them, but none of that means anything until someone makes a story out of the results.

And therein lies the rub.

Do all these data tell us a story about ourselves and the universe in which we live, or are we simply hallucinating patterns that we want to see?

I reformulate Bradley’s question into:

We use data to tell stories about ourselves and the universe in which we live.

Which means that his rules of statistical methods:

  1. The more advanced the statistical methods used, the fewer critics are available to be properly skeptical.
  2. The more advanced the statistical methods used, the more likely the data analyst will be to use math as a shield.
  3. Any sufficiently advanced statistics can trick people into believing the results reflect truth.

are sources of other stories “about ourselves and the universe in which we live.”

If you prefer Bradley’s original question:

Do all these data tell us a story about ourselves and the universe in which we live, or are we simply hallucinating patterns that we want to see?

I would answer: And the difference would be?

January 7, 2012

The feedback economy

Filed under: Information Flow,Information Theory — Patrick Durusau @ 3:55 pm

The feedback economy, by Alistair Croll: “Companies that employ data feedback loops are poised to dominate their industries.”

From the post:

Military strategist John Boyd spent a lot of time understanding how to win battles. Building on his experience as a fighter pilot, he broke down the process of observing and reacting into something called an Observe, Orient, Decide, and Act (OODA) loop. Combat, he realized, consisted of observing your circumstances, orienting yourself to your enemy’s way of thinking and your environment, deciding on a course of action, and then acting on it.

[graphic omitted, but it is interesting. Go to Croll’s post to see it.]

The most important part of this loop isn’t included in the OODA acronym, however. It’s the fact that it’s a loop. The results of earlier actions feed back into later, hopefully wiser, ones. Over time, the fighter “gets inside” their opponent’s loop, outsmarting and outmaneuvering them. The system learns.

Boyd’s genius was to realize that winning requires two things: being able to collect and analyze information better, and being able to act on that information faster, incorporating what’s learned into the next iteration. Today, what Boyd learned in a cockpit applies to nearly everything we do.

Information is important but so is the use of information in the form of feedback.

But all systems, even information systems, generate feedback.

The question is: Does your system (read topic map) hear feedback? Perhaps more importantly, does it adapt based upon feedback it hears?

March 10, 2011

Topic Maps: From Information to Discourse Architecture

Filed under: Information Theory,Interface Research/Design,Topic Maps — Patrick Durusau @ 10:27 am

Topic Maps: From Information to Discourse Architecture

Lars Johnsen writes in the Journal of Information Architecture that:

Topic Maps is a standards-based technology and model for organizing and integrating digital information in a range of applications and domains. Drawing on notions adapted from current discourse theory, this article focuses on the communicative, or explanatory, potential of topic maps. It is demonstrated that topic maps may be structured in ways that are “text-like” in character and, therefore, conducive to more expository or discursive forms of machine-readable information architecture. More specifically, it is exemplified how a certain measure of “texture”, i.e. textual cohesion and coherence, may be built into topic maps. Further, it is argued that the capability to represent and organize discourse structure may prove useful, if not essential, in systems and services associated with the emerging Socio-Semantic Web. As an example, it is illustrated how topic maps may be put to use within an area such as distributed semantic micro-blogging ….

I very much liked his “expository topic maps” metaphor, although I would extend it to say that topic maps can represent an intersection of “expository” spaces, each unique in its own right.

Highly recommended!

February 10, 2011

Building Interfaces for Data Engines – Post

Building Interfaces for Data Engines is a summary by Matthew Hurst of six data engines that provide access to data released by others.

If you are a data user, definitely worth a visit to learn about current data engines.

If you are a data developer, definitely worth a visit to glean where we might be going next.

If it is any consolation, the art of book design, that is, the layout of text and images on a page, remains more art than science.

Research on that topic, layout of print and images, has been underway for approximately 2,000 years, with no signs of slacking off now.

User interfaces face a similar path in my estimation.

February 8, 2011

Which Automatic Differentiation Tool for C/C++?

Which Automatic Differentiation Tool for C/C++?

OK, not immediately obvious why this is relevant to topic maps.

Nor are Bob Carpenter’s references:

I’ve been playing with all sorts of fun new toys at the new job at Columbia and learning lots of new algorithms. In particular, I’m coming to grips with Hamiltonian (or hybrid) Monte Carlo, which isn’t as complicated as the physics-based motivations may suggest (see the discussion in David MacKay’s book and then move to the more detailed explanation in Christopher Bishop’s book).

particularly useful.

I suspect the two book references are MacKay’s Information Theory, Inference, and Learning Algorithms and Bishop’s Pattern Recognition and Machine Learning, but I haven’t asked. In part to illustrate the problem of resolving any entity reference: both authors have written other books touching on the same subjects, so my guesses may or may not be correct.

Oh, relevance to topic maps. The technique of automatic differentiation is used in Hamiltonian Monte Carlo methods to generate gradients. Still not helpful? It isn’t to me either.
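
For readers who, like me, need a reminder of what automatic differentiation actually does, here is a minimal forward-mode sketch using dual numbers. It illustrates the technique only and is not any of the C/C++ tools under discussion:

```python
class Dual:
    """Forward-mode automatic differentiation with dual numbers: carry a value and its derivative."""

    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

y = f(Dual(4.0, 1.0))              # seed dx/dx = 1
print(y.value, y.deriv)            # 57.0 26.0
```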

Ah, what about Bayesian models in IR? That made the light go on!

I will be discussing ways to show more immediate relevance to topic maps, at least for some posts, in post #1000.

It isn’t as far away as you might think.

January 12, 2011

Information Theory, Inference, and Learning Algorithms

Filed under: Inference,Information Theory,Machine Learning — Patrick Durusau @ 11:46 am

Information Theory, Inference, and Learning Algorithms by David J.C. MacKay; the full text of the 2005 printing is available for downloading. Software is also available.

From a review that I read (http://dx.doi.org/10.1145/1189056.1189063), MacKay treats machine learning as the other side of the coin from information theory.

Take the time to visit MacKay’s homepage.

There you will find his book Sustainable Energy – Without the Hot Air. Highly entertaining.
