Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

February 13, 2013

Streaming Histograms for Clojure and Java

Filed under: Clojure,Graphics,Java,Visualization — Patrick Durusau @ 2:36 pm

Streaming Histograms for Clojure and Java

From the post:

We’re happy to announce that we’ve open-sourced our “fancy” streaming histograms. We’ve talked about them before, but now the project has been tidied up and is ready to share.

(Figure omitted: PDF & CDF for a 32-bin histogram approximating a multimodal distribution.)

The histograms are a handy way to compress streams of numeric data. When you want to summarize a stream using limited memory there are two general options. You can either store a sample of data in hopes that it is representative of the whole (such as a reservoir sample) or you can construct some summary statistics, updating as data arrives. The histogram library provides a tool for the latter approach.

The project is a Clojure/Java library. Since we use a lot of Clojure at BigML, the readme’s examples are all Clojure oriented. However, Java developers can still find documentation for the histogram’s public methods.

A tool for visualizing/exploring large amounts of numeric data.
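
If you want a feel for how a fixed-size histogram keeps up with a stream, here is a minimal Python sketch of the bin-merging idea (in the spirit of the Ben-Haim/Tom-Tov streaming histograms this library is, as far as I know, modeled on): every value becomes a bin of count 1, and when the bin budget is exceeded the two closest bins are merged into their weighted average. A sketch of the idea, not the BigML API.

```python
import bisect

class StreamingHistogram:
    """Fixed-budget histogram: insert values one at a time, merging the
    two closest bins whenever the bin budget is exceeded."""

    def __init__(self, max_bins=32):
        self.max_bins = max_bins
        self.bins = []  # sorted list of [center, count]

    def insert(self, value):
        bisect.insort(self.bins, [float(value), 1])
        if len(self.bins) > self.max_bins:
            # Merge the adjacent pair of bins with the smallest gap.
            i = min(range(len(self.bins) - 1),
                    key=lambda j: self.bins[j + 1][0] - self.bins[j][0])
            (c1, n1), (c2, n2) = self.bins[i], self.bins[i + 1]
            self.bins[i:i + 2] = [[(c1 * n1 + c2 * n2) / (n1 + n2), n1 + n2]]

    def total_count(self):
        return sum(count for _, count in self.bins)

h = StreamingHistogram(max_bins=32)
for x in range(10_000):
    h.insert((x * 37) % 1009)  # arbitrary demo stream
print(len(h.bins), h.total_count())  # never more than 32 bins, 10,000 points
```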

datacatalogs.org [San Francisco, for example]

Filed under: Data,Dataset,Government,Government Data — Patrick Durusau @ 2:25 pm

datacatalogs.org

From the homepage:

a comprehensive list of open data catalogs curated by experts from around the world.

Cited in Simon Rogers’ post: Competition: visualise open government data and win $2,000.

As of today, 288 registered data catalogs.

The reservation I have about “open” government data is that much of what is “open” is not terribly useful.

I am sure there is useful “open” government data but let me give you an example of non-useful “open” government data.

Consider San Francisco, CA and cases of police misconduct against its citizens.

A really interesting data visualization would be to plot those incidents against the neighborhoods of San Francisco, with the neighborhoods colored by economic status.

The maps of San Francisco are available at DataSF, specifically, Planning Neighborhoods.

What about the police data?

I found summaries like: OCC Caseload/Disposition Summary – 1993-2009

Which listed:

  • Opened
  • Closed
  • Pending
  • Sustained

Not exactly what is needed for neighborhood by neighborhood mapping.
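
For contrast, here is roughly what the mapping step would look like if incident-level, geocoded data existed. A hedged geopandas sketch: the file names and the “nhood” column are assumptions, since no such published files are cited above.

```python
import geopandas as gpd
import matplotlib.pyplot as plt

# Hypothetical inputs: the Planning Neighborhoods polygons and a geocoded,
# incident-level file. The second file is exactly what the published OCC
# summaries do NOT provide. File and column names ("nhood") are assumptions.
neighborhoods = gpd.read_file("planning_neighborhoods.shp")
incidents = gpd.read_file("occ_incidents_geocoded.geojson")

# Count incidents per neighborhood via a spatial join.
# (Older geopandas versions spell the `predicate` argument as `op`.)
joined = gpd.sjoin(incidents, neighborhoods, how="inner", predicate="within")
counts = joined.groupby("nhood").size().rename("incidents").reset_index()

choropleth = neighborhoods.merge(counts, on="nhood", how="left")
choropleth["incidents"] = choropleth["incidents"].fillna(0)

# Shade by incident count; swap in a median-income column to color the
# neighborhoods by economic status instead.
choropleth.plot(column="incidents", cmap="OrRd", legend=True)
plt.title("Incidents by neighborhood (hypothetical data)")
plt.show()
```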

Note: No police misconduct since 2009 according to these data sets. (I find that rather hard to credit.)

How would you vote on this data set from San Francisco?

Open, Opaque, Semi-Transparent?

Competition: visualise open government data and win $2,000

Filed under: Contest,Graphics,Open Data,Open Government,Visualization — Patrick Durusau @ 1:54 pm

Competition: visualise open government data and win $2,000 by Simon Rogers.

Closing date: 23:59 BST on 2 April 2013

What can you do with the thousands of open government datasets? With Google and Open Knowledge Foundation we are launching a competition to find the best dataviz out there. You might even win a prize.

(graphic omitted)

Governments around the world are releasing a tidal wave of open data – on everything from spending through to crime and health. Now you can compare national, regional and city-wide data from hundreds of locations around the world.

But how good is this data? We want to see what you can do with it. What apps and visualisations can you make with this data? We want to see how the data changes the way you see the world.

In conjunction with Google and the Open Knowledge Foundation (who will be helping us judge the results), see if you can win the $2,000 prize.

All we want you to do is to take an open dataset from any government open data website (there’s a list of them at the bottom of this article) and visualise it.

The competition is open to citizens of the UK, US, France, Germany, Spain, Netherlands, Sweden. The winner will take home $2,000 and the result will be published on the Guardian Datastore on our Show and Tell site.

Here are some of the key datasets we’ve found (list below) – and feel free to bring your own data to the party – we only ask that it is freely available and open as in OpenDefinition.org.

You are visualizing data anyway, so why not take a chance on free PR and $2,000?

LobbyPlag: compares text of EU regulation with texts of lobbyists’ proposals

Filed under: EU,Law,Plagiarism — Patrick Durusau @ 1:21 pm

LobbyPlag: compares text of EU regulation with texts of lobbyists’ proposals

From the post:

A service called LobbyPlag lets users view provisions of EU regulations and compare them to provisions of lobbyists’ proposals.

The example currently available on LobbyPlag concerns the General Data Protection Regulation (GDPR).

Click here to see how LobbyPlag compares the GDPR’s forum shopping provision to what the site claims are lobbyists’ proposals for that provision.

LobbyPlag is an interesting use of legal text comparison tools to promote transparency.

See the original post for more details and links.

Another step in the right direction.

TPC Benchmark H

Filed under: Benchmarks,TPC-H — Patrick Durusau @ 1:14 pm

TPC Benchmark H

From the webpage:

Summary

The TPC Benchmark™H (TPC-H) is a decision support benchmark. It consists of a suite of business oriented ad-hoc queries and concurrent data modifications. The queries and the data populating the database have been chosen to have broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions.

The performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@Size), and reflects multiple aspects of the capability of the system to process queries. These aspects include the selected database size against which the queries are executed, the query processing power when queries are submitted by a single stream, and the query throughput when queries are submitted by multiple concurrent users. The TPC-H Price/Performance metric is expressed as $/QphH@Size.
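
As I read the TPC-H specification, the composite is simply the geometric mean of the power and throughput results, with price/performance expressed as dollars per QphH. A quick sketch, with made-up numbers (not from any published result):

```python
from math import sqrt

def qphh_at_size(power_at_size, throughput_at_size):
    # Composite Query-per-Hour metric: geometric mean of the single-stream
    # power result and the multi-stream throughput result (as I read the spec).
    return sqrt(power_at_size * throughput_at_size)

def price_performance(total_system_price, qphh):
    # TPC-H Price/Performance: dollars per QphH@Size.
    return total_system_price / qphh

q = qphh_at_size(120_000, 90_000)          # illustrative values only
print(round(q), round(price_performance(500_000, q), 2))
```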

Just in case you want to incorporate the TPC-H benchmark into your NoSQL solution.

I don’t recall any literature on benchmarks for semantic integration solutions.

At least in the sense of either the speed of semantic integration based on some test of semantic equivalence or the range of semantic equivalents handled by a particular engine.

You?

Imperative and Declarative Hadoop: TPC-H in Pig and Hive

Filed under: Hadoop,Hive,MapReduce,Pig,TPC-H — Patrick Durusau @ 11:41 am

Imperative and Declarative Hadoop: TPC-H in Pig and Hive by Russell Jurney.

From the post:

According to the Transaction Processing Performance Council, TPC-H is:

The TPC Benchmark™H (TPC-H) is a decision support benchmark. It consists of a suite of business oriented ad-hoc queries and concurrent data modifications. The queries and the data populating the database have been chosen to have broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions.

TPC-H was implemented for Hive in HIVE-600 and for Pig in PIG-2397 by Hortonworks intern Jie Li. In going over this work, I was struck by how it outlined differences between Pig and SQL.

There seems to be a tendency for simple SQL to provide greater clarity than Pig. At some point, as the TPC-H queries become more demanding, complex SQL seems to have less clarity than the comparable Pig. Let’s take a look.
(emphasis in original)

A refresher in the lesson that which solution you need, in this case Hive or Pig, depends upon your requirements.

Use either one blindly and you risk poor performance or failing to meet other requirements.

Data deduplication tactics with HDFS and MapReduce [Contractor Plagiarism?]

Filed under: Deduplication,HDFS,MapReduce,Plagiarism — Patrick Durusau @ 11:29 am

Data deduplication tactics with HDFS and MapReduce

From the post:

As the amount of data continues to grow exponentially, there has been increased focus on stored data reduction methods. Data compression, single instance store and data deduplication are among the common techniques employed for stored data reduction.

Deduplication often refers to elimination of redundant subfiles (also known as chunks, blocks, or extents). Unlike compression, the data is not changed; deduplication instead eliminates the storage consumed by identical data. Data deduplication offers significant advantages in terms of reduced storage and network bandwidth and promises increased scalability.

From a simplistic use case perspective, we can see application in removing duplicates in Call Detail Record (CDR) for a Telecom carrier. Similarly, we may apply the technique to optimize on network traffic carrying the same data packets.

Covers five (5) tactics:

  1. Using HDFS and MapReduce only
  2. Using HDFS and HBase
  3. Using HDFS, MapReduce and a Storage Controller
  4. Using Streaming, HDFS and MapReduce
  5. Using MapReduce with Blocking techniques
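
The first tactic boils down to “hash each record, group by the hash, keep one per group.” A minimal sketch using mrjob as a stand-in for a real Hadoop job (record-level dedup, as in the CDR example; the input path and choice of hash are up to you):

```python
# Sketch of tactic 1 for record-level dedup (think CDR lines): map each
# record to its content hash, then keep one record per hash. Run locally
# with `python dedup.py input.txt` or add the usual `-r hadoop` switches
# to run against HDFS.
import hashlib
from mrjob.job import MRJob


class DedupRecords(MRJob):

    def mapper(self, _, line):
        # The record's content hash is the grouping key.
        yield hashlib.md5(line.encode("utf-8")).hexdigest(), line

    def reducer(self, record_hash, records):
        # Records sharing a hash are identical (collisions aside);
        # emit a single representative.
        yield record_hash, next(records)


if __name__ == "__main__":
    DedupRecords.run()
```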

In these times of “Great Sequestration,” how much are you spending on duplicated contractor documentation?

You do get electronic forms of documentation. Yes?

Not that difficult to document prior contractor self-plagiarism. Teasing out what you “mistakenly” paid for may be harder.

Question: Would you rather find out now and correct it, or have someone else find out?

PS: For the ambitious in government employment. You might want to consider how discovery of contractor self-plagiarism reflects on your initiative and dedication to “good” government.

MarkLogic Announces Free Developer License for Enterprise [With Odd Condition]

Filed under: MarkLogic,NoSQL,XML — Patrick Durusau @ 5:46 am

MarkLogic Announces Free Developer License for Enterprise

From the post:

MarkLogic Corporation today announced the availability of a free Developer License for MarkLogic Enterprise Edition.

The Developer License provides access to the features available in MarkLogic Enterprise Edition, including integrated search, government-grade security, clustering, replication, failover, alerting, geospatial indexing, conversion, and a suite of application development tools. MarkLogic also announced the Mongo2MarkLogic converter, a Java-based tool for importing data from MongoDB into MarkLogic providing developers immediate access to features needed to build out enterprise-ready big data solutions.

“By providing a free Developer License we enable developers to quickly deliver reliable, scalable and secure information and analytic applications that are production-ready,” said Gary Bloom, CEO and President of MarkLogic. “Many of our customers first experimented with other free NoSQL products, but turned to MarkLogic when they recognized the need for search, security, support for ACID transactions and other features necessary for enterprise environments. Our goal is to eliminate the cost barrier for developers and give them access to the best enterprise NoSQL platform from the start.”

The Developer License for MarkLogic Enterprise Edition includes tools for faster application development, business intelligence (BI) tool integration, analytic functions and visualization tools, and the ability to create user-defined functions for fast and flexible analysis of huge volumes of data.

You would think that story would merit at least one link to the free developer program.

For your convenience: Developer License for Enterprise Edition. BTW, MarkLogic homepage.

That wasn’t hard. Two links and you have direct access to the topic of the story and the company.

One odd licensing condition:

Q. Can I publish my work done with MarkLogic Server?

A. We encourage you to share your work publicly, but note that you can not disclose, without MarkLogic prior written consent, any performance or capacity statistics or the results of any benchmark test performed on MarkLogic Server.

That sounds just a tad defensive, doesn’t it?

I haven’t looked at MarkLogic for a couple of iterations but earlier versions had no need to fear statistics or benchmark tests.

Results vary depending on how testing is done but anyone authorized to recommend or sign acquisition orders should know that.

If they don’t, your organization has more serious problems than needing a MarkLogic server.

February 12, 2013

Saving the “Semantic” Web (part 3)

Filed under: OWL,RDF,Semantic Web — Patrick Durusau @ 6:22 pm

On Semantic Transparency

The first responder to this series of posts, j22, argues the logic of the Semantic Web has been found to be useful.

I said as much in my post and stand by that position.

The difficulty is that the “logic” of the Semantic Web excludes vast swathes of human expression and the people who would make those expressions.

If you need authority for that proposition, consider George Boole (An Investigation of the Laws of Thought, pp. 327-328):

But the very same class of considerations shows with equal force the error of those who regard the study of Mathematics, and of their applications, as a sufficient basis either of knowledge or of discipline. If the constitution of the material frame is mathematical, it is not merely so. If the mind, in its capacity of formal reasoning, obeys, whether consciously or unconsciously, mathematical laws, it claims through its other capacities of sentiment and action, through its perceptions of beauty and of moral fitness, through its deep springs of emotion and affection, to hold relation to a different order of things. There is, moreover, a breadth of intellectual vision, a power of sympathy with truth in all its forms and manifestations, which is not measured by the force and subtlety of the dialectic faculty. Even the revelation of the material universe in its boundless magnitude, and pervading order, and constancy of law, is not necessarily the most fully apprehended by him who has traced with minutest accuracy the steps of the great demonstration. And if we embrace in our survey the interests and duties of life, how little do any processes of mere ratiocination enable us to comprehend the weightier questions which they present! As truly, therefore, as the cultivation of the mathematical or deductive faculty is a part of intellectual discipline, so truly is it only a part. The prejudice which would either banish or make supreme any one department of knowledge or faculty of mind, betrays not only error of judgment, but a defect of that intellectual modesty which is inseparable from a pure devotion to truth. It assumes the office of criticising a constitution of things which no human appointment has established, or can annul. It sets aside the ancient and just conception of truth as one though manifold. Much of this error, as actually existent among us, seems due to the special and isolated character of scientific teaching—which character it, in its turn, tends to foster. The study of philosophy, notwithstanding a few marked instances of exception, has failed to keep pace with the advance of the several departments of knowledge, whose mutual relations it is its province to determine. It is impossible, however, not to contemplate the particular evil in question as part of a larger system, and connect it with the too prevalent view of knowledge as a merely secular thing, and with the undue predominance, already adverted to, of those motives, legitimate within their proper limits, which are founded upon a regard to its secular advantages. In the extreme case it is not difficult to see that the continued operation of such motives, uncontrolled by any higher principles of action, uncorrected by the personal influence of superior minds, must tend to lower the standard of thought in reference to the objects of knowledge, and to render void and ineffectual whatsoever elements of a noble faith may still survive.

Or Justice Holmes writing in 1881 (The Common Law, page 1)

The life of the law has not been logic: it has been experience. The felt necessities of the time, the prevalent moral and political theories, intuitions of public policy, avowed or unconscious, even the prejudices which judges share with their fellow-men, have had a good deal more to do than the syllogism in determining the rules by which men should be governed. The law embodies the story of a nation’s development through many centuries, and it cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics.

In terms of historical context, remember that Holmes is writing at a time when works like John Stuart Mill’s A System of Logic, Ratiocinative and Inductive: being a connected view of The Principles of Evidence and the Methods of Scientific Investigation, were in high fashion.

The Semantic Web isn’t the first time “logic” has been seized upon as useful (as no doubt it is) and exclusionary (the part I object to) of other approaches.

Rather than presuming the semantic monotone the Semantic Web needs for its logic, a false presumption for owl:sameAs and no doubt other subjects, why not empower users to use more complex identifiers for subjects than solitary URIs?

It would not take anything away from the current Semantic Web infrastructure; it would simply make its basis, URIs, less semantically opaque to users.
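
To make “more complex identifiers than solitary URIs” concrete, here is a toy sketch (not any particular engine’s API): a subject proxy carries a set of identifiers, and two proxies merge when their identifier sets overlap. The second URI below is illustrative.

```python
class Subject:
    """Toy subject proxy: identified by a set of identifiers, not a
    single opaque URI."""

    def __init__(self, *identifiers, **properties):
        self.identifiers = set(identifiers)
        self.properties = dict(properties)

    def same_subject_as(self, other):
        # Same subject if any identifier is shared.
        return bool(self.identifiers & other.identifiers)

    def merge(self, other):
        self.identifiers |= other.identifiers
        self.properties.update(other.properties)
        return self


a = Subject("http://dbpedia.org/resource/Paris",
            "http://example.org/gazetteer/paris-fr",   # illustrative URI
            label="Paris")
b = Subject("http://example.org/gazetteer/paris-fr",
            country="France")

if a.same_subject_as(b):
    a.merge(b)

print(sorted(a.identifiers))   # both URIs survive the merge
print(a.properties)            # {'label': 'Paris', 'country': 'France'}
```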

Isn’t semantic transparency a good thing?


Some principles of intelligent tutoring

Filed under: Education,Knowledge,Knowledge Representation — Patrick Durusau @ 6:20 pm

Some principles of intelligent tutoring by Stellan Ohlsson. (Instructional Science May 1986, Volume 14, Issue 3-4, pp 293-326)

Abstract:

Research on intelligent tutoring systems is discussed from the point of view of providing moment-by-moment adaptation of both content and form of instruction to the changing cognitive needs of the individual learner. The implications of this goal for cognitive diagnosis, subject matter analysis, teaching tactics, and teaching strategies are analyzed. The results of the analyses are stated in the form of principles about intelligent tutoring. A major conclusion is that a computer tutor, in order to provide adaptive instruction, must have a strategy which translates its tutorial goals into teaching actions, and that, as a consequence, research on teaching strategies is central to the construction of intelligent tutoring systems.

Be sure to notice the date: 1986, when you could write:

The computer offers the potential for adapting instruction to the student at a finer grain-level than the one which concerned earlier generations of educational researchers. First, instead of adapting to global traits such as learning style, the computer tutor can, in principle, be programmed to adapt to the student dynamically, during on-going instruction, at each moment in time providing the kind of instruction that will be most beneficial to the student at that time. Said differently, the computer tutor takes a longitudinal, rather than cross-sectional, perspective, focussing on the fluctuating cognitive needs of a single learner over time, rather than on stable inter-individual differences. Second, and even more important, instead of adapting to content-free characteristics of the learner such as learning rate, the computer can, in principle, be programmed to adapt both the content and the form of instruction to the student’s understanding of the subject matter. The computer can be programmed, or so we hope, to generate exactly that question, explanation, example, counter-example, practice problem, illustration, activity, or demonstration which will be most helpful to the learner. It is the task of providing dynamic adaptation of content and form which is the challenge and the promise of computerized instruction*

That was written decades before we were habituated to users adapting to the interface, not the other way around.

More on point, the quote from Ohlsson, the Principle of Non-Equifinality of Learning, was preceded by:

But there are no canonical representations of knowledge. Any knowledge domain can be seen from several different points of view, each view showing a different structure, a different set of parts, differently related. This claim, however broad and blunt – almost impolite – it may appear when laid out in print, is I believe, incontrovertible. In fact, the evidence for it is so plentiful that we do not notice it, like the fish in the sea who never thinks about water. For instance, empirical studies of expertise regularly show that human experts differ in their problem solutions (e.g., Prietula and Marchak, 1985); at the other end of the scale, studies of young children tend to show that they invent a variety of strategies even for simple tasks, (e.g., Young, 1976; Svenson and Hedenborg, 1980). As a second instance, consider rational analyses of thoroughly codified knowledge domains such as the arithmetic of rational numbers. The traditional mathematical treatment by Thurstone (1956) is hard to relate to the didactic analysis by Steiner (1969), which, in turn, does not seem to have much in common with the informal, but probing, analyses by Kieren (1976, 1980) – and yet, they are all experts trying to express the meaning of, for instance, “two-thirds”. In short, the process of acquiring a particular subject matter does not converge on a particular representation of that subject matter. This fact has such important implications for instruction that it should be stated as a principle.

The first two sentences capture the essence of topic maps as well as any I have ever seen:

But there are no canonical representations of knowledge. Any knowledge domain can be seen from several different points of view, each view showing a different structure, a different set of parts, differently related.
(emphasis added)

Single knowledge representations, such as in bank accounting systems, can be very useful. But when multiple banks with different accounting systems try to roll knowledge up to the Federal Reserve, different (not better) representations may be required.

Could even require representations that support robust mappings between different representations.

What do you think?

Principle of Non-Equifinality of Learning

Filed under: Education,Knowledge Representation,Topic Maps — Patrick Durusau @ 6:20 pm

In “Educational Concept Maps: a Knowledge Based Aid for Instructional Design.” by Giovanni Adorni, Mauro Coccoli, Giuliano Vivanet (DMS 2011: 234-237), you will find the following passage:

…one of the most relevant problems concerns the fact that there are no canonical representations of knowledge structures and that a knowledge domain can be structured in different ways, starting from various points of view. As Ohlsson [2] highlighted, this fact has such relevant implications for authoring systems, that it should be stated as the “Principle of Non-Equifinality of Learning”. According to this, “The state of knowing the subject matter does not correspond to a single well-defined cognitive state. The target knowledge can always be represented in different ways, from different perspectives; hence, the process of acquiring the subject matter have many different, equally valid, end states”. Therefore it is necessary to re-think learning models and environments in order to enable users to better build represent and share their knowledge. (emphasis in original)

Nominees for representing “target knowledge…in different ways, from different perspectives….?”

In the paper, the authors detail their use of topic maps, XTM topic maps in particular, and the Vizigator for visualization of their topic maps.

Sorry, I was so excited about the quote I forgot to post the article abstract:

This paper discusses a knowledge-based model for the design and development of units of learning and teaching aids. The idea behind this model originates from both the analysis of the open issues in instructional authoring systems, and the lack of a well-defined process able to merge pedagogical strategies with systems for the knowledge organization of the domain. In particular, it is presented the Educational Concept Map (ECM): a, pedagogically founded (derived from instructional design theories), abstract annotation system that was developed with the aim of guaranteeing the reusability of both teaching materials and knowledge structures. By means of ECMs, it is possible to design lessons and/or learning paths from an ontological structure characterized by the integration of hierarchical and associative relationships among the educational objectives. The paper also discusses how the ECMs can be implemented by means of the ISO/IEC 13250 Topic Maps standard. Based on the same model, it is also considered the possibility of visualizing, through a graphical model, and navigate, through an ontological browser, the knowledge structure and the relevant resources associated to them.

BTW, you can find the paper in the DMS 2011 Proceedings. Warning: complete proceedings, 359 pages, 26.3 MB PDF file. Might not want to try it on your cellphone.

And yes, this is the paper that I found this morning that triggered a number of posts as I ran it to ground. 😉 At least I will have sign-posts for some of these places next time.

In Cyberwar, Software Flaws Are A Hot Commodity

Filed under: Security,Software — Patrick Durusau @ 6:20 pm

In Cyberwar, Software Flaws Are A Hot Commodity by Tom Gjelten.

Morning Edition ran a story today on firms that are finding software flaws and then selling them to the highest bidder.

A market that has exploded in the last two years.

If there is a market for the latest and greatest flaws, doesn’t the same exist for flaws in older software that hasn’t been upgraded?

Flaws that are “out there” and known, but scattered over email lists, web pages, blog posts, conference proceedings.

But not collated, verified and packaged together.

Just curious.

How to Implement Lean BI

Filed under: Business Intelligence — Patrick Durusau @ 6:19 pm

How to Implement Lean BI by Steve Dine.

A followup to his Why Most BI Programs Under-Deliver Value.

General considerations:

Many people hear the word “Lean” and it conjures up images of featureless tools, limited budgets, reduced development and the elimination of jobs. Dispelling those myths out of the gate is crucial in order to garner support for implementing Lean BI from the organization and the BI team. If team members feel that by becoming lean they are working themselves out of a job then they will not support your efforts. If your customers feel that they will receive less service or be relegated to using suboptimal tools then they may not support your efforts as well.

So, what is Lean BI? Lean BI is about focusing on customer value and generating additional value by accomplishing more with existing resources by eliminating waste….

Some highlights:

  1. Focus on Customer Value

    Value is defined as meeting or exceeding the customer needs at a specific cost at a specific time and, as mentioned in my last article, can only be defined by the customer. Anything that consumes resources that does not deliver customer value is considered waste….

  2. See the Whole Picture

    Learn to see beyond each individual architectural decision, organizational issue or technical problem by considering how they relate in a wider context. When business users make decisions and solve problems, they often only consider the immediate symptom rather than the root cause issue….

  3. Iterate Quickly

    It is often the case that by the time a project is implemented, the requirements have changed and part of what is implemented is not required anymore or is no longer a priority. When features, reports and data elements are implemented that aren’t utilized, it is considered waste….

  4. Reduce Variation

    Variation in BI is caused by a lack of standardization in processes, design, procedures, development and practices. Variation is introduced when work is initiated and implemented both inside and outside of the BI group. It causes waste in a number of ways including the added time to reverse engineer what others have developed, recovering ETL jobs caused by maintenance overlap, the extra time searching for scripts and reports, and the duplication of development caused by two developers working on the same file….

  5. Pursue Perfection

    Perfection is a critical component of Lean BI even though the key to successfully pursuing it is the understanding that you will never get there. The key to pursuing perfection is to focus on continuous improvement in an increment fashion….

Read Steve’s post for more analysis and his suggestions on possible solutions to these issues.

From a topic map perspective:

  1. Focus on Customer Value: A topic map solution can focus on specifics that return ROI to the customer. If you don’t need or want particular forms of inferencing, they can be ignored.
  2. See the Whole Picture: A topic map can capture and preserve relationships between businesses processes. Particularly ones discovered in earlier projects. Enabling teams to make new mistakes, not simply repeat old ones.
  3. Iterate Quickly: With topic maps you aren’t bound to decisions made by projects such as SUMO or Cyc. Your changes and models are just that, yours. You don’t need anyone’s permission to make changes.
  4. Reduce Variation: Some variation can be reduced, but other variation, between departments or locations, may successfully resist change. Topic maps can document variation and provide mappings to get around resistance to eliminating variation.
  5. Pursue Perfection: Topic maps support incremental change by allowing you to choose how much change you can manage. Not to mention that systems can still appear to other users as though they are unchanged. Unseen change is the most acceptable form of change.

Highly recommend you read both of Steve’s posts.

Distributed Multimedia Systems (Archives)

Filed under: Conferences,Graphics,Multimedia,Music,Music Retrieval,Sound,Video,Visualization — Patrick Durusau @ 6:19 pm

Proceedings of the International Conference on Distributed Multimedia Systems

From the webpage:

DMS 2012 Proceedings (http://www.ksi.edu/seke/Proceedings/dms/DMS2012_Proceedings.pdf) August 9 to August 11, 2012 Eden Roc Renaissance Miami Beach, USA
DMS 2011 Proceedings August 18 to August 19, 2011 Convitto della Calza, Florence, Italy
DMS 2010 Proceedings October 14 to October 16, 2010 Hyatt Lodge at McDonald’s Campus, Oak Brook, Illinois, USA
DMS 2009 Proceedings September 10 to September 12, 2009 Hotel Sofitel, Redwood City, San Francisco Bay, USA
DMS 2008 Proceedings September 4 to September 6, 2008 Hyatt Harborside at Logan Int’l Airport, Boston, USA
DMS 2007 Proceedings September 6 to September 8, 2007 Hotel Sofitel, Redwood City, San Francisco Bay, USA

For coverage, see the Call for Papers, DMS 2013.

Another archive with topic map related papers!

DMS 2013

Filed under: Conferences,Graphics,Multimedia,Music,Music Retrieval,Sound,Video,Visualization — Patrick Durusau @ 6:18 pm

DMS 2013: The 19th International Conference on Distributed Multimedia Systems

Dates:

Paper submission due: April 29, 2013
Notification of acceptance: May 31, 2013
Camera-ready copy: June 15, 2013
Early conference registration due: June 15, 2013
Conference: August 8 – 10, 2013

From the call for papers:

With today’s proliferation of multimedia data (e.g., images, animations, video, and sound), comes the challenge of using such information to facilitate data analysis, modeling, presentation, interaction and programming, particularly for end-users who are domain experts, but not IT professionals. The main theme of the 19th International Conference on Distributed Multimedia Systems (DMS’2013) is multimedia inspired computing. The conference organizers seek contributions of high quality papers, panels or tutorials, addressing any novel aspect of computing (e.g., programming language or environment, data analysis, scientific visualization, etc.) that significantly benefits from the incorporation/integration of multimedia data (e.g., visual, audio, pen, voice, image, etc.), for presentation at the conference and publication in the proceedings. Both research and case study papers or demonstrations describing results in research area as well as industrial development cases and experiences are solicited. The use of prototypes and demonstration video for presentations is encouraged.

Topics

Topics of interest include, but are not limited to:

Distributed Multimedia Technology

  • media coding, acquisition and standards
  • QoS and Quality of Experience control
  • digital rights management and conditional access solutions
  • privacy and security issues
  • mobile devices and wireless networks
  • mobile intelligent applications
  • sensor networks, environment control and management

Distributed Multimedia Models and Systems

  • human-computer interaction
  • languages for distributed multimedia
  • multimedia software engineering issues
  • semantic computing and processing
  • media grid computing, cloud and virtualization
  • web services and multi-agent systems
  • multimedia databases and information systems
  • multimedia indexing and retrieval systems
  • multimedia and cross media authoring

Applications of Distributed Multimedia Systems

  • collaborative and social multimedia systems and solutions
  • humanities and cultural heritage applications, management and fruition
  • multimedia preservation
  • cultural heritage preservation, management and fruition
  • distance and lifelong learning
  • emergency and safety management
  • e-commerce and e-government applications
  • health care management and disability assistance
  • intelligent multimedia computing
  • internet multimedia computing
  • virtual, mixed and augmented reality
  • user profiling, reasoning and recommendations

The presence of information/data doesn’t mean topic maps return good ROI.

On the other hand, the presence of information/data does mean semantic impedance is present.

The question is: what need do you have to overcome semantic impedance, and at what cost?

Software Engineering and Knowledge Engineering (Archives)

Filed under: Conferences,Knowledge Engineering,Software Engineering — Patrick Durusau @ 6:17 pm

Proceedings of the International Conference on Software Engineering and Knowledge Engineering

From the webpage:

SEKE 2012 Proceedings July 1 – July 3, 2012 Hotel Sofitel, Redwood City, San Francisco Bay, USA
SEKE 2011 Proceedings July 7 – July 9, 2011 Eden Roc Renaissance Miami Beach, USA
SEKE 2010 Proceedings July 1 – July 3, 2010 Hotel Sofitel, Redwood City, San Francisco Bay, USA
SEKE 2009 Proceedings July 1 – July 3, 2009 Hyatt Harborside at Logan Int’l Airport, Boston, USA
SEKE 2008 Proceedings July 1 – July 3, 2008 Hotel Sofitel, Redwood City, San Francisco Bay, USA
SEKE 2007 Proceedings July 9 – July 11, 2007 Hyatt Harborside at Logan Int’l Airport, Boston, USA

Another treasure I discovered while hunting down topic map papers.

For coverage, see the call for papers, SEKE 2013.

SEKE 2013

Filed under: Conferences,Knowledge Engineering,Software Engineering — Patrick Durusau @ 6:16 pm

SEKE 2013: The 25th International Conference on Software Engineering and Knowledge Engineering

Dates:

Paper submission due: Midnight EST, March 1, 2013
Notification of acceptance: April 20, 2013
Early registration deadline: May 10, 2013
Camera-ready copy: May 10, 2013
Conference: June 27 – 29, 2013

From the call for papers:

The Twenty-Fifth International Conference on Software Engineering and Knowledge Engineering (SEKE 2013) will be held at Hyatt Harborside at Boston’s Logan International Airport, USA from June 27 to June 29, 2013.

The conference aims at bringing together experts in software engineering and knowledge engineering to discuss on relevant results in either software engineering or knowledge engineering or both. Special emphasis will be put on the transference of methods between both domains. Submission of papers and demos are both welcome.

TOPICS

Agent architectures, ontologies, languages and protocols
Multi-agent systems
Agent-based learning and knowledge discovery
Interface agents
Agent-based auctions and marketplaces
Artificial life and societies
Secure mobile and multi-agent systems
Mobile agents
Mobile Commerce Technology and Application Systems
Mobile Systems

Autonomic computing
Adaptive Systems
Integrity, Security, and Fault Tolerance
Reliability
Enterprise Software, Middleware, and Tools
Process and Workflow Management
E-Commerce Solutions and Applications
Industry System Experience and Report

Service-centric software engineering
Service oriented requirements engineering
Service oriented architectures
Middleware for service based systems
Service discovery and composition
Quality of services
Service level agreements (drafting, negotiation, monitoring and management)
Runtime service management
Semantic web

Requirements Engineering
Agent-based software engineering
Artificial Intelligence Approaches to Software Engineering
Component-Based Software Engineering
Automated Software Specification
Automated Software Design and Synthesis
Computer-Supported Cooperative Work
Embedded and Ubiquitous Software Engineering
Measurement and Empirical Software Engineering
Reverse Engineering
Programming Languages and Software Engineering
Patterns and Frameworks
Reflection and Metadata Approaches
Program Understanding

Knowledge Acquisition
Knowledge-Based and Expert Systems
Knowledge Representation and Retrieval
Knowledge Engineering Tools and Techniques
Time and Knowledge Management Tools
Knowledge Visualization
Data visualization
Uncertainty Knowledge Management
Ontologies and Methodologies
Learning Software Organization
Tutoring, Documentation Systems
Human-Computer Interaction
Multimedia Applications, Frameworks, and Systems
Multimedia and Hypermedia Software Engineering

Smart Spaces
Pervasive Computing
Swarm intelligence
Soft Computing

Software Architecture
Software Assurance
Software Domain Modeling and Meta-Modeling
Software dependability
Software economics
Software Engineering Decision Support
Software Engineering Tools and Environments
Software Maintenance and Evolution
Software Process Modeling
Software product lines
Software Quality
Software Reuse
Software Safety
Software Security
Software Engineering Case Study and Experience Reports

Web and text mining
Web-Based Tools, Applications and Environment
Web-Based Knowledge Management
Web-Based Tools, Systems, and Environments
Web and Data Mining

Given the range of topics, I am sure you can find one or two that interest you and involve issues where topic maps can make a significant contribution.

Looking forward to seeing your paper in the SEKE Proceedings for 2013.

Journal of e-Learning and Knowledge Society

Filed under: Education,Interface Research/Design,Training — Patrick Durusau @ 10:36 am

Journal of e-Learning and Knowledge Society

From the focus and scope statement for the journal:

SIe-L, the Italian e-Learning Association, is a non-profit organization that operates as a non-commercial entity to promote scientific research and the testing of best practices in e-Learning and Distance Education. SIe-L considers these subjects strategic for citizens and companies in their instruction and education.

I encountered this journal while chasing a paper about topic maps in education to ground.

I have only started to explore but definitely a resource for anyone interested in the exploding on-line education market.

February 11, 2013

Saving the “Semantic” Web (part 2) [NOTLogic]

Filed under: Linked Data,RDF,Semantic Web — Patrick Durusau @ 5:45 pm

Expressing Your Semantics: NOTLogic

Saving the “Semantic” Web (part 1) ended by concluding that authors of data/content should be asked about the semantics of their content.

I asked if there were compelling reasons to ask someone else and got no takers.

The acronym NOTLogic may not be familiar. It expands to: Not Only Their Logic.

Users should express their semantics in the “logic” of their domain.

After all, it is their semantics, knowledge and domain that are being captured.

Their “logic” may not square up with FOL (first order logic) but where’s the beef?

Unless one of the project requirements is to maintain consistency with FOL, why bother?

The goal in most BI projects is ROI on capturing semantics, not adhering to FOL for its own sake.

Some people want to teach calculators how to mimic “reasoning” by using that subset of reasoning known as “logic.”

However much I liked the Friden rotary calculator of my youth:

(photo of a Friden rotary calculator omitted)

teaching it to mimic “reasoning” isn’t going to happen on my dime.

What about yours?

There are cases where machine learning techniques are very productive and fully justified.

The question you need to ask yourself (after discovering if you should be using RDF at all, The Semantic Web Is Failing — But Why? (Part 2)) is whether “their” logic works for your use case.

I suspect you will find that you can express your semantics, including relationships, without resort to FOL.

Which may lead you to wonder: Why would anyone want you to use a technique they know, but you don’t?

I don’t know for sure but have some speculations on that score I will share with you tomorrow.

In the meantime, remember:

  1. As the author of content or data, you are the person to ask about its semantics.
  2. You should express your semantics in a way comfortable for you.

Flatten entire HBase column families… [Mixing Labels and Data]

Filed under: HBase,Pig,Python — Patrick Durusau @ 4:24 pm

Flatten entire HBase column families with Pig and Python UDFs by Chase Seibert.

From the post:

Most Pig tutorials you will find assume that you are working with data where you know all the column names ahead of time, and that the column names themselves are just labels, versus being composites of labels and data. For example, when working with HBase, it’s actually not uncommon for both of those assumptions to be false. Being a columnar database, it’s very common to be working with rows that have thousands of columns. Under that circumstance, it’s also common for the column names themselves to encode two dimensions, such as date and counter type.

How do you solve this mismatch? If you’re in the early stages of designing a schema, you could reconsider a more row based approach. If you have to work with an existing schema, however, you can with the help of Pig UDFs.

Now there’s an ugly problem.

You can split the label from the data as shown, but that doesn’t help when the label/data is still in situ.

Saying: “Don’t do that!” doesn’t help because it is already being done.

If anything, topic maps need to take subjects as they are found, not as we might wish for them to be.

Curious, would you write an identifier as a regex that parses such a mix of label and data, assigning each to further processing?

Suggestions?
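
For what it is worth, here is the kind of pattern I had in mind, in Python: treat the column qualifier itself as data and parse the counter type and date back out. The “<counter>_<YYYY-MM-DD>” layout is an assumption about the column names, not something from Chase’s post.

```python
import re

# Assumed qualifier layout: "<counter_type>_<YYYY-MM-DD>", the kind of
# composite HBase column name the post describes.
QUALIFIER = re.compile(r"^(?P<counter>[a-z_]+?)_(?P<date>\d{4}-\d{2}-\d{2})$")

def split_qualifier(name):
    """Return (counter_type, date) or None if the name does not match."""
    m = QUALIFIER.match(name)
    return (m.group("counter"), m.group("date")) if m else None

for q in ["page_views_2013-02-11", "clicks_2013-02-12", "oddball"]:
    print(q, "->", split_qualifier(q))
```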

I first saw this at Flatten Entire HBase Column Families With Pig and Python UDFs by Alex Popescu.

Label propagation in GraphChi

Filed under: Artificial Intelligence,Classifier,GraphChi,Graphs,Machine Learning — Patrick Durusau @ 4:12 pm

Label propagation in GraphChi by Danny Bickson.

From the post:

A few days ago I got a request from Jidong, from the Chinese Renren company, to implement label propagation in GraphChi. The algorithm is very simple and is described here: Zhu, Xiaojin, and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002.

The basic idea is that we start with a group of users that we have some information about the categories they are interested in. Following the weights in the social network, we propagate the label probabilities from the user seed node (the ones we have label information about) into the general social network population. After several iterations, the algorithm converges and the output is labels for the unknown nodes.

I assume there is more unlabeled data for topic maps than labeled data.

Depending upon your requirements, this could prove to be a useful technique for completing those unlabeled nodes.
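
For the curious, a compact numpy sketch of the Zhu and Ghahramani iteration as I understand it: row-normalize the edge weights, repeatedly propagate the label distribution, and re-clamp the seed nodes each pass. The toy graph and seeds below are made up.

```python
import numpy as np

def label_propagation(W, seeds, n_labels, iters=100):
    """W: symmetric weight matrix (n x n); seeds: {node_index: label_index}.
    Returns an (n x n_labels) matrix of label probabilities."""
    n = W.shape[0]
    T = W / W.sum(axis=1, keepdims=True)     # row-normalized transition matrix
    F = np.full((n, n_labels), 1.0 / n_labels)
    clamp = np.zeros_like(F)
    for node, label in seeds.items():
        clamp[node, label] = 1.0
    seed_idx = list(seeds)
    for _ in range(iters):
        F = T @ F                            # propagate along weighted edges
        F[seed_idx] = clamp[seed_idx]        # re-clamp the labeled seed nodes
    return F

# Toy 5-node chain: node 0 seeded with label 0, node 4 with label 1.
W = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(np.round(label_propagation(W, {0: 0, 4: 1}, n_labels=2), 2))
```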

New Book: ElasticSearch Server!

Filed under: ElasticSearch,Lucene,Solr — Patrick Durusau @ 3:51 pm

New Book: ElasticSearch Server!

In the blog post dedicated to Solr 4.0 Cookbook we give a small hint that cookbook was not the only project that occupies our free time. Today we can officially say that a few month of hard work is slowly coming to an end – we can announce a new book about one of the greatest piece of open-source software – ElasticSearch Server book!

ElasticSearch server book describes the most important and commonly used features of ElasticSearch (at least from our perspective). Example of topics discussed:

  • ElasticSearch installation and configuration
  • Static and dynamic index structure creation
  • Querying ElasticSearch with Query DSL explained
  • Using filters
  • Faceting
  • Routing
  • Indexing data that is not flat

BTW, some wag posted a comment saying a Solr blog should not talk about ElasticSearch.

I bet they don’t see the sunshine very often from that position either. 😉

Microsoft Reveals Rapid Big Data Adoption [No Pain, No Change]

Filed under: BigData,Microsoft — Patrick Durusau @ 3:05 pm

Microsoft Reveals Rapid Big Data Adoption

From the post:

More than 75 percent of midsize to large businesses are implementing big-data-related solutions within the next 12 months — with customer care, marketing and sales departments increasingly driving demand, according to new Microsoft Corp. research released today.

According to Microsoft’s “Global Enterprise Big Data Trends: 2013” study of more than 280 IT decision-makers, the following trends emerged:

  • Although the IT department (52 percent) is currently driving most of the demand for big data, customer care (41 percent), sales (26 percent), finance (23 percent) and marketing (23 percent) departments are increasingly driving demand.
  • Seventeen percent of customers surveyed are in the early stages of researching big data solutions, whereas 13 percent have fully deployed them; nearly 90 percent of customers surveyed have a dedicated budget for addressing big data.
  • Nearly half of customers (49 percent) reported that growth in the volume of data is the greatest challenge driving big data solution adoption, followed by having to integrate disparate business intelligence tools (41 percent) and having tools able to glean the insight (40 percent).

After hunting around the MS News Center I found: Customers Rapidly Adopting Big Data Solutions — Driven By Marketing, Sales and More — Reports New Microsoft Research.

Links from the MS News Center take me back to that infographic so that may be the “publication” they are talking about.

It’s alright, but the same thing could have been a one-page report, suitable for printing and sharing.

In any event, it is encouraging news because the greater the adoption of “big data,” the more semantic impedance is going to cause real pain.

No pain, no change.

😉

When it hurts bad enough, take two topic maps and call me in the morning.

O Knowledge Graph, Where Art Thou?

Filed under: Identity,Knowledge Graph — Patrick Durusau @ 2:49 pm

O Knoweldge Graph, Where Art Thou? by Matthew Hurst.

From the post:

The web search community, in recent months and years, has heard quite a bit about the ‘knowledge graph’. The basic concept is reasonably straightforward – instead of a graph of pages, we propose a graph of knowledge where the nodes are atoms of information of some form and the links are relationships between those statements. The knowledge graph concept has become established enough for it to be used as a point of comparison between Bing and Google.

….

Much of what we see out there in the form of knowledge returned for searches is really isolated pockets of related information (the date and place of birth of a person, for example). The really interesting things start happening when the graphs of information become unified across type, allowing – as suggested by this example – the user to traverse from a performer to a venue to all the performers at that venue, etc. Perhaps ‘knowledge engineer’ will become a popular resume-buzz word in the near future as ‘data scientist’ has become recently.

Read Matthew’s post for the details of the comparison.

+1! to going from graphs of pages to graphs of “atoms of information.”

I am less certain about “…graphs of information become unified across type….”

What I am missing is the reason to think that “type,” unlike any other subject, will have a uniform identification.

If we solve the problem of not requiring “type” to have a uniform identification, why not apply that to other subjects as well?

Without an express or implied requirement for uniform identification, all manner of “interesting things” will be happening in knowledge graphs.

(Note the plural, knowledge graphs, not knowledge graph.)

ResourceSync Framework Specification

Filed under: NISO,OAI,Synchronization — Patrick Durusau @ 2:21 pm

NISO and OAI Release Draft for Comments of ResourceSync Framework Specification

From the post:

NISO and the Open Archives Initiative (OAI) announce the release of a beta draft for comments of the ResourceSync Framework Specification for the web consisting of various capabilities that allow third-party systems to remain synchronized with a server’s evolving resources. The ResourceSync joint project, funded with support from the Alfred P. Sloan Foundation and the JISC, was initiated to develop a new open standard on the real-time synchronization of Web resources.

Increasingly, large-scale digital collections are available from multiple hosting locations, are cached at multiple servers, and leveraged by several services. This proliferation of replicated copies of works or data on the Internet has created an increasingly challenging problem of keeping the repositories’ holdings and the services that leverage them up-to-date and accurate. The ResourceSync draft specification introduces a range of easy to implement capabilities that a server may support in order to enable remote systems to remain more tightly in step with its evolving resources.

The draft specification is available on the OAI website at: www.openarchives.org/rs/0.5/resourcesync. Comments on the draft can be posted on the public discussion forum at: https://groups.google.com/forum/?fromgroups#!forum/resourcesync.

For more on the ResourceSync Framework, see the article in the January/February 2013 issue of D-Lib.

For those interested in synchronization of resources, say from or to topic maps.

AGROVOC 2013 edition released

Filed under: AGROVOC,Linked Data,SKOS,Vocabularies — Patrick Durusau @ 2:08 pm

AGROVOC 2013 edition released

From the post:

The AGROVOC Team is pleased to announce the release of the AGROVOC 2013 edition.

The updated version contains 32,188 concepts in up to 22 languages, resulting in a total of 626,211 terms (in 2012: 32,061 concepts, 625,096 terms).

Please explore AGROVOC by searching terms, or browsing hierarchies.

AGROVOC 2013 is available for download, and accessible via web services.

From the “about” page:

The AGROVOC thesaurus contains 32,188 concepts in up to 22 languages covering topics related to food, nutrition, agriculture, fisheries, forestry, environment and other related domains.

A global community of editors consisting of librarians, terminologists, information managers and software developers, maintain AGROVOC using VocBench, an open-source multilingual, web-based vocabulary editor and workflow management tool that allows simultaneous, distributed editing. AGROVOC is expressed in Simple Knowledge Organization System (SKOS) and published as Linked Data.

Need some seeds for your topic map in “…food, nutrition, agriculture, fisheries, forestry, environment and other related domains”?

unicodex — High-performance Unicode Library (C++)

Filed under: Software,Unicode — Patrick Durusau @ 11:42 am

unicodex — High-performance Unicode Library (C++) by Dustin Juliano.

From the post:

The following is a micro-optimized Unicode encoder/decoder for C++ that is capable of significant performance, sustaining 6 GiB/s for UTF-8 to UTF-16/32 on an AMD A8-3870 running in a single thread, and 8 GiB/s for UTF-16 to UTF-32. That would allow it to encode nearly the full English Wikipedia in approximately 6 seconds.

It maps between UTF-8, UTF-16, and UTF-32, and properly detects UTF-8 BOM and the UTF-16 BOMs. It has been unit tested with gigabytes of data and verified with binary analysis tools. Presently, only little-endian is supported, which should not pose any significant limitations on use. It is released under the BSD license, and can be used in both proprietary and free software projects.

The decoder is aware of malformed input and will raise an exception if the input sequence would cause a buffer overflow or is otherwise fatally incorrect. It does not, however, ensure that exact codepoints correspond to the specific Unicode planes; this is by design. The implementation has been designed to be robust against garbage input and specifically avoid encoding attacks.

One of those “practical” things that you may need for processing topic maps and/or other digital information. 😉
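
If you just need to sanity-check the same conversions in a script (nowhere near 6 GiB/s, of course), Python’s codecs module covers BOM detection and UTF-8/16/32 round-trips. A small sketch:

```python
import codecs

# Check UTF-32 BOMs before UTF-16: the UTF-32-LE BOM starts with the
# same bytes as the UTF-16-LE BOM.
BOMS = [
    (codecs.BOM_UTF32_LE, "utf-32-le"),
    (codecs.BOM_UTF32_BE, "utf-32-be"),
    (codecs.BOM_UTF8, "utf-8"),
    (codecs.BOM_UTF16_LE, "utf-16-le"),
    (codecs.BOM_UTF16_BE, "utf-16-be"),
]

def sniff_and_decode(data: bytes) -> str:
    """Decode bytes to text, honoring a UTF-8/16/32 BOM if present;
    otherwise assume UTF-8. Malformed input raises UnicodeDecodeError."""
    for bom, encoding in BOMS:
        if data.startswith(bom):
            return data[len(bom):].decode(encoding)
    return data.decode("utf-8")

text = "naïve café ☃"
for enc in ("utf-8-sig", "utf-16", "utf-32"):
    assert sniff_and_decode(text.encode(enc)) == text
print("round-trips OK")
```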

A Tale of Five Languages

Filed under: Biomedical,Medical Informatics,SNOMED — Patrick Durusau @ 10:58 am

Evaluating standard terminologies for encoding allergy information by Foster R Goss, Li Zhou, Joseph M Plasek, Carol Broverman, George Robinson, Blackford Middleton, Roberto A Rocha. (J Am Med Inform Assoc doi:10.1136/amiajnl-2012-000816)

Abstract:

Objective Allergy documentation and exchange are vital to ensuring patient safety. This study aims to analyze and compare various existing standard terminologies for representing allergy information.

Methods Five terminologies were identified, including the Systemized Nomenclature of Medical Clinical Terms (SNOMED CT), National Drug File–Reference Terminology (NDF-RT), Medical Dictionary for Regulatory Activities (MedDRA), Unique Ingredient Identifier (UNII), and RxNorm. A qualitative analysis was conducted to compare desirable characteristics of each terminology, including content coverage, concept orientation, formal definitions, multiple granularities, vocabulary structure, subset capability, and maintainability. A quantitative analysis was also performed to compare the content coverage of each terminology for (1) common food, drug, and environmental allergens and (2) descriptive concepts for common drug allergies, adverse reactions (AR), and no known allergies.

Results Our qualitative results show that SNOMED CT fulfilled the greatest number of desirable characteristics, followed by NDF-RT, RxNorm, UNII, and MedDRA. Our quantitative results demonstrate that RxNorm had the highest concept coverage for representing drug allergens, followed by UNII, SNOMED CT, NDF-RT, and MedDRA. For food and environmental allergens, UNII demonstrated the highest concept coverage, followed by SNOMED CT. For representing descriptive allergy concepts and adverse reactions, SNOMED CT and NDF-RT showed the highest coverage. Only SNOMED CT was capable of representing unique concepts for encoding no known allergies.

Conclusions The proper terminology for encoding a patient’s allergy is complex, as multiple elements need to be captured to form a fully structured clinical finding. Our results suggest that while gaps still exist, a combination of SNOMED CT and RxNorm can satisfy most criteria for encoding common allergies and provide sufficient content coverage.

Interesting article but some things that may not be apparent to the casual reader:

MedDRA:

The Medical Dictionary for Regulatory Activities (MedDRA) was developed by the International Conference on Harmonisation (ICH) and is owned by the International Federation of Pharmaceutical Manufacturers and Associations (IFPMA) acting as trustee for the ICH steering committee. The Maintenance and Support Services Organization (MSSO) serves as the repository, maintainer, and distributor of MedDRA as well as the source for the most up-to-date information regarding MedDRA and its application within the biopharmaceutical industry and regulators. (source: http://www.nlm.nih.gov/research/umls/sourcereleasedocs/current/MDR/index.html)

MedDRA has a metathesaurus with translations into: Czech, Dutch, French, German, Hungarian, Italian, Japanese, Portuguese, and Spanish.

Unique Ingredient Identifier (UNII)

The overall purpose of the joint FDA/USP Substance Registration System (SRS) is to support health information technology initiatives by generating unique ingredient identifiers (UNIIs) for substances in drugs, biologics, foods, and devices. The UNII is a non-proprietary, free, unique, unambiguous, non-semantic, alphanumeric identifier based on a substance’s molecular structure and/or descriptive information.

The UNII may be found in:

  • NLM’s Unified Medical Language System (UMLS)
  • National Cancer Institutes Enterprise Vocabulary Service
  • USP Dictionary of USAN and International Drug Names (future)
  • FDA Data Standards Council website
  • VA National Drug File Reference Terminology (NDF-RT)
  • FDA Inactive Ingredient Query Application

(source: http://www.fda.gov/ForIndustry/DataStandards/SubstanceRegistrationSystem-UniqueIngredientIdentifierUNII/)

National Drug File – Reference Terminology (NDF-RT)

The National Drug File – Reference Terminology (NDF-RT) is produced by the U.S. Department of Veterans Affairs, Veterans Health Administration (VHA).

NDF-RT combines the NDF hierarchical drug classification with a multi-category reference model. The categories are:

  1. Cellular or Molecular Interactions [MoA]
  2. Chemical Ingredients [Chemical/Ingredient]
  3. Clinical Kinetics [PK]
  4. Diseases, Manifestations or Physiologic States [Disease/Finding]
  5. Dose Forms [Dose Form]
  6. Pharmaceutical Preparations
  7. Physiological Effects [PE]
  8. Therapeutic Categories [TC]
  9. VA Drug Interactions [VA Drug Interaction]

(source: http://www.nlm.nih.gov/research/umls/sourcereleasedocs/current/NDFRT/)

MedDRA, UNII, and NDF-RT have been in use for years, MedDRA internationally in multiple languages. An uncounted number of medical records, histories and no doubt publications rely upon these vocabularies.

Assume the conclusion: SNOMED CT with RxNorm (links between drug vocabularies) provides the best coverage for “encoding common allergies.”

A critical question remains:

How to access medical records using other terminologies?

Recalling from the adventures of owl:sameAs (The Semantic Web Is Failing — But Why? (Part 5)) that any single string identifier is subject to multiple interpretations. Interpretations that can only be disambiguated by additional information.

You might present a search engine with string to string mappings but those are inherently less robust and harder to maintain than richer mappings.

The sort of richer mappings that are supported by topic maps.
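
A toy contrast, with placeholder codes rather than real SNOMED CT or MedDRA identifiers: a bare string-to-string mapping versus a mapping that keeps enough context to audit why two codes are being treated as the same subject.

```python
# Flat string-to-string mapping: easy to build, silent about why the
# equivalence holds or where it came from. All codes are placeholders.
flat = {"MEDDRA:10001234": "SNOMEDCT:12345678"}

# Richer mapping: the identifiers are grouped under one subject, with scope
# and provenance that let you audit (or reject) the asserted equivalence.
subject = {
    "label": "Allergy to penicillin (illustrative)",
    "identifiers": {"SNOMEDCT:12345678", "MEDDRA:10001234", "NDFRT:N0000000000"},
    "mappings": [{
        "from": "MEDDRA:10001234",
        "to": "SNOMEDCT:12345678",
        "basis": "lexical match reviewed by a terminologist",
        "source": "institutional crosswalk, 2013-01",
    }],
}

def translate(code, subjects):
    """Every identifier co-located with `code` under some subject."""
    for s in subjects:
        if code in s["identifiers"]:
            return s["identifiers"] - {code}
    return set()

print(translate("MEDDRA:10001234", [subject]))
```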

February 10, 2013

Bacon, Pie and Pregnancy

Filed under: Associations,Humor — Patrick Durusau @ 4:20 pm

Searching for “Biscuit Bliss,” a book of biscuit recipes, also produced the result:

People also search for:

  • The Glory of Southern Cooking, James Villas
  • The Bacon Cookbook, James Villas
  • Texas home cooking, Cheryl Jamison
  • The Joy of Pregnancy
  • Pie, Ken Haedrich

If I were writing associations for “Biscuit Bliss,” pie would not make the list.

Bacon I can see because it is a major food group along side biscuits.

I suppose the general cooking books are super-classes of biscuit making.

Some female friends have suggested eating is associated with pregnancy.

True, but when I search for “joy of pregnancy,” it doesn’t suggest cookbooks in general or biscuits in particular.

If there is an association, is it non-commutative?*

Suggested associations of biscuits with pregnancy? (mindful of the commutative/non-commutative question)


* I am not altogether certain what a non-commutative association would look like. Partial ignorance from a point of view?

One player in the association has knowledge of the relationship and the other player does not?

Some search engines already produce that result, whether by design or not I don’t know.
