Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

May 24, 2011

Music Linked Data Workshop (JISC, London, 12 May 2011)

Filed under: Linked Data,Music Retrieval — Patrick Durusau @ 10:24 am

Slides from the Music Linked Data Workshop (JISC, London, 12 May 2011)

Here you will find:

  • MusicNet: Aligning Musicology’s Metadata – David Bretherton, Daniel Alexander Smith, Joe Lambert and mc schraefel (Music, and Electronics and Computer Science, University of Southampton)
  • Towards Web-Scale Analysis of Musical Structure – J. Stephen Downie (Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign), David De Roure (Oxford e-Research Centre, University of Oxford) and Kevin Page (Oxford e-Research Centre, University of Oxford)
  • LinkedBrainz Live – Simon Dixon, Cedric Mesnage and Barry Norton (Centre for Digital Music, Queen Mary University of London)
  • BBC Music – Using the Web as our Content Management System – Nicholas Humfrey (BBC)
  • Early Music Online: Opening up the British Library’s 16th-Century Music Books – Sandra Tuppen (British Library)
  • Musonto – A Semantic Search Engine Dedicated to Music and Musicians – Jean-Philippe Fauconnier (Université Catholique de Louvain, Belgium) and Joseph Roumier (CETIC, Belgium)
  • Listening to Movies – Creating a User-Centred Catalogue of Music for Films – Charlie Inskip (freelance music consultant)

These look like good candidates for the further review inbox!

#Graph

Filed under: Graphs,Interface Research/Design — Patrick Durusau @ 10:24 am

#Graph

From the website:

#graph is an experimental HTML5 Twitter graph visualizer that displays the relations between hashtags on Twitter. Enter a hashtag, press OK, and the first node will load. You can now crawl around that node by double clicking on its connections. By hovering the mouse over a line, you can view the tweet that includes both hashtags.

Concept by: Julien Verfaillie & Giannina Amato
Development by: Julien Verfaillie
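
Under the hood, a visualization like this reduces to counting hashtag co-occurrence. A minimal sketch in Python, with a few toy tweets standing in for the Twitter API calls:

    import re
    from collections import Counter
    from itertools import combinations

    # Toy stand-in for tweets fetched from the Twitter API.
    tweets = [
        "Loving the #html5 canvas demos at #jsconf",
        "#html5 video still beats plugins #webdev",
        "Graph layouts in #html5 are fun #webdev #jsconf",
    ]

    # Count how often each pair of hashtags appears in the same tweet.
    edges = Counter()
    for text in tweets:
        tags = sorted(set(re.findall(r"#(\w+)", text.lower())))
        for pair in combinations(tags, 2):
            edges[pair] += 1

    # "Crawling around" a node is just looking up the edges that touch it.
    def connections(tag):
        return {pair: n for pair, n in edges.items() if tag in pair}

    print(connections("html5"))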

Suggestions of other demonstrations of HTML5 as interface?

May 23, 2011

Workshop on Mathematical Wikis (MathWikis-2011)

Filed under: Mathematics,Mathematics Indexing,Semantics — Patrick Durusau @ 7:46 pm

Workshop on Mathematical Wikis (MathWikis-2011)

Important Dates:

  • Submission of abstracts: May 30th, 2011, 8:00 UTC+1
  • Notification: June 23rd, 2011
  • Camera ready versions due: July 11th, 2011
  • Workshop: August 27th, 2011

From the website:

Mathematics is increasingly becoming a collaborative discipline. The Internet has simplified the distributed development, review, and improvement of large proofs, theories, libraries, and knowledge repositories, also giving rise to all kinds of collaboratively developed mathematical learning resources. Examples include the PlanetMath free encyclopedia, the Polymath collaborative proof development efforts, and also large collaboratively developed formal libraries. Interactive computer assistance, semantic representation, and linking with other datasets on the Semantic Web are becoming very interesting aspects of collaborative mathematical developments.

The ITP 2011 MathWikis workshop aims to bring together developers and major users of mathematical wikis and collaborative and social tools for mathematics.

Topics include but are not limited to:

  • wikis and blogs for informal, semantic, semiformal, and formal mathematical knowledge;
  • general techniques and tools for online collaborative mathematics;
  • tools for collaboratively producing, presenting, publishing, and interacting with online mathematics;
  • automation and computer-human interaction aspects of mathematical wikis;
  • practical experiences, usability aspects, feasibility studies;
  • evaluation of existing tools and experiments;
  • requirements, user scenarios and goals.

ISO initiative OntoIOp (Ontology interoperability)

Filed under: Interoperability,Ontology — Patrick Durusau @ 7:46 pm

ISO initiative OntoIOp (Ontology interoperability)

Prof. Dr. Till Mossakowski posted the following note to the ontolog-forum today:

Dear all,

we are currently involved in a new ISO standardisation initiative concerned with ontology interoperability.

This initiative is somehow orthogonal and complementary to Common Logic, because the topic is interoperability. This means interoperability both among ontologies (i.e. concerning matching, alignment, and suitable means to write these down) as well as among ontology languages (e.g. OWL, UML, Common Logic, or F-logic, and translations among these). The idea is to have all these languages as part of a meta-standard, such that ontology designers can bring in their ontologies verbatim as they are, and yet relate them to other ontologies (e.g. check that an OWL version of some ontology is entailed by its first-order formulation).

The first official meeting for this is already mid next month in Seoul, and we now quickly have to move forward getting some countries into the boat. It will be essential to have experts from all relevant communities involved in this effort.

If you are interested in this initiative, the rough draft [1] for the standard and a related paper [2] will give you some more info. Please have a look and let me know what you think. We also look for people who want to officially take part in the development of the standard, either actively or just by voting on behalf of your national standardisation body.

All the best,
Till

[1] http://www.dfki.de/sks/till/papers/OntoIOp.pdf
[2] http://www.dfki.de/sks/till/papers/ontotrans.pdf

I haven’t had time to review the documents, but given the time frame I wanted to bring this to your attention sooner rather than later.

When you have reviewed the documents, comments welcome.

Deconstructing BBC Site Design

Filed under: Interface Research/Design — Patrick Durusau @ 7:46 pm

Deconstructing BBC Site Design

Matthew Hurst walks through a deconstruction of the BBC website.

Studying websites known to attract users is a good way to learn web design.

Attracting users to your topic maps (or at least not repelling them) is a good idea.

At least if you want to promote topic maps and/or have your maps become commercially successful.

Build a distributed realtime tweet search system in no time.

Filed under: Search Engines — Patrick Durusau @ 7:45 pm

Build a distributed realtime tweet search system in no time.

Part 1

Part 2

The next obvious step would be to overlay a topic map onto the tweet store.

Tweets of interest would presumably be mapped fairly well; tweets that are not, well, that’s just the breaks.

Illustrates the principle that not every subject is going to get mapped.

Some may not be mapped out of lack of interest.

Some may not be mapped because they are outside the scope of a particular project.

Some may not be mapped due to oversight.

There isn’t any moral principle that every post, tweet, email list, or website has to be mapped or even indexed.

Here’s an interesting topic map experiment:

Using a web search engine, create a topic map of an international event but exclude any statements by government agencies or officials.

Think of your topic map as a noise reduction filter.
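
A first cut at such a filter is simply dropping results whose source looks governmental. A hedged sketch, assuming search results arrive as (url, snippet) pairs from whatever engine you query:

    from urllib.parse import urlparse

    # Hypothetical search results: (url, snippet) pairs.
    results = [
        ("https://www.state.gov/briefing", "Official statement on the event..."),
        ("https://example-news.com/report", "Eyewitnesses described..."),
        ("https://defence.gov.au/release", "The ministry announced..."),
    ]

    # Crude heuristic: .gov or .mil anywhere in the hostname counts as official.
    def is_government(url):
        parts = urlparse(url).netloc.lower().split(".")
        return "gov" in parts or "mil" in parts

    for url, snippet in results:
        if not is_government(url):
            print(url, "->", snippet)

Real filtering would also need to catch press releases and official spokespeople quoted on non-government sites, which is where the topic map earns its keep.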

Suggestions on evaluation mechanisms? How much less noise does your topic map have than CNN?

Health: Public-Use Data Files and Documentation

Filed under: Data Source,Dataset — Patrick Durusau @ 7:45 pm

Health: Public-Use Data Files and Documentation

While looking for other data files, I ran across this resource.

Public health is always a popular topic (sorry!).

May 22, 2011

semantic_information_modeling_for_federation_rfp

Filed under: Federation,Semantics — Patrick Durusau @ 5:35 pm

semantic_information_modeling_for_federation_rfp

From the webpage:

The intent of the “Semantic Information Modeling for Federation” (SIMF) RFP effort is to directly tackle the “data problem” where interoperability, federation, reuse and sharing of information are made difficult due to variation in terminology, viewpoint and representation. Our belief is that this is best tackled with a semantic approach, one which is tuned to the needs and viewpoints of those who create and federate data and messaging models. Currently none of the data modeling capabilities, from E/R to UML to XSD, handles the integration of independently conceived models. Even the structural mapping techniques that are used today are non-standard (with the exception of OMG-QVT). On the other hand, the semantic technologies are not providing standard conceptual frameworks or notations that resonate with these users. There has been sufficient progress in existing tools and techniques in this area, with divergent solutions, that suggests the need for standards.

The opportunity exists to make substantial gains in solving the data problem – a problem that has a multi-trillion dollar impact on unnecessary cost and lost opportunity. A 100% solution is not required or expected, providing incremental improvements will be a success. Note that as an OMG effort this is not research or an academic pursuit, but would bring some order to already developed approaches to foster the development of mainstream tools and industry support.

As part of the Semantic Architecture Track, SIMF can also be seen as a start to a semantic approach to architecture, one where multiple viewpoints of systems can co-exist yet share information and be mutually supportive. Models and modeling languages (including those from OMG) have not been sufficiently integrated, causing users problems when they want to look at more than just process, just objects or just information, for a single application or service. In that UML has many diagrams it has served that purpose to some extent, but was not designed for that role and the issues with more and more UML profiles suggest it is not the right foundation for a wide family of architectural languages at all levels. For this reason SIMF is expected to federate with UML, not be based on it.

Status: The status of SIMF is that we are still writing the RFP, which will be presented at the next OMG meeting in Salt Lake City. Note that writing the RFP requires a good specification of what is being asked for, not of the solution – the solutions come in responses to the RFP.

From OMG.

Sound familiar?

I’m signing up for the mailing list at least.

reclab

Filed under: Data Mining,Merging,Topic Map Software — Patrick Durusau @ 5:34 pm

reclab

From the website:

If you can’t bring the data to the code, bring the code to the data.

How do we do this? Simple. RecLab solves the intractable problem of supplying real data to researchers by turning it on its head. Rather than attempt the impossible task of bringing sensitive, proprietary retail data to innovative code, RecLab brings the code to the data on live retailing sites. This is done via the RichRelevance cloud environment, a large-scale, distributed environment that is the backbone of the leading dynamic personalization technology solution for the web’s top retailers.

Two things occurred to me while at this site:

1) Does this foreshadow enterprises being able to conduct competitions on analysis/mining/processing (BI) of their data? Rather than buying solutions and then learning the potential of an acquired solution?

2) For topic maps, is this a way to create competition between “merging” algorithms on “sensitive, proprietary” data? After all, it is users who decide whether appropriate “merging” has taken place.

BTW, this site has links to a contest with a $1 million prize. Just in case you are using topic maps to power recommender systems.

Data Science Toolkit

Filed under: Data Mining,Software — Patrick Durusau @ 5:34 pm

Data Science Toolkit by Pete Warden.

Interesting collection of data tools. You can use them online or download the toolkit to run locally.

Pete is the author of the Data Source Handbook from O’Reilly.

An essential vocabulary for the R language

Filed under: R — Patrick Durusau @ 5:33 pm

An essential vocabulary for the R language

Blog post and pointer to a list of 350 functions in R.

The essential vocabulary from which you can expand.

May 21, 2011

Erlang – Give it a try! (Marketing TM idea?)

Filed under: Erlang,Marketing — Patrick Durusau @ 5:24 pm

Erlang – Give it a try!

I just spot-checked this online shell, but it looks interesting.

Might be the sort of thing to get someone interested in Erlang.

Shows just enough to awaken interest in its capabilities.

Reminds me of a mother who read just enough of a story to capture a child’s interest and refused to read the rest of it. So the child had to learn to read it on their own, to find out how the story continued.

Maybe we need to do the same to market topic maps? Topic map enough content in an area that users will want to extend the topic map to make it more useful to them. To complete the story as it were.

I know what interests me, but it isn’t any more marketable than topic maps of 18th century castrati or similar obscurities. Fundable but not marketable.

Suggestions for what might prove to be popular topic maps?

FamilySearch.org

Filed under: Dataset,Marketing — Patrick Durusau @ 5:17 pm

FamilySearch.org

After locating the census record abstracts for record linkage, it occurred to me to look for census records for other countries.

Which fairly quickly put me out at family history sites.

FamilySearch.org looks like one of the better ones.

Pointers to very diverse sets of records which should provide grist for any matching algorithms as well as modeling issues for other information.

I am not familiar with the software in this area, but my impression is that a lot of effort has gone into even the free offerings, so poor UIs or poorly performing apps need not apply. Topic maps are going to have to offer a real value add to get traction in this area.

If you investigate, or are already in the family history area, post a note: does current software allow merging of family histories?

HBase 0.90.3

Filed under: HBase — Patrick Durusau @ 5:15 pm

HBase 0.90.3

A bug-fix release of HBase.

opencorporates

Filed under: Authoring Topic Maps,Dataset — Patrick Durusau @ 5:14 pm

opencorporates – The Open Corporate Database of the World

An “alpha”-status project that is collecting corporate registration/report information from around the world.

As of 21 May 2011, 12,678,041 companies.

Coverage includes five US states plus the District of Columbia, the United Kingdom, the Netherlands, and a scattering of others.

This is a useful data source, provided the corporations of interest fall in a covered jurisdiction.

The following video illustrates the usefulness of this site:

How to use OpenCorporates to match companies in Google Refine

Certainly looks like a useful tool for populating a topic map to me!
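
For populating programmatically, OpenCorporates also exposes a JSON API. A sketch of a company search, where the endpoint path and response shape are my assumptions; check the current API docs before relying on them:

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    # Assumed endpoint and response layout; verify against the API docs.
    url = "https://api.opencorporates.com/companies/search?" + urlencode({"q": "acme"})

    with urlopen(url) as resp:
        data = json.load(resp)

    # Name plus jurisdiction is about the minimum needed to seed a topic.
    for hit in data["results"]["companies"]:
        company = hit["company"]
        print(company["jurisdiction_code"], company["name"])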

That may be the ultimate value of all the Linked Data efforts: serving as the step before reconciliation of information into a reliable form for merger with other reconciled information. At some point raw information has to be gathered together, and a rough-cut gathering with Linked Data is as good as any other method.

May 20, 2011

Integrated Public Use Microdata Series (IPUMS-USA)

Filed under: Dataset,Record Linkage — Patrick Durusau @ 4:07 pm

Integrated Public Use Microdata Series (IPUMS-USA)

Lars Marius asked about some test data files for his Duke 0.1 release.

A lot of record linkage work is on medical records so there are disclosure agreements/privacy concerns, etc.

Just poking around for sample data sets and ran across this site.

From the website:

IPUMS-USA is a project dedicated to collecting and distributing United States census data. Its goals are to:

  • Collect and preserve data and documentation
  • Harmonize data
  • Disseminate the data absolutely free!

Goes back to the 1850 US Census and comes forward.

More data sets than I can easily describe and more are being produced.

Occurs to me that this could be good data for testing topic map techniques.
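
Much of the “harmonize” step, for example, is recoding enumeration-specific values into one scheme, exactly the kind of subject-identity work topic maps care about. A toy sketch (the codes below are invented, not real IPUMS codes):

    # Invented recoding tables; real IPUMS variables have their own codebooks.
    RELATE_1880 = {"1": "head", "2": "wife", "3": "child"}
    RELATE_1930 = {"01": "head", "02": "spouse", "05": "child"}

    # Fold period-specific labels into one vocabulary.
    CANONICAL = {"wife": "spouse"}

    def harmonize(code, year):
        table = RELATE_1880 if year == 1880 else RELATE_1930
        label = table.get(code, "unknown")
        return CANONICAL.get(label, label)

    print(harmonize("2", 1880))   # -> spouse
    print(harmonize("02", 1930))  # -> spouse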

Enjoy!

Writing software is harder than writing books – Post

Filed under: Artificial Intelligence — Patrick Durusau @ 4:06 pm

Writing software is harder than writing books

John D. Cook quotes Donald Knuth’s discovery that writing software such as TeX is much harder than writing books. In part:

Another is that programming demands a significantly higher standard of accuracy. Programs don’t simply have to make sense to another human being, they must make sense to a computer.

It occurs to me that there is a corollary to that statement:

Teaching is harder than writing books.

A book, at a minimum, only has to make sense to its author.

Teaching, the successful kind, has to make sense to other human beings.

And it demands a significantly higher degree of imagination to successfully impart information to students.

Perhaps programming and teaching occupy different ends of a “hardness” spectrum with regard to book writing.

Programming is harder because computers are literal and dumb, teaching is harder because students are non-literal and intelligent.

SIREn: Efficient semi-structured Information Retrieval for Lucene

Filed under: Information Retrieval,Lucene,RDF — Patrick Durusau @ 4:06 pm

SIREn: Efficient semi-structured Information Retrieval for Lucene

From the announcement:

Efficient, large scale handling of semi-structured data (including RDF) is increasingly an important issue to many web and enterprise information reuse scenarios.

Querying graph structured data (RDF) is commonly achieved using specific solutions, called triplestores, typically based on DBMS backends. In Sindice we however needed something much more scalable than DBMS and with the desirable features of the typical Web Search engines: top-k query processing, real time updates, full text search, distributed indexes over shards, etc.

While Lucene has long offered these capabilities, its native capabilities are not intended for large semi-structured document collections (or documents with very different schemas). For this reason we developed SIREn – Semantic Information Retrieval Engine – a Lucene plugin to overcome these shortcomings and efficiently index and query RDF, as well as any textual document with an arbitrary amount of metadata fields.

Given its general applicability, we are delighted to release SIREn under the Apache 2.0 open source license. We hope businesses will find SIREn useful in implementing solutions upon the Web of Data.

You can start by looking at the features, review the performance benchmarks, learn more by reading the short tutorial and then download and try SIREn by yourself.

This looks very cool!

Its tuple-processing capabilities in particular!
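
The core idea, flattening triples into fielded documents so a full-text engine can index them, is easy to sketch. A minimal, Lucene-free illustration in Python (SIREn itself does far more):

    from collections import defaultdict

    triples = [
        ("ex:siren", "rdfs:label", "Semantic Information Retrieval Engine"),
        ("ex:siren", "ex:license", "Apache 2.0"),
        ("ex:sindice", "rdfs:label", "Sindice semantic web index"),
    ]

    # Inverted index keyed by (field, token) -> subjects, i.e. each
    # predicate becomes a metadata field on the subject's "document".
    index = defaultdict(set)
    for s, p, o in triples:
        for token in o.lower().split():
            index[(p, token)].add(s)

    # Fielded query: which subjects have a label containing "semantic"?
    print(index[("rdfs:label", "semantic")])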

Getting Started Spring Data Graph

Filed under: Neo4j,Spring Data — Patrick Durusau @ 4:05 pm

A series of videos on the Spring project and Neo4j:

Getting Started Spring Data Graph Part 1

Overview of the Spring project. Crash course in NoSQL. Reviews the four types of NoSQL databases. Two axes of scalability.

Comment: Watch Part 1 only if you are: 1) A glutton for repetition, 2) A reviewer trying to be complete.

Getting Started Spring Data Graph Part 2

Continuation of Part 1. 90% of corporate issues fit well into graph databases. Property graph. Edges represent relationships. Can have properties. Graphs are whiteboard friendly. Jumps to social graph code example. Promotes performance of graph databases. Then gets to Spring = JPA for graph databases. (updated link, more comments forthcoming)

Getting Started Spring Data Graph Part 3

Continuation of Part 2. (updated link, new comments forthcoming)

Getting Started Spring Data Graph Part 4

Continuation of Part 3. (updated link, new comments forthcoming)
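
If the “property graph” mentioned in Part 2 is unfamiliar: nodes and edges both carry key/value properties. A bare-bones sketch of the model (not Neo4j’s actual API):

    # Nodes and edges both carry properties.
    nodes = {
        1: {"name": "Alice"},
        2: {"name": "Bob"},
    }
    edges = [
        # (from, to, relationship type, edge properties)
        (1, 2, "KNOWS", {"since": 2009}),
    ]

    # Traversal is following edges: who does Alice know, and since when?
    for src, dst, rel, props in edges:
        if rel == "KNOWS" and nodes[src]["name"] == "Alice":
            print(nodes[dst]["name"], props["since"])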

Suggest that you also spend time with:

Spring Data Graph with Neo4J Support

The main page for this project.

Through different links you will find:

The Spring Data Graph Guide Book (HTML)

and

The Spring Data Graph Guide Book (PDF)

I have only scanned the TOCs and they appear to be the same material.

An exciting project and one that bears watching.


I have updated the video links to point to presentations where the slide transitions work. Will be reviewing the new videos and posting updated comments.

Seevl

Filed under: Dataset,Interface Research/Design,Linked Data,Music Retrieval,Semantic Web — Patrick Durusau @ 4:04 pm

Seevl: Reinventing Music Discovery

If you are interested in music or interfaces, this is a must stop location!

Simple search box.

I tried searching for artists, albums, types of music.

In addition to search results you also get suggestions of related information.

The Why is this related? link was particularly interesting. It explains why additional information was offered for a particular search result.

Developers can access their data for non-commercial uses for free.

The simplicity of the interface was a real plus.

May 19, 2011

Duke 0.1 Release

Filed under: Duke,Entity Resolution,Lucene,Record Linkage — Patrick Durusau @ 3:28 pm

Duke 0.1 Release

Lars Marius Garshol on Duke 0.1:

Duke is a fast and flexible deduplication (or entity resolution, or record linkage) engine written in Java on top of Lucene. At the moment (2011-04-07) it can process 1,000,000 records in 11 minutes on a standard laptop in a single thread.

Version 0.1 has been released, consisting of a command-line tool which can read CSV, JDBC, SPARQL, and NTriples data. There is also an API for programming incremental processing and storing the result of processing in a relational database.

The GettingStarted page explains how to get started and has links to further documentation. This blog post describes the basic approach taken to match records. It does not deal with the Lucene-based lookup, but describes an early, slow O(n^2) prototype. This presentation describes the ideas behind the engine and the intended architecture.

If you have questions, please contact the developer, Lars Marius Garshol, larsga at garshol.priv.no.

I will look around for sample data files.
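
The O(n^2) prototype mentioned in the post is easy to picture: compare every record against every other and flag pairs scoring above a threshold. A toy sketch (Duke’s real comparators, field weighting, and Lucene lookup are much richer):

    from difflib import SequenceMatcher
    from itertools import combinations

    records = [
        {"id": 1, "name": "Acme Corp",  "city": "Oslo"},
        {"id": 2, "name": "ACME Corp.", "city": "Oslo"},
        {"id": 3, "name": "Umbrella",   "city": "Bergen"},
    ]

    def similarity(a, b):
        # Average per-field string similarity; real engines weight fields.
        fields = ("name", "city")
        ratio = lambda f: SequenceMatcher(None, a[f].lower(), b[f].lower()).ratio()
        return sum(ratio(f) for f in fields) / len(fields)

    # The O(n^2) part: every pair gets compared.
    for a, b in combinations(records, 2):
        if similarity(a, b) > 0.8:
            print("possible duplicate:", a["id"], b["id"])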

Kill Math

Filed under: Interface Research/Design,Language,Natural Language Processing,Semantics — Patrick Durusau @ 3:27 pm

Kill Math

Bret Victor writes:

The power to understand and predict the quantities of the world should not be restricted to those with a freakish knack for manipulating abstract symbols.

When most people speak of Math, what they have in mind is more its mechanism than its essence. This “Math” consists of assigning meaning to a set of symbols, blindly shuffling around these symbols according to arcane rules, and then interpreting a meaning from the shuffled result. The process is not unlike casting lots.

This mechanism of math evolved for a reason: it was the most efficient means of modeling quantitative systems given the constraints of pencil and paper. Unfortunately, most people are not comfortable with bundling up meaning into abstract symbols and making them dance. Thus, the power of math beyond arithmetic is generally reserved for a clergy of scientists and engineers (many of whom struggle with symbolic abstractions more than they’ll actually admit).

We are no longer constrained by pencil and paper. The symbolic shuffle should no longer be taken for granted as the fundamental mechanism for understanding quantity and change. Math needs a new interface.

A deeply interesting post that argues that Math needs a new interface, one more accessible to more people.

Computers can present mathematical concepts and operations in visual representations.

It is ironic that the same computers gave rise to impoverished and, for most people, difficult-to-use representations of semantics, moving away from the widely adopted, easy-to-use, and flexible representations of semantics in natural languages.

Do we need an old interface for semantics?

Designing faceted search: Getting the basics right (part 1)

Filed under: Facets,Interface Research/Design,Search Interface,Searching — Patrick Durusau @ 3:27 pm

Designing faceted search: Getting the basics right (part 1)

Tony Russell-Rose says:

Over the last couple of weeks we’ve looked at some of the more advanced design issues in faceted search, including the strengths and weaknesses of various interaction models and techniques for wayfinding and navigation. In this post, we’ll complement that material with a look at some of the other fundamental design considerations such as layout (i.e. where to place the faceted navigation menus) and default state (e.g. open, closed, or a hybrid). In so doing, I’d like to acknowledge the work of James Kalbach, and in particular his tutorial on faceted search design, which provides an excellent framework for many of the key principles outlined below.

To write or improve a faceted search interface, start with this series of posts.
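
Behind whatever layout you choose, a facet menu is just counts of field values over the current result set, recomputed after each selection. A minimal sketch:

    from collections import Counter

    # Current result set; each document carries its facet fields.
    results = [
        {"title": "Red shoes",  "color": "red",  "brand": "Acme"},
        {"title": "Blue shoes", "color": "blue", "brand": "Acme"},
        {"title": "Red hat",    "color": "red",  "brand": "Umbrella"},
    ]

    def facet_counts(docs, field):
        return Counter(d[field] for d in docs)

    print(facet_counts(results, "color"))  # Counter({'red': 2, 'blue': 1})

    # Selecting a facet value filters the result set, then everything recounts.
    narrowed = [d for d in results if d["color"] == "red"]
    print(facet_counts(narrowed, "brand"))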

How to map connections with great circles

Filed under: Geographic Data,Mapping,R — Patrick Durusau @ 3:26 pm

How to map connections with great circles

From the post:

There are various ways to visualize connections, but one of the most intuitive and straightforward ways is to actually connect entities or objects with lines. And when it comes to geographic connections, great circles are a nice way to do this.

This is a very nice R tutorial on using great circles to visualize airline connections.

The same techniques could map “connections” of tweets, phone calls, emails, any type of data that can be associated with a geographic location.
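
If you want the same effect outside R, the underlying math is compact: spherical interpolation between the two endpoints. A sketch in Python; feed the resulting points to any plotting library:

    import math

    def to_xyz(lat, lon):
        lat, lon = math.radians(lat), math.radians(lon)
        return (math.cos(lat) * math.cos(lon),
                math.cos(lat) * math.sin(lon),
                math.sin(lat))

    def great_circle_points(p1, p2, n=20):
        # Interpolate along the great circle from p1 to p2 (lat/lon in degrees).
        a, b = to_xyz(*p1), to_xyz(*p2)
        d = math.acos(sum(x * y for x, y in zip(a, b)))  # angular distance
        points = []
        for i in range(n + 1):
            f = i / n
            s1 = math.sin((1 - f) * d) / math.sin(d)
            s2 = math.sin(f * d) / math.sin(d)
            x, y, z = (s1 * u + s2 * v for u, v in zip(a, b))
            points.append((math.degrees(math.asin(z)),
                           math.degrees(math.atan2(y, x))))
        return points

    # New York to London.
    for lat, lon in great_circle_points((40.7, -74.0), (51.5, -0.1), n=5):
        print(round(lat, 1), round(lon, 1))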

Search Your Gmail Messages with ElasticSearch and Ruby

Filed under: Dataset,ElasticSearch,Search Data,Search Engines,Search Interface — Patrick Durusau @ 3:26 pm

Search Your Gmail Messages with ElasticSearch and Ruby

From the website:

If you’d like to check out ElasticSearch, there are already lots of options for where to get the data to feed it with. You can use a Twitter or Wikipedia river to fill it with gigabytes of public data, or you can feed it very quickly with some RSS feeds.

But, let’s get a bit personal, shall we? Let’s feed it with your own e-mail, imported from your own Gmail account.

A useful way to teach basic searching.

After all, a search of Wikipedia or Twitter may return impressive results, but are they correct results?

Hard for a user to say because both Wikipedia and Twitter are large enough that verification (other than by other programs) of search results isn’t possible.

Assuming your Gmail inbox is smaller than Wikipedia you should be able to recognize what results are “correct” and which ones look “off.”

And you may learn some Ruby in the bargain.

Not a bad day’s work. 😉


PS: You may want to try the links on mining Twitter, Wikipedia and RSS feeds with ElasticSearch.
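
If Ruby isn’t your thing, the same exercise works from any language against ElasticSearch’s REST API. A sketch in Python, assuming a local ElasticSearch and messages already parsed into dicts (the IMAP fetch via imaplib is omitted, and index URL layouts vary across ElasticSearch versions):

    import json
    from urllib.parse import urlencode
    from urllib.request import Request, urlopen

    # One parsed message; in practice you would pull these over IMAP.
    message = {
        "from": "alice@example.com",
        "subject": "Lunch?",
        "body": "Are you free on Thursday?",
    }

    # Index the document; ElasticSearch creates the index on first write.
    req = Request(
        "http://localhost:9200/mail/message/",
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    print(urlopen(req).read())

    # Then search it.
    query = urlencode({"q": "subject:lunch"})
    print(urlopen("http://localhost:9200/mail/_search?" + query).read())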

May 18, 2011

ICON Programming for Humanists, 2nd edition

Filed under: Data Mining,Indexing,Text Analytics,Text Extraction — Patrick Durusau @ 6:50 pm

ICON Programming for Humanists, 2nd edition

From the foreword to the first edition:

This book teaches the principles of Icon in a very task-oriented fashion. Someone commented that if you say “Pass the salt” in correct French in an American university you get an A. If you do the same thing in France you get the salt. There is an attempt to apply this thinking here. The emphasis is on projects which might interest the student of texts and language, and Icon features are instilled incidentally to this. Actual programs are exemplified and analyzed, since by imitation students can come to devise their own projects and programs to fulfill them. A number of the illustrations come naturally enough from the field of Stylistics which is particularly apt for computerized approaches.

I can’t say that the success of ICON is a recommendation for task-oriented teaching but as I recall the first edition, I thought it was effective.

Data mining of texts is an important skill in the construction of topic maps.

This is a very good introduction to that subject.
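
The book’s task-first exercises translate readily to other languages. As a flavor of the stylistics angle, a toy measure (average sentence length) in Python rather than Icon:

    import re

    def sentence_lengths(text):
        # Split on sentence-ending punctuation, count words per sentence.
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        return [len(s.split()) for s in sentences]

    sample = ("Call me Ishmael. Some years ago, never mind how long "
              "precisely, I thought I would sail about a little. "
              "It is a way I have.")

    lengths = sentence_lengths(sample)
    print(lengths, "mean:", sum(lengths) / len(lengths))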

Practical Machine Learning

Filed under: Algorithms,Machine Learning,Statistical Learning,Statistics — Patrick Durusau @ 6:45 pm

Practical Machine Learning, by Michael Jordan (UC Berkeley).

From the course webpage:

This course introduces core statistical machine learning algorithms in a (relatively) non-mathematical way, emphasizing applied problem-solving. The prerequisites are light; some prior exposure to basic probability and to linear algebra will suffice.

This is the Michael Jordan who gave a Posner Lecture at the 24th Annual Conference on Neural Information Processing Systems 2010.

Datalift

Filed under: Dataset,Linked Data,Semantic Web — Patrick Durusau @ 6:42 pm

Datalift (also available in French)

From the webpage:

Datalift brings raw structured data coming from various formats (relational databases, CSV, XML, …) to semantic data interlinked on the Web of Data.

Datalift is an experimental research project funded by the French national research agency. Its goal is to develop a platform to publish and interlink datasets on the Web of data. Datalift will both publish datasets coming from a network of partners and data providers and propose a set of tools for easing the datasets publication process.

A few steps to data heaven

The project will provide tools to facilitate each step of the publication process:

  • selecting ontologies for publishing data
  • converting data to the appropriate format (RDF using the selected ontology)
  • publishing the linked data
  • interlinking data with other data sources

The project is funded for three years so it needs to hit the ground running.

I am sure they would appreciate useful feedback.
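
As a feel for the second step, converting a record to RDF against a chosen ontology, here is a sketch with Python’s rdflib; the use of FOAF and the sample fields are my choices for illustration, not Datalift’s:

    import csv
    import io
    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    FOAF = Namespace("http://xmlns.com/foaf/0.1/")
    EX = Namespace("http://example.org/people/")

    # Stand-in for a CSV file to be lifted.
    rows = io.StringIO("id,name,email\n1,Alice,alice@example.org\n")

    g = Graph()
    g.bind("foaf", FOAF)
    for row in csv.DictReader(rows):
        person = EX[row["id"]]
        g.add((person, RDF.type, FOAF.Person))
        g.add((person, FOAF.name, Literal(row["name"])))
        g.add((person, FOAF.mbox, URIRef("mailto:" + row["email"])))

    print(g.serialize(format="turtle"))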

Webinar – Neo4j in real-world applications

Filed under: Graphs,Neo4j — Patrick Durusau @ 6:41 pm

Webinar – Neo4j in real-world applications

Thu, May 19, 2011 3:00 PM – 4:00 PM GMT

From the webpage:

Peter Neubauer as presenter. Graph databases are designed to deal with large amounts of complex data structures in a transactional and performant manner. This webinar gives an introduction to the data model and the Neo4j graph database, and walks you through some of the application domains where graphs are used in real deployments.

Apologies for the late notice; I thought I had posted a note about this webinar.

Balisage 2011 Preliminary Program

Filed under: Conferences,Data Mining,RDF,SPARQL,XPath,XQuery,XSLT — Patrick Durusau @ 6:40 pm

At-A-Glance

Program (in full)

From the announcement (Tommie Usdin):

Topics this year include:

  • multi-ended hypertext links
  • optimizing XSLT and XQuery processing
  • interchange, interoperability, and packaging of XML documents
  • eBooks and epub
  • overlapping markup and related topics
  • visualization
  • encryption
  • data mining

The acronyms this year include:

XML XSLT XQuery XDML REST XForms JSON OSIS XTemp RDF SPARQL XPath

New this year will be:

Lightning talks: an opportunity for participants to say what they think, simply, clearly, and persuasively.

As I have said before, simply the best conference of the year!

Conference site: http://www.balisage.net/

Registration: http://www.balisage.net/registration.html
