Another Word For It – Patrick Durusau on Topic Maps and Semantic Diversity

October 30, 2017

Smart HTML Form Trick

Filed under: HTML,Search Interface,Searching — Patrick Durusau @ 7:37 pm

An HTML form trick to add some convenience to life by Bob DuCharme.

From the post:

On the computers that I use the most, the browser home page is an HTML file with links to my favorite pages and a “single” form that lets me search the sites that I search the most. I can enter a search term in the field for any of the sites, press Enter, and then that site gets searched. The two tricks that I use to create these fields have been handy enough that I thought I’d share them in case they’re useful to others.

I quote the word “single” above because it appears to be a single form but is actually multiple little forms in the HTML. Here is an example with four of my entries; enter something into any of the fields and press Enter to see what I mean:
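
The embedded demo won’t survive in plain text, but the pattern is easy to reconstruct. A sketch of my own (placeholder sites, not DuCharme’s exact markup): each “field” is a complete little form with its own action and parameter name, and pressing Enter in a form with a single text input submits that form.

    <form action="https://duckduckgo.com/" method="get">
      DuckDuckGo: <input type="text" name="q">
    </form>
    <form action="https://en.wikipedia.org/wiki/Special:Search" method="get">
      Wikipedia: <input type="text" name="search">
    </form>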

As always, an immediately useful tip from DuCharme!

The multiple search boxes reminded me of the early metasearch engines that combined results from multiple search engines.

Your answers will vary by topic, but what resources would you search across day to day?

March 9, 2017

How Bad Is Wikileaks Vault7 (CIA) HTML?

Filed under: HTML,Wikileaks,WWW,XQuery — Patrick Durusau @ 8:29 pm

How bad?

Unless you want to hand-correct 7809 HTML files for use with XQuery, grab the latest copy of Tidy.

It’s not the worst HTML I have ever seen, but put that in the context of having seen a lot of really poor HTML.
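
To give you a sense of the damage, a hypothetical before-and-after (XQuery wants well-formed XML, so unclosed elements, bare attributes, and uppercase tags all have to go):

    <!-- before: browsers shrug, XML parsers die -->
    <P>First paragraph
    <P>Second paragraph<BR>
    <IMG SRC=figure.png>

    <!-- after Tidy, as XHTML -->
    <p>First paragraph</p>
    <p>Second paragraph<br /></p>
    <img src="figure.png" />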

I’ve “tidied” up a test collection and will grab a fresh copy of the files before producing and releasing a clean set of the HTML files.

I’m producing a document collection for XQuery processing, working toward something suitable for NLP and other tools.

May 5, 2016

TEI XML -> HTML w/ XQuery [+ CSS -> XML]

Filed under: HTML,Text Encoding Initiative (TEI),XML,XQuery — Patrick Durusau @ 1:10 pm

Convert TEI XML to HTML with XQuery and BaseX by Adam Steffanick.

From the post:

We converted a document from the Text Encoding Initiative’s (TEI) Extensible Markup Language (XML) scheme to HTML with XQuery, an XML query language, and BaseX, an XML database engine and XQuery processor. This guide covers the basics of how to convert a document from TEI XML to HTML while retaining element attributes with XQuery and BaseX.

I’ve created a GitHub repository of sample TEI XML files to convert from TEI XML to HTML. This guide references a GitHub gist of XQuery code and HTML output to illustrate each step of the TEI XML to HTML conversion process.
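
To give the flavor of the conversion, a sketch of my own (not Steffanick’s actual input or output): a TEI rendition attribute survives the trip as an HTML class.

    <!-- TEI XML input -->
    <p>A line with <hi rend="italic">highlighted</hi> text.</p>

    <!-- plausible HTML output, attribute value retained -->
    <p>A line with <span class="italic">highlighted</span> text.</p>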

The post only treats six (6) TEI elements but the methods presented could be extended to a larger set of TEI elements.

TEI P5 has 563 elements, which may appear in varying, valid combinations. It also defines 256 attributes, which are distributed among those 563 elements.

Consider using XQuery as a quality assurance (QA) tool to ensure that encoded texts conform to your project’s definition of expected text encoding.

While I was at Adam’s site I encountered: Convert CSV to XML with XQuery and BaseX, which you should bookmark for future reference.

October 31, 2015

What is Scholarly HTML?

Filed under: HTML,Preservation,Publishing — Patrick Durusau @ 11:03 am

What is Scholarly HTML? by Robin Berjon and Sébastien Ballesteros.

Abstract:

Scholarly HTML is a domain-specific data format built entirely on open standards that enables the interoperable exchange of scholarly articles in a manner that is compatible with off-the-shelf browsers. This document describes how Scholarly HTML works and how it is encoded as a document. It is, itself, written in Scholarly HTML.

The abstract is accurate enough but the “Motivation” section provides a better sense of this project:

Scholarly articles are still primarily encoded as unstructured graphics formats in which most of the information initially created by research, or even just in the text, is lost. This was an acceptable, if deplorable, condition when viable alternatives did not seem possible, but document technology has today reached a level of maturity and universality that makes this situation no longer tenable. Information cannot be disseminated if it is destroyed before even having left its creator’s laptop.

According to the New York Times, adding structured information to their recipes (instead of exposing simply as plain text) improved their discoverability to the point of producing an immediate rise of 52 percent in traffic (NYT, 2014). At this point in time, cupcake recipes are reaping greater benefits from modern data format practices than the whole scientific endeavour.

This is not solely a loss for the high principles of knowledge sharing in science, it also has very immediate pragmatic consequences. Any tool, any service that tries to integrate with scholarly publishing has to spend the brunt of its complexity (or budget) extracting data the author would have willingly shared out of antiquated formats. This places stringent limits on the improvement of the scholarly toolbox, on the discoverability of scientific knowledge, and particularly on processes of meta-analysis.

To address these issues, we have followed an approach rooted in established best practices for the reuse of open, standard formats. The «HTML Vernacular» body of practice provides guidelines for the creation of domain-specific data formats that make use of HTML’s inherent extensibility (Science.AI, 2015b). Using the vernacular foundation overlaid with «schema.org» metadata we have produced a format for the interchange of scholarly articles built on open standards, ready for all to use.

Our high-level goals were:

  • Uncompromisingly enabling structured metadata, accessibility, and internationalisation.
  • Pragmatically working in Web browsers, even if it occasionally incurs some markup overhead.
  • Powerfully customisable for inclusion in arbitrary Web sites, while remaining easy to process and interoperable.
  • Entirely built on top of open, royalty-free standards.
  • Long-term viability as a data format.

Additionally, in view of the specific problem we addressed, in the creation of this vernacular we have favoured the reliability of interchange over ease of authoring; but have nevertheless attempted to cater to the latter as much as possible. A decent boilerplate template file can certainly make authoring relatively simple, but not as radically simple as it can be. For such use cases, Scholarly HTML provides a great output target and overview of the data model required to support scholarly publishing at the document level.

An example of an authoring format that was designed to target Scholarly HTML as an output is the DOCX Standard Scientific Style which enables authors who are comfortable with Microsoft Word to author documents that have a direct upgrade path to semantic, standard content.

Where semantic modelling is concerned, our approach is to stick as much as possible to schema.org. Beyond the obvious advantages there are in reusing a vocabulary that is supported by all the major search engines and is actively being developed towards enabling a shared understanding of many useful concepts, it also provides a protection against «ontological drift» whereby a new vocabulary is defined by a small group with insufficient input from a broader community of practice. A language that solely a single participant understands is of limited value.

In a small, circumscribed number of cases we have had to depart from schema.org, using the https://ns.science.ai/ (prefixed with sa:) vocabulary instead (Science.AI, 2015a). Our goal is to work with schema.org in order to extend their vocabulary, and we will align our usage with the outcome of these discussions.
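
To make that concrete, here is a skeletal article in the general style (plain HTML carrying schema.org types through RDFa attributes; my sketch, not an excerpt from the specification):

    <article vocab="http://schema.org/"
             prefix="sa: https://ns.science.ai/"
             typeof="ScholarlyArticle">
      <h1 property="name">What is Scholarly HTML?</h1>
      <section typeof="sa:Abstract">
        <p>Scholarly HTML is a domain-specific data format ...</p>
      </section>
    </article>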

I especially enjoyed the observation:

According to the New York Times, adding structured information to their recipes (instead of exposing simply as plain text) improved their discoverability to the point of producing an immediate rise of 52 percent in traffic (NYT, 2014). At this point in time, cupcake recipes are reaping greater benefits from modern data format practices than the whole scientific endeavour.

I don’t doubt the truth of that story, but after all, a large number of people are interested in baking cupcakes. In many cases, no more than three people are interested in reading any particular academic paper.

The use of schema.org will provide advantages for common concepts but to be truly useful for scholarly writing, it will require serious extension.

Take for example my post yesterday Deep Feature Synthesis:… [Replacing Human Intuition?, Calling Bull Shit]. What microdata from schema.org would help readers find Propositionalisation and Aggregates, 2001, which describes substantially the same technique without claims of surpassing human intuition? (That earlier paper goes uncited by the authors of the deep feature synthesis paper.)

Or the 161 papers on propositionalisation that you can find at CiteSeer?

A crude classification that can be used by search engines is very useful but falls far short of the mark in terms of finding and retrieving scholarly writing.

Semantic uniformity for classifying scholarly content hasn’t been reached by scholars or librarians despite centuries of effort. Rather than taking up that Sisyphean task, let’s map across the ever-increasing universe of semantic diversity.

June 3, 2015

W3C Validation Tools – New Location

Filed under: CSS3,HTML,HTML5,W3C,WWW — Patrick Durusau @ 10:52 am

W3C Validation Tools

The W3C graciously hosts the following free validation tools:

CSS Validator – Checks your Cascading Style Sheets (CSS)

Internationalization Checker – Checks level of internationalization-friendliness.

Link Checker – Checks your web pages for broken links.

Markup Validator – Checks the markup of your Web documents (HTML or XHTML).

RSS Feed Validator – Checks syndicated feeds (RSS and Atom).

RDF Validator – Checks and visualizes RDF documents.

Unicorn – Unified validator. HTML, CSS, Links & Mobile.

Validator.nu – Checks HTML5.

I mention that these tools are free to emphasize there is no barrier to their use.

Just as you wouldn’t submit a research paper with pizza grease stains on it, use these tools to proof draft standards before you submit them for review.

October 28, 2014

HTML5 is a W3C Recommendation

Filed under: HTML,WWW — Patrick Durusau @ 7:05 pm

HTML5 is a W3C Recommendation

From the post:

(graphic omitted) The HTML Working Group today published HTML5 as W3C Recommendation. This specification defines the fifth major revision of the Hypertext Markup Language (HTML), the format used to build Web pages and applications, and the cornerstone of the Open Web Platform.

“Today we think nothing of watching video and audio natively in the browser, and nothing of running a browser on a phone,” said Tim Berners-Lee, W3C Director. “We expect to be able to share photos, shop, read the news, and look up information anywhere, on any device. Though they remain invisible to most users, HTML5 and the Open Web Platform are driving these growing user expectations.”

HTML5 brings to the Web video and audio tracks without needing plugins; programmatic access to a resolution-dependent bitmap canvas, which is useful for rendering graphs, game graphics, or other visual images on the fly; native support for scalable vector graphics (SVG) and math (MathML); annotations important for East Asian typography (Ruby); features to enable accessibility of rich applications; and much more.

The HTML5 test suite, which includes over 100,000 tests and continues to grow, is strengthening browser interoperability. Learn more about the Test the Web Forward community effort.

With today’s publication of the Recommendation, software implementers benefit from Royalty-Free licensing commitments from over sixty companies under W3C’s Patent Policy. Enabling implementers to use Web technology without payment of royalties is critical to making the Web a platform for innovation.

Read the Press Release, testimonials from W3C Members, and acknowledgments. For news on what’s next after HTML5, see W3C CEO Jeff Jaffe’s blog post: Application Foundations for the Open Web Platform. We also invite you to check out our video Web standards for the future.

Just in case you have been holding off on HTML5 until it became a W3C Recommendation. 😉
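
If you are just now catching up, here is the flavor of two of those features in one minimal page (a sketch; substitute your own media file for the placeholder):

    <!DOCTYPE html>
    <title>HTML5 smoke test</title>
    <video src="talk.webm" controls></video>
    <canvas id="c" width="150" height="100"></canvas>
    <script>
      // draw on the resolution-dependent bitmap canvas, no plugin required
      var ctx = document.getElementById('c').getContext('2d');
      ctx.fillRect(25, 25, 100, 50);
    </script>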

Enjoy!

October 1, 2014

The Case for HTML Word Processors

Filed under: HTML,Software,Word Processing — Patrick Durusau @ 5:07 pm

The Case for HTML Word Processors by Adam Hyde.

From the post:

Making a case for HTML editors as stealth Desktop Word Processors…the strategy has been so stealthy that not even the developers realised what they were building.

We use all these over-complicated softwares to create Desktop documents. Microsoft Word, LibreOffice, whatever you like – we know them. They are one of the core apps in any users operating system. We also know that they are slow, unwieldy and have lots of quirky ways of doing things. However most of us just accept that this is the way it is and we try not to bother ourselves by noticing just how awful these softwares actually are.

So, I think it might be interesting to ask just this simple question – what if we used Desktop HTML Editors instead of Word Processors to do Word Processing? It might sound like an irrational proposition…Word Processors are, after all, for Word Processing. HTML editors are for creating…well, …HTML. But lets just forget that. What if we could allow ourselves to imagine we used an HTML editor for all our word processing needs and HTML replaces .docx and .odt and all those other over-burdened word processing formats. What do we win and what do we lose?

I’m not convinced about HTML word processors but Adam certainly starts with the right question:

What do we win and what do we lose? (emphasis added)

Line your favorite word processing format up alongside HTML + CSS and calculate the wins and losses.

Not that HTML word processors can, should or will replace complex typography when appropriate, but how many documents need the full firepower of a modern word processor?

I would ask a similar question about authoring interfaces for topic maps. What is the least interface that can usefully produce a topic map?

The full bells-and-whistles versions are common now (I omit naming names) but should those be the only choices?

PS: As far as MS Word, I use “open,” “close,” “save,” “copy,” “paste,” “delete,” “hyperlink,” “bold,” and “italic.” What’s that? Nine operations? Your experience may vary. 😉

I use LaTeX and another word processing application for most of my writing off the Web.

I first saw this in a tweet by Ivan Herman

March 25, 2014

Shadow DOM

Filed under: CSS3,Graphics,HTML,Visualization,XML — Patrick Durusau @ 3:15 pm

Shadow DOM by Steven Wittens.

From the post:

For a while now I’ve been working on MathBox 2. I want to have an environment where you take a bunch of mathematical legos, bind them to data models, draw them, and modify them interactively at scale. Preferably in a web browser.

Unfortunately HTML is crufty, CSS is annoying and the DOM’s unwieldy. Hence we now have libraries like React. It creates its own virtual DOM just to be able to manipulate the real one—the Agile Bureaucracy design pattern.

The more we can avoid the DOM, the better. But why? And can we fix it?
….

One of the better posts on markup that I have read in a very long time.

Also of interest, Steven’s heavy interest in graphics and visualization.

His MathBox project for example.

December 12, 2012

Identifiers, 404s and Document Security

Filed under: HTML,Identifiers,Security — Patrick Durusau @ 5:28 pm

I was working on a draft about identifiers (using the standard <a> element) when it occurred to me that URLs could play an unexpected role in document security. (At least unexpected by me; your mileage may vary.)

What if I create a document that has URLs like:

<a href="http://server-exists.x/page-does-not.html>text content</a>

So that a user who attempts to follow the link, gets a “404” message back.

Why is that important?

What if I am writing HTML pages at a nuclear weapon factory? I would be very interested in knowing if one of my pages had gotten off the reservation so to speak.

A request for a page that deliberately does not exist tells the server the document has strayed, and the contact information in that request can be routed for an appropriate response.

Of course, I would use better names or have pages that load, while transmitting the same contact information.
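
A minimal sketch of both variants, reusing the placeholder hostname from above (either way, the request lands in your server logs with the caller’s address):

    <!-- canary link: the server exists, the page deliberately does not -->
    <a href="http://server-exists.x/page-does-not.html">text content</a>

    <!-- quieter variant: a resource that loads normally, while the query
         string carries a document identifier back to the server -->
    <img src="http://server-exists.x/mark.png?doc=id123" width="1" height="1" alt="">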

Or have a very large uuencoded “password” file that burps, bumps and slowly downloads. (Always knew there was a reason to keep a 2400 baud modem around.)

I have suggestions on how to make a non-existent URL work, but will save those for another day.


October 9, 2012

WebPlatform.org [Pump Up Web Technology Search Clutter]

Filed under: CSS3,HTML,HTML5 — Patrick Durusau @ 2:58 pm

WebPlatform.org

From the webpage:

We are an open community of developers building resources for a better web, regardless of brand, browser or platform. Anyone can contribute and each person who does makes us stronger. Together we can continue to drive innovation on the Web to serve the greater good. It starts here, with you.

From Matt Brian:

In an attempt to create the “definitive resource” for all open Web technologies, Apple, Adobe, Facebook, Google, HP, Microsoft, Mozilla, Nokia, and Opera have joined the W3C to launch a new website called ‘Web Platform’.

The new website will serve as a single source of relevant, up-to-date and quality information on the latest HTML5, CSS3, and other Web standards, offering tips on web development and best practices for the technologies.

I first saw this at the Semanticweb.com (Angela Guess).

So, maybe having documentation, navigable and good documentation, isn’t so weird after all. 😉

Assume I search for guidance on HTML5, CSS3, etc. Now there is a new site to add to web technology search results.

Glad to see the site, but not the addition to search clutter.

I suppose you could boost the site in response to all searches for web technology. Wonder if that will happen?

Doesn’t help your local silo of links.

October 3, 2012

At or Near Final Calls on W3C Provenance

Filed under: HTML,Provenance — Patrick Durusau @ 7:48 pm

I saw a notice today about the ontology part of the W3C work on provenance. Some of it is at final call or nearly so. If you are interested, see:

  • PROV-DM, the PROV data model for provenance;
  • PROV-CONSTRAINTS, a set of constraints applying to the PROV data model;
  • PROV-N, a notation for provenance aimed at human consumption;
  • PROV-O, the PROV ontology, an OWL2 ontology allowing the mapping of PROV to RDF;
  • PROV-AQ, the mechanisms for accessing and querying provenance;
  • PROV-PRIMER, a primer for the PROV data model.

My first impression is the provenance work is more complex than HTML 3.2 and therefore unlikely to see widespread adoption. (You may want to bookmark that link. It isn’t listed on the HTML page at the W3C, even under obsolete versions.)

September 2, 2012

HTML [Lessons in Semantic Interoperability – Part 3]

Filed under: HTML,Interoperability,Semantics — Patrick Durusau @ 12:06 pm

If HTML is an example of semantic interoperability, are there parts of HTML that can be re-used for more semantic interoperability?

Some three (3) year old numbers on usage of HTML elements:

Element   Percentage
a            21.00
td           15.63
br            9.08
div           8.23
tr            8.07
img           7.12
option        4.90
li            4.48
span          3.98
table         3.15
font          2.80
b             2.32
p             1.98
input         1.79
script        1.77
strong        0.97
meta          0.95
link          0.66
ul            0.65
hr            0.37

Source: http://webmasters.stackexchange.com/questions/11406/recent-statistics-on-html-usage-in-the-wild

Assuming they still hold true, the <a> element is by far the most popular.

Implications for a semantic interoperability solution that leverages the <a> element?

Leave the syntax the hell alone!

As we saw in parts 1 and 2 of this series, the <a> element has:

  • simplicity
  • immediate feedback

If you don’t believe me, teach someone who doesn’t know HTML at all how to create an <a> element and verify its presence in a browser. (I’ll wait.)

Back so soon? 😉
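
For the record, the entire exercise (save it as a file, open it in a browser, click):

    <!DOCTYPE html>
    <p><a href="http://example.com/">My first link</a></p>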

To summarize: The <a> element is simple, has immediate feedback and is in widespread use.

All of which makes it a likely candidate to leverage for semantic interoperability. But how?

And what of all the other identifiers in the world? What happens to them?

September 1, 2012

HTML [Lessons in Semantic Interoperability – Part 2]

Filed under: HTML,Interoperability,Semantics,Web Server — Patrick Durusau @ 10:11 am

While writing Elli (Erlang Web Server) [Lessons in Semantic Interoperability – Part 1], I got distracted by the realization that web servers produce semantically interoperable content every day. Lots of it. For hundreds of millions of users.

My question: What makes the semantics of HTML different?

The first characteristic that came to mind was simplicity. Unlike some markup languages, ;-), HTML did not have to await the creation of WYSIWYG editors to catch on. In part I suspect because after a few minutes with it, most users (not all), could begin to author HTML documents.

Think about the last time you learned something new. What is the one thing that brings closure to the learning experience?

Feedback, knowing if your attempt at an answer is right or wrong. If right, you will attempt the same solution under similar circumstances in the future. If wrong, you will try again (hopefully).

When HTML appeared, so did primitive (in today’s terms) web browsers.

Any user learning HTML could get immediate feedback on their HTML authoring efforts.

Not:

  • After installing additional validation software
  • After debugging complex syntax or configurations
  • After millions of other users do the same thing
  • After new software appears to take advantage of it

Immediate feedback means just that: immediate feedback.

The second characteristic is immediate feedback.

You can argue that such feedback was an environmental factor and not a characteristic of HTML proper.

Possibly, possibly. But if such a distinction is possible and meaningful, how does it help with the design and implementation of the next successful semantic interoperability language?

I would argue that, by whatever means, any successful semantic interoperability language is going to include immediate feedback, however you classify it.

August 18, 2012

Creating Your First HTML 5 Web Page [HTML5 – Feature Freeze?]

Filed under: HTML,HTML5,WWW — Patrick Durusau @ 4:06 pm

Creating Your First HTML 5 Web Page by Michael Dorf.

From the post:

Whether you have been writing web pages for a while or you are new to writing HTML, the new HTML 5 elements are still within your reach. It is important to learn how HTML 5 works since there are many new features that will make your pages better and more functional. Once you get your first web page under your belt you will find that they are very easy to put together and you will be on your way to making many more.

To begin, take a look at this base HTML page we will be working with. This is just a plain-ol’ HTML page, but we can start adding HTML5 elements to jazz it up!

But that’s not why I am posting it here. 😉

A little later Michael says:

The new, simple DOCTYPE is much easier to remember and use than previous versions. The W3C is trying to stop versioning HTML so that backwards compatibility will become easier, so there are “technically” no more versions of HTML.
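
He’s right about “easier to remember.” Compare the HTML 4.01 Strict doctype with the HTML5 one:

    <!-- HTML 4.01 Strict -->
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
        "http://www.w3.org/TR/html4/strict.dtd">

    <!-- HTML5, unversioned -->
    <!DOCTYPE html>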

I’m not sure I follow on “…to stop versioning HTML so that backwards compatibility will become easier….”

Unless that means that HTML (5 I assume) is going into a feature/semantic freeze?

That would promote backwards compatibility but I am not sure is a good solution.

Just curious if you have heard the same?

Comments?

December 30, 2011

23 Useful Online HTML5 Tools

Filed under: HTML,Web Applications — Patrick Durusau @ 6:11 pm

23 Useful Online HTML5 Tools

Just in case you are working on delivery of topic maps using HTML5.

I am curious about the “Are you aware that HTML5 is captivating the web by leaps and bounds?” lead-off line.

Particularly when I read articles like: HTML5: Current progress and adoption rates.

Or the following quote from: HTML5 Adoption Might Hurt Apple’s Profit, Research Finds

The switch from native apps to HTML5 apps will not happen overnight. At the moment, HTML5 apps have some problems that native apps do not. HTML5 apps are typically slower than native apps, which is a particularly important issue for games. An estimated 20 percent of mobile games will most likely never be Web apps, Bernstein said.

Furthermore, there are currently differences in Web browsers across mobile platforms that can raise development costs for HTML5 apps. They can also pose a greater security risk. This can result in restricting access to underlying hardware by handset manufacturers to reduce the possible impact of these risks.

Taking all this into account, Bernstein Research reckoned that HTML5 will mature in the next few years, which will in turn have an impact on Apple’s revenue growth. Nevertheless, the research firm, which itself makes a market in Apple, still recommended investing in the company.

Apple executives are reported to be supporters of HTML5. Which makes sense if you think about it. By the time HTML5 matures enough to be a threat, Apple will have moved on, leaving the HTML5ers to fight over what is left in a diminishing market share. Supporting a technology that makes your competition’s apps slower and less secure makes sense as well.

How are you using HTML5 with topic maps?

November 20, 2011

These Aren’t the Sites You’re Looking For: Building Modern Web Apps

Filed under: HTML,Interface Research/Design — Patrick Durusau @ 4:09 pm

These Aren’t the Sites You’re Looking For: Building Modern Web Apps

Interesting promo for HTML5, an emerging way to deliver interaction with a topic map.

The presentation does not focus on use of user feedback, the absence of which can leave you with a “really cool” interface that no one outside the development team really likes. To no small degree, it is good interface design with users that tells the tale, not how the interface is seen to work on the “other” side of the screen.

BTW, the slides go out of their way to promote the Chrome browser. Browser usage statistics, you do the math. Marketing is a matter of numbers, not religion.

If you are experimenting with HTML5 as a means to interact with a topic map engine, I would appreciate a note when you are ready to go public.

October 27, 2011

HTML5 web dev reading list

Filed under: HTML,Interface Research/Design — Patrick Durusau @ 4:45 pm

HTML5 web dev reading list

I am sure there are more of these than can be easily counted.

Suggestions on others that will be particularly useful for people developing topic map interfaces? (Should not be different from effective interfaces in general.)

Thanks!

October 23, 2011

The Simple Way to Scrape an HTML Table: Google Docs

Filed under: Data Mining,HTML — Patrick Durusau @ 7:22 pm

The Simple Way to Scrape an HTML Table: Google Docs

From the post:

Raw data is the best data, but a lot of public data can still only be found in tables rather than as directly machine-readable files. One example is the FDIC’s List of Failed Banks. Here is a simple trick to scrape such data from a website: Use Google Docs.

OK, not a great trick but if you are in a hurry it may be a useful one.
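
For the curious, the trick rests on the spreadsheet function =ImportHtml("URL", "table", 1), which pulls a table found at the URL into the sheet; if memory serves, the last argument selects which table (or list) on the page to import.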

Of course, I get the excuse from local governments that their staff can’t export data in useful formats (I get images of budget documents in PDF files, how useful is that?).

October 13, 2011

Open – Videos

Filed under: Conferences,HTML — Patrick Durusau @ 6:56 pm

Open – Videos

For those of you who don’t think HTML5 and developers are all that weird:

Full-length videos from the first two TimesOpen events, HTML5 and Beyond, and Innovating Developer Culture, are now available. Approximately five (5!) hours in total, there’s a lot of good information.

We have the lineup in place for the next TimesOpen on Personalization & Privacy, taking place Tuesday October 25, 6:30 p.m., at the Times Building. Details and registration information will be posted soon (like next week).

October 2, 2011

HTML Data Task Force

Filed under: HTML,Microdata,RDFa,W3C — Patrick Durusau @ 6:34 pm

HTML Data Task Force, chaired by Jeni Tennison.

Another opportunity to participate in important work at the W3C without a membership. The “details” of getting diverse formats to work together.

Close analysis may show the need for changes to syntaxes, etc., but as far as mapping goes, topic maps can take syntaxes as they are. Could be an opportunity to demonstrate working solutions for actual use cases.

From the wikipage:

This HTML Data Task Force considers RDFa 1.1 and microdata as separate syntaxes, and conducts a technical analysis on the relationship between the two formats. The analysis discusses specific use cases and provide guidance on what format is best suited for what use cases. It further addresses the question how different formats can be used within the same document when required and how data expressed in the different formats can be combined by consumers.

The task force MAY propose modifications in the form of bug reports and change proposals on the microdata and/or RDFa specifications, to help users to easily transition between the two syntaxes or use them together. As with all such comments, the ultimate decisions on implementing these will rest with the respective Working Groups.

Further, the Task Force should also produce a draft specifications of mapping algorithms from an HTML+microdata content to RDF, as well as a mapping of RDFa to microdata’s JSON format. These MAY serve as input documents to possible future recommendation track works. These mappings should be, if possible, generic, i.e., they should not be dependent on any particular vocabulary. A goal for these mappings should be to facilitate the use of both formats with the same vocabularies without creating incompatibilities.

The Task Force will also consider design patterns for vocabularies, and provide guidance on how vocabularies should be shaped to be usable with both microdata and RDFa and potentially with microformats. These patterns MAY lead to change proposals of existing (RDF) vocabularies, and MAY result in general guidelines for the design of vocabularies for structured data on the web, building on existing community work in this area.

The Task Force liaises with the SWIG Web Schemas Task Force to ensure that lessons from real-world experience are incorporated into the Task Force recommendations and that any best practices described by the Task Force are synchronised with real-world practice.

The Task Force conducts its work through the public-html-data-tf@w3.org mailing list (use this link to subscribe or look at the public archives), as well as on the #html-data-tf channel of the (public) W3C IRC server.
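
To make the “two syntaxes” concrete, here is the same statement both ways (my sketch, with a schema.org type both formats handle):

    <!-- microdata -->
    <p itemscope itemtype="http://schema.org/Person">
      <span itemprop="name">Jeni Tennison</span> chairs the task force.
    </p>

    <!-- RDFa 1.1 -->
    <p vocab="http://schema.org/" typeof="Person">
      <span property="name">Jeni Tennison</span> chairs the task force.
    </p>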
