Archive for the ‘Collaborative Annotation’ Category

Authorea

Friday, May 22nd, 2015

Authorea is the collaborative typewriter for academia.

From the website:

Write on the web.
Writing a scientific article should be as easy as writing a blog post. Every document you create becomes a beautiful webpage, which you can share.
Collaborate.
More and more often, we write together. A recent paper coauthored on Authorea by a CERN collaboration counts over 200 authors. If we can solve collaboration for CERN, we can solve it for you too!
Version control.
Authorea uses Git, a robust version control system, to keep track of document changes. Every edit you and your colleagues make is recorded and can be undone at any time.
Use many formats.
Authorea lets you write in LaTeX, Markdown, HTML, Javascript, and more. Different coauthors, different formats, same document.
Data-rich science.
Did you ever wish you could share with your readers the data behind a figure? Authorea documents can take data alongside text and images, such as IPython notebooks and d3.js plots, to make your articles shine with beautiful data-driven interactive visualizations.

Sounds good? Sign up or log in to get started immediately, for free.

Authorea uses a gentle form of open-source persuasion: the free tier gives you one (1) private article but unlimited public articles, and higher monthly rates add more private articles. That works for me, because most if not all of my writing/editing is destined to be public anyway.

Standards are most useful when they are writ LARGE so private or “secret” standards have never made sense to me.

Collaborative annotation… [Human + Machine != Semantic Monotony]

Sunday, April 21st, 2013

Collaborative annotation for scientific data discovery and reuse by Kirk Borne. (Borne, K. (2013), Collaborative annotation for scientific data discovery and reuse. Bul. Am. Soc. Info. Sci. Tech., 39: 44–45. doi: 10.1002/bult.2013.1720390414)

Abstract:

Human classification alone, unable to handle the enormous quantity of project data, requires the support of automated machine-based strategies. In collaborative annotation, humans and machines work together, merging editorial strengths in semantics and pattern recognition with the machine strengths of scale and algorithmic power. Discovery informatics can be used to generate common data models, taxonomies and ontologies. A proposed project of massive scale, the Large Synoptic Survey Telescope (LSST) project, will systematically observe the southern sky over 10 years, collecting petabytes of data for analysis. The combined work of professional and citizen scientists will be needed to tag the discovered astronomical objects. The tag set will be generated through informatics and the collaborative annotation efforts of humans and machines. The LSST project will demonstrate the development and application of a classification scheme that supports search, curation and reuse of a digital repository.

A persuasive call to arms to develop “collaborative annotation”:

Humans and machines working together to produce the best possible classification label(s) is collaborative annotation. Collaborative annotation is a form of human computation [1]. Humans can see patterns and semantics (context, content and relationships) more quickly, accurately and meaningfully than machines. Human computation therefore applies to the problem of annotating, labeling and classifying voluminous data streams.
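The division of labor Borne describes can be sketched in a few lines: a machine classifier labels everything at scale, and items it is unsure about are escalated to human annotators, whose judgment wins. This is a minimal illustrative sketch, not code from the article; the names (`Item`, `collaborative_annotate`, the confidence threshold) are my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    machine_label: str
    machine_confidence: float  # 0.0 .. 1.0, from an automated classifier


def collaborative_annotate(items, human_label, threshold=0.8):
    """Keep confident machine labels; route the rest to a human annotator.

    `human_label` stands in for a person (professional or citizen scientist)
    who can weigh context, content, and relationships the machine cannot.
    """
    labels = {}
    for item in items:
        if item.machine_confidence >= threshold:
            labels[item.name] = item.machine_label
        else:
            labels[item.name] = human_label(item)
    return labels


if __name__ == "__main__":
    stream = [
        Item("obj-1", "galaxy", 0.95),
        Item("obj-2", "quasar", 0.40),  # too uncertain: escalate to a human
    ]
    # Stand-in for a human annotator (e.g., a citizen scientist in a web UI).
    result = collaborative_annotate(stream, human_label=lambda item: "supernova")
    print(result)  # {'obj-1': 'galaxy', 'obj-2': 'supernova'}
```

The machine handles the volume; the human handles the semantics. The threshold is the tunable boundary between the two.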

And more specifically for the Large Synoptic Survey Telescope (LSST):

The discovery potential of this data collection would be enormous, and its long-term value (through careful data management and curation) would thus require (for maximum scientific return) the participation of scientists and citizen scientists as well as science educators and their students in a collaborative knowledge mark-up (annotation and tagging) data environment. To meet this need, we envision a collaborative tagging system called AstroDAS (Astronomy Distributed Annotation System). AstroDAS is similar to existing science knowledge bases, such as BioDAS (Biology Distributed Annotation System, www.biodas.org).
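The kind of shared tag store AstroDAS implies might look like the sketch below: annotations keyed by sky object, contributed by many annotators (human pipelines and people alike), with all tags kept side by side rather than forced into a single label. This is a hypothetical illustration under my own assumptions; the class and method names are mine, not from AstroDAS or BioDAS.

```python
from collections import defaultdict


class AnnotationStore:
    """Toy collaborative tag store: object_id -> annotator -> set of tags."""

    def __init__(self):
        self._tags = defaultdict(lambda: defaultdict(set))

    def tag(self, object_id, annotator, *tags):
        """Record tags from one annotator for one object."""
        self._tags[object_id][annotator].update(tags)

    def tags_for(self, object_id):
        """All tags for an object, with the annotators who applied each."""
        by_tag = defaultdict(set)
        for annotator, tags in self._tags[object_id].items():
            for t in tags:
                by_tag[t].add(annotator)
        return dict(by_tag)


store = AnnotationStore()
store.tag("LSST-0001", "pipeline-v2", "variable", "transient")
store.tag("LSST-0001", "citizen:ada", "supernova", "transient")
print(store.tags_for("LSST-0001"))
# "transient" maps to both annotators: independent agreement is preserved,
# and the disagreement (variable vs. supernova) stays visible too.
```

Note that nothing here collapses the tags into one "correct" answer: the semantic diversity of the annotators is part of the record, which is exactly the point made below.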

As you might expect, semantic diversity is going to be present with “collaborative annotation.”

Semantic Monotony (aka Semantic Web) has failed for machines alone.

No question it will fail for humans + machines.

Are you ready to step up to the semantic diversity of collaborative annotation (humans + machines)?