## Archive for the ‘Documentation’ Category

### ARM Releases Machine Readable Architecture Specification (Intel?)

Saturday, April 22nd, 2017

ARM Releases Machine Readable Architecture Specification by Alastair Reid.

From the post:

Today ARM released version 8.2 of the ARM v8-A processor specification in machine readable form. This specification describes almost all of the architecture: instructions, page table walks, taking interrupts, taking synchronous exceptions such as page faults, taking asynchronous exceptions such as bus faults, user mode, system mode, hypervisor mode, secure mode, debug mode. It details all the instruction formats and system register formats. The semantics is written in ARM’s ASL Specification Language so it is all executable and has been tested very thoroughly using the same architecture conformance tests that ARM uses to test its processors (See my paper “Trustworthy Specifications of ARM v8-A and v8-M System Level Architecture”.)

The specification is being released in three sets of XML files:

• The System Register Specification consists of an XML file for each system register in the architecture. For each register, the XML details all the fields within the register, how to access the register and which privilege levels can access the register.
• The AArch64 Specification consists of an XML file for each instruction in the 64-bit architecture. For each instruction, there is the encoding diagram for the instruction, ASL code for decoding the instruction, ASL code for executing the instruction and any supporting code needed to execute the instruction and the decode tree for finding the instruction corresponding to a given bit-pattern. This also contains the ASL code for the system architecture: page table walks, exceptions, debug, etc.
• The AArch32 Specification is similar to the AArch64 specification: it contains encoding diagrams, decode trees, decode/execute ASL code and supporting ASL code.
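Because the release is plain XML, standard tooling can walk it. Here is a minimal sketch in Python; the element and attribute names below are invented for illustration, since the actual schema is defined by ARM's release and the real field layout of a register like SCTLR_EL1 is far richer than shown:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment in the spirit of the System Register Specification;
# the real files use ARM's own schema, so these element and attribute
# names are illustrative only.
REGISTER_XML = """
<register name="SCTLR_EL1" length="64">
  <field name="M"  msb="0" lsb="0"/>
  <field name="A"  msb="1" lsb="1"/>
  <field name="SA" msb="3" lsb="3"/>
</register>
"""

def list_fields(xml_text):
    """Return (register_name, [(field, msb, lsb), ...]) from a register blob."""
    root = ET.fromstring(xml_text)
    fields = [(f.get("name"), int(f.get("msb")), int(f.get("lsb")))
              for f in root.findall("field")]
    return root.get("name"), fields

name, fields = list_fields(REGISTER_XML)
print(name)         # SCTLR_EL1
print(len(fields))  # 3
```

That a dozen lines of scripting can enumerate every field of every register is exactly what the PDF-only Intel manuals make impossible.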

Alastair provides starting points for use of this material by outlining his prior uses of the same.

This raises the question: why isn’t an equivalent machine-readable data set available for the Intel® 64 and IA-32 Architectures? (PDF manuals)

The data is there, but not in a machine readable format.

Anyone know why Intel doesn’t provide the same convenience?

### Top considerations for creating bioinformatics software documentation

Wednesday, January 18th, 2017

Abstract

Investing in documenting your bioinformatics software well can increase its impact and save your time. To maximize the effectiveness of your documentation, we suggest following a few guidelines we propose here. We recommend providing multiple avenues for users to use your research software, including a navigable HTML interface with a quick start, useful help messages with detailed explanation and thorough examples for each feature of your software. By following these guidelines, you can assure that your hard work maximally benefits yourself and others.

Introduction

You have written a new software package far superior to any existing method. You submit a paper describing it to a prestigious journal, but it is rejected after Reviewer 3 complains they cannot get it to work. Eventually, a less exacting journal publishes the paper, but you never get as many citations as you expected. Meanwhile, there is not even a single day when you are not inundated by emails asking very simple questions about using your software. Your years of work on this method have not only failed to reap the dividends you expected, but have become an active irritation. And you could have avoided all of this by writing effective documentation in the first place.

Academic bioinformatics curricula rarely train students in documentation. Many bioinformatics software packages lack sufficient documentation. Developers often prefer spending their time elsewhere. In practice, this time is often borrowed, and by ducking work to document their software now, developers accumulate ‘documentation debt’. Later, they must pay off this debt, spending even more time answering user questions than they might have by creating good documentation in the first place. Of course, when confronted with inadequate documentation, some users will simply give up, reducing the impact of the developer’s work.
… (emphasis in original)

Take to heart the authors’ observation on automatic generation of documentation:

The main disadvantage of automatically generated documentation is that you have less control of how to organize the documentation effectively. Whether you used a documentation generator or not, however, there are several advantages to an HTML web site compared with a PDF document. Search engines will more reliably index HTML web pages. In addition, users can more easily navigate the structure of a web page, jumping directly to the information they need.

I would replace “…less control…” with “…virtually no meaningful control…” over the organization of the documentation.

Think about it for a second. You write short comments, sometimes even incomplete sentences, as thoughts occur to you in a code or data context.

An automated tool gathers those comments, even incomplete sentences, rips them out of their original context and strings them one after the other.

Do you think that provides a meaningful narrative flow for any reader? Including yourself?

Your documentation doesn’t have to be great literature, but as Karimzadeh and Hoffman point out, good documentation can make the difference between adoption of your hard work and its being ignored.
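The paper’s guideline about “useful help messages with detailed explanation” costs very little in practice. A sketch with a stock argparse setup; the tool name `alignstats` and its options are invented for illustration:

```python
import argparse

# Minimal sketch of the "useful help messages" guideline: every option gets
# a description and a stated default, and the parser gets a one-line summary
# plus an example invocation in the epilog.
def build_parser():
    parser = argparse.ArgumentParser(
        prog="alignstats",
        description="Summarize alignment quality from a BAM file.",
        epilog="Example: alignstats --min-mapq 30 sample.bam",
    )
    parser.add_argument("bam", help="input BAM file")
    parser.add_argument("--min-mapq", type=int, default=20,
                        help="ignore reads below this mapping quality "
                             "(default: %(default)s)")
    return parser

args = build_parser().parse_args(["sample.bam", "--min-mapq", "30"])
print(args.min_mapq)  # 30
```

Running `alignstats --help` then produces a usable quick start for free, which is one of the “multiple avenues” the authors recommend.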

Ping me if you want to take your documentation to the next level.

### The Next Generation R Documentation System [Dynamic R Documentation?]

Wednesday, August 31st, 2016

The R Documentation Task Force: The Next Generation R Documentation System by Joseph Rickert and Hadley Wickham.

From the post:

### Light Table is open source

Thursday, January 9th, 2014

Light Table is open source by Chris Granger.

From the post:

Today Light Table is taking a huge step forward – every bit of its code is now on Github and along side of that, we’re releasing Light Table 0.6.0, which includes all the infrastructure to write and use plugins. If you haven’t been following the 0.5.* releases, this latest update also brings a tremendous amount of stability, performance, and clean up to the party. All of this together means that Light Table is now the open source developer tool platform that we’ve been working towards. Go download it and if you’re new give our tutorial a shot!

If you aren’t already familiar with Light Table, check out The IDE as a value, also by Chris Granger.

Just a mention in the notes, but start listening for “contextuality.” It comes up in functional approaches to graph algorithms.

### Astera Centerprise

Wednesday, January 8th, 2014

Astera Centerprise

From the post:

The first in our Centerprise Best Practices Webinar Series discusses the features of Centerprise that make it the ideal integration solution for the high volume data warehouse. Topics include data quality (profiling, quality measurements, and validation), translating data to star schema (maintaining foreign key relationships and cardinality with slowly changing dimensions), and performance, including querying data with in-database joins and caching. We’ve posted the Q&A below, which delves into some interesting topics.

You can view the webinar video, as well as all our demo and tutorial videos, at Astera TV.

Very visual approach to data integration.

Be aware that comments on objects in a dataflow are a “planned” feature:

An extremely useful (and simple) addition to Centerprise would be the ability to pin notes onto a flow to be quickly and easily seen by anyone who opens the flow.

This would work as an object which could be dragged onto the flow, allowing the user to enter a note which would remain on-screen, unlike the existing comments which require you to actually open the object and page to the ‘comments’ pane.

This sort of logging ability will prove very useful to explain to future dataflow maintainers why certain decisions were made in the design, as well as informing them of specific changes/additions and the reasons why they were enacted.

As Centerprise is almost ‘self-documenting’, the note-keeping ability would allow us to avoid maintaining and referring to separate documentation (which can become lost)

A comment on each data object would be an improvement but a flat comment would be of limited utility.

A structured comment (perhaps extensible comment?) that captures the author, date, data source, target, etc. would make comments usefully searchable.

Including structured comments on the dataflows, transformations, maps and workflows themselves and to query for the presence of structured comments would be very useful.

A query for the existence of structured comments could help enforce local requirements for documenting data objects and operations.
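To make the idea concrete, here is a sketch of a structured comment as a fixed set of fields rather than free text, so comments become queryable. The field names are one possible choice, not anything Centerprise defines:

```python
from dataclasses import dataclass
from datetime import date

# A structured comment: author, date, source, target, and a free-text note.
# These fields are an invented example of the "extensible comment" idea.
@dataclass
class FlowComment:
    author: str
    date: date
    source: str
    target: str
    note: str

comments = [
    FlowComment("jsmith", date(2014, 1, 8), "crm.accounts",
                "warehouse.dim_account", "SCD type 2; see ticket 412"),
]

# Structured fields make comments searchable, e.g. by data source:
from_crm = [c for c in comments if c.source.startswith("crm.")]
print(len(from_crm))  # 1
```

A query over the same structure for empty or missing comments is what would let a shop enforce its local documentation requirements.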

### Setting up a Hadoop cluster

Thursday, November 21st, 2013

Setting up a Hadoop cluster – Part 1: Manual Installation by Lars Francke.

From the post:

In the last few months I was tasked several times with setting up Hadoop clusters. Those weren’t huge – two to thirteen machines – but from what I read and hear this is a common use case especially for companies just starting with Hadoop or setting up a first small test cluster. While there is a huge amount of documentation in form of official documentation, blog posts, articles and books most of it stops just where it gets interesting: Dealing with all the stuff you really have to do to set up a cluster, cleaning logs, maintaining the system, knowing what and how to tune etc.

I’ll try to describe all the hoops we had to jump through and all the steps involved to get our Hadoop cluster up and running. Probably trivial stuff for experienced Sysadmins but if you’re a Developer and finding yourself in the “Devops” role all of a sudden I hope it is useful to you.

While working at GBIF I was asked to set up a Hadoop cluster on 15 existing and 3 new machines. So the first interesting thing about this setup is that it is a heterogeneous environment: Three different configurations at the moment. This is where our first goal came from: We wanted some kind of automated configuration management. We needed to try different cluster configurations and we need to be able to shift roles around the cluster without having to do a lot of manual work on each machine. We decided to use a tool called Puppet for this task.

While Hadoop is not currently in production at GBIF there are mid- to long-term plans to switch parts of our infrastructure to various components of the HStack. Namely MapReduce jobs with Hive and perhaps Pig (there is already strong knowledge of SQL here) and also storing of large amounts of raw data in HBase to be processed asynchronously (~500 million records until next year) and indexed in a Lucene/Solr solution possibly using something like Katta to distribute indexes. For good measure we also have fairly complex geographic calculations and map-tile rendering that could be done on Hadoop. So we have those 18 machines and no real clue how they’ll be used and which services we’d need in the end.

Dated, 2011, but illustrates some of the issues I raised in: Hadoop Ecosystem Configuration Woes?

I first saw this in a tweet by Marko A. Rodriguez.

### Spreadsheets: The Ununderstood Dark Matter of IT

Monday, November 11th, 2013

Spreadsheets: The Ununderstood Dark Matter of IT by Felienne Hermans.

Description:

Spreadsheets are used extensively in industry: they are the number one tool for financial analysis and are also prevalent in other domains, such as logistics and planning. Their flexibility and immediate feedback make them easy to use for non-programmers. But they are as easy to build as they are difficult to analyze, maintain and check. Felienne’s research aims at developing methods to support spreadsheet users to understand, update and improve spreadsheets. Inspiration was taken from classic software engineering, as this field is specialized in the analysis of data and calculations. In this talk Felienne will summarize her recently completed PhD research on the topic of spreadsheet structure visualization, spreadsheet smells and clone detection, as well as presenting a sneak peek into the future of spreadsheet research at Delft University.

Some tidbits to interest you in the video:

“95% of all U.S. corporations still use spreadsheets.”

“Spreadsheets can have a long life, 5 years on average.”

“No docs, errors, long life. It looks like software!”

Designing a tool for the software users are actually using, as opposed to designing tools users ought to be using.

What a marketing concept!

Not a lot of details at the PerfectXL website.

Pay particular attention to how Felienne distinguishes a BI dashboard from a spreadsheet. You have seen that before in this blog. (Hint: Search for “F-16” or “VW.”)

No doubt you will also like Felienne’s blog.

I first saw this in a tweet by Lars Marius Garshol.

### Ten Simple Rules for Reproducible Computational Research

Sunday, November 10th, 2013

Ten Simple Rules for Reproducible Computational Research by Geir Kjetil Sandve, Anton Nekrutenko, James Taylor, Eivind Hovig. (Sandve GK, Nekrutenko A, Taylor J, Hovig E (2013) Ten Simple Rules for Reproducible Computational Research. PLoS Comput Biol 9(10): e1003285. doi:10.1371/journal.pcbi.1003285)

From the article:

Replication is the cornerstone of a cumulative science [1]. However, new tools and technologies, massive amounts of data, interdisciplinary approaches, and the complexity of the questions being asked are complicating replication efforts, as are increased pressures on scientists to advance their research [2]. As full replication of studies on independently collected data is often not feasible, there has recently been a call for reproducible research as an attainable minimum standard for assessing the value of scientific claims [3]. This requires that papers in experimental science describe the results and provide a sufficiently clear protocol to allow successful repetition and extension of analyses based on original data [4].

The importance of replication and reproducibility has recently been exemplified through studies showing that scientific papers commonly leave out experimental details essential for reproduction [5], studies showing difficulties with replicating published experimental results [6], an increase in retracted papers [7], and through a high number of failing clinical trials [8], [9]. This has led to discussions on how individual researchers, institutions, funding bodies, and journals can establish routines that increase transparency and reproducibility. In order to foster such aspects, it has been suggested that the scientific community needs to develop a “culture of reproducibility” for computational science, and to require it for published claims [3].

We want to emphasize that reproducibility is not only a moral responsibility with respect to the scientific field, but that a lack of reproducibility can also be a burden for you as an individual researcher. As an example, a good practice of reproducibility is necessary in order to allow previously developed methodology to be effectively applied on new data, or to allow reuse of code and results for new projects. In other words, good habits of reproducibility may actually turn out to be a time-saver in the longer run.

The rules:

Rule 1: For Every Result, Keep Track of How It Was Produced

Rule 2: Avoid Manual Data Manipulation Steps

Rule 3: Archive the Exact Versions of All External Programs Used

Rule 4: Version Control All Custom Scripts

Rule 5: Record All Intermediate Results, When Possible in Standardized Formats

Rule 6: For Analyses That Include Randomness, Note Underlying Random Seeds

Rule 7: Always Store Raw Data behind Plots

Rule 8: Generate Hierarchical Analysis Output, Allowing Layers of Increasing Detail to Be Inspected

Rule 9: Connect Textual Statements to Underlying Results

Rule 10: Provide Public Access to Scripts, Runs, and Results
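Rule 6, for example, costs only a couple of lines. A minimal sketch (mine, not the authors’ code): seed the random number generator explicitly, record the seed with the results, and the run becomes repeatable.

```python
import random

# Rule 6 in practice: pick a seed, record it alongside the results, and
# reuse it to reproduce the run exactly.
def reproducible_sample(population, k, seed):
    rng = random.Random(seed)  # isolated, explicitly seeded RNG
    return rng.sample(population, k)

SEED = 20131110  # logged with the analysis output
first = reproducible_sample(range(100), 5, SEED)
second = reproducible_sample(range(100), 5, SEED)
print(first == second)  # True: same seed, same result
```

Using a private `random.Random(seed)` instance rather than the module-level functions also keeps other code from silently perturbing your stream.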

To bring this a little closer to home, would another researcher be able to modify your topic map or RDF store with some certainty as to the result?

Or take over the maintenance/modification of a Hadoop ecosystem without hand holding by the current operator?

Being unable to answer either of those questions with “yes” doesn’t show up as a line item in your current budget.

However, when the need to “reproduce” or modify your system becomes mission critical, it may be a budget (and job) busting event.

What’s your tolerance for job ending risk?

I forgot to mention I first saw this in “Ten Simple Rules for Reproducible Computational Research” – An Excellent Read for Data Scientists by Sean Murphy.

### Hadoop Ecosystem Configuration Woes?

Thursday, November 7th, 2013

After listening to Kathleen Ting (Cloudera) describe how 44% of support tickets for the Hadoop ecosystem arise from misconfiguration (Dealing with Data in the Hadoop Ecosystem…), I started to wonder how many opportunities there are for misconfiguration in the Hadoop ecosystem?

That’s probably not an answerable question, but we can look at how configurations are documented in the Hadoop ecosystem:

• Accumulo – XML <!-- comment -->
• Avro – Schemas defined in JSON (no comment facility)
• Cassandra – “#” comment indicator
• Chukwa – XML <!-- comment -->
• Falcon – XML <!-- comment -->
• Flume – “#” comment indicator
• Hadoop – XML <!-- comment -->
• Hama – XML <!-- comment -->
• HBase – XML <!-- comment -->
• Hive – XML <!-- comment -->
• Knox – XML <!-- comment -->
• Mahout – XML <!-- comment -->
• Pig – C-style comments
• Sqoop – “#” comment indicator
• Tez – XML <!-- comment -->
• ZooKeeper – text but no apparent ability to comment (ZooKeeper Administrator’s Guide)
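To see two of those styles side by side, here is a sketch of the same kind of setting documented the Hadoop way and the Flume way; the property names and values are illustrative only, and Pig would use `/* ... */` C-style comments instead:

```python
import xml.etree.ElementTree as ET

# A Hadoop-style XML config with an XML comment, and a Flume-style
# properties file with a "#" comment. Names and values are invented.
HADOOP_XML = """
<configuration>
  <!-- Replication factor; 3 is the usual production default. -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
"""
FLUME_PROPS = "# channel capacity in events\nagent.channels.c1.capacity = 10000\n"

# Both comment styles are invisible to the tools that read the settings:
root = ET.fromstring(HADOOP_XML)
props = {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

flume = dict(line.split(" = ") for line in FLUME_PROPS.splitlines()
             if line and not line.startswith("#"))

print(props["dfs.replication"])             # 3
print(flume["agent.channels.c1.capacity"])  # 10000
```

Note that neither format gives the comment any structure, which is why cross-component associations between settings have nowhere to live.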

One component, Pig, uses C-style comments.

Two components, Avro and ZooKeeper, have no facility for comments at all.

Three components, Cassandra, Flume and Sqoop, use “#” for comments.

Ten components, Accumulo, Chukwa, Falcon, Hama, Hadoop, HBase, Hive, Knox, Mahout and Tez, presumably support XML comments.

A full one third of the Hadoop ecosystem uses non-XML comments, if comments are permitted at all. The other two-thirds of the ecosystem uses XML comments in some files and not others.

The entire ecosystem lacks a standard way to associate value or settings in one component with values or settings in another component.

To say nothing of associating values or settings with releases of different components.

Without looking at the details of the possible settings for each component, does that seem problematic to you?

### Cypher shell with logging

Friday, August 23rd, 2013

Cypher shell with logging by Alex Frieden.

From the post:

For those who don’t know, Neo4j is a graph database built with Java. The internet abounds with examples, so I won’t bore you with any.

Our problem was a data access problem. We built a loader, loaded our data into Neo4j, and then queried it. However, we ran into a little problem: Neo4j at the time of release logs, in the home directory (at least on Linux Red Hat), what query was run (it’s there as a hidden file). However, it doesn’t log what time the query was run. Another problem, from an administrator’s point of view, is the lack of a complete log of all queries and data access. So we built a Cypher shell that would do the logging the way we needed. Future iterations of this shell will use REST Cypher queries rather than embedded mode (which is faster but requires a local connection to the data). We also want a way in the future to output results to a file.
(…)
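The missing piece the post describes, a timestamp on every query, is a small wrapper in any language. A sketch of the idea (`run_query` here is a stand-in callable, not the Neo4j API):

```python
import logging

# Timestamp every Cypher statement before it reaches the database.
# The log format prepends the time that Neo4j's own log omitted.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s CYPHER %(message)s")

def logged_query(run_query, cypher):
    """Log the statement with a timestamp, then execute it."""
    logging.getLogger("cypher-shell").info(cypher)
    return run_query(cypher)

result = logged_query(lambda q: [], "MATCH (n) RETURN count(n)")
print(result)  # []
```

Routing the log to a file instead of stderr (via `filename=` in `basicConfig`) gives the administrator the complete, timestamped access record the post was after.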

Excellent!

Logs are a form of documentation. You may remember that documentation was #1 in the Solr Usability contest.

Documentation is important! Don’t neglect it.

### Light Table 0.5.0

Friday, August 23rd, 2013

Light Table 0.5.0 by Chris Granger.

A little later than the first week or two of August, 2013, but not by much!

Chris says Light Table is a next-gen IDE.

He may be right but to evaluate that claim, you will need to download the alpha here.

I must confess I am curious about his claim:

With the work we did to add background processing and a good deal of effort toward ensuring everything ran fast, LightTable is now comparable in speed to Vim and faster than Emacs or Sublime in most things. (emphasis added)

I want to know what “most things” Light Table does faster than Emacs. 😉

### Plan for Light Table 0.5

Tuesday, July 16th, 2013

The plan for 0.5 by Chris Granger.

From the post:

You guys have been waiting very patiently for a while now, so I wanted to give you an idea of what’s coming in 0.5. A fair amount of the work is in simplifying both the UI/workflow as well as refactoring everything to get ready for the beta (plugins!). I’ve been talking with a fair number of people to understand how they use LT or why they don’t and one of the most common pieces of feedback I’ve gotten is that while it is very simple it still seems heavier than something like Sublime. We managed to attribute this to the fact that it does some unfamiliar things, one of the biggest of which is a lack of standard menus. We don’t really gain anything by not doing menus and while there were some technical reasons I didn’t, I’ve modified node-webkit to fix that. So I’m happy to say 0.5 will use standard menus and the ever-present bar on the left will be disappearing. This makes LT about as light as it possibly can be and should alleviate the feeling that you can’t just use it as a text editor.

Looking forward to the first week or two of August, 2013. Chris’ goal for the 0.5 release!

### 13 Things People Hate about Your Open Source Docs [+ One More]

Saturday, June 22nd, 2013

13 Things People Hate about Your Open Source Docs by Andy Lester.

From the post:

Most open source developers like to think about the quality of the software they build, but the quality of the documentation is often forgotten. Nobody talks about how great a project’s docs are, and yet documentation has a direct impact on your project’s success. Without good documentation, people either do not use your project, or they do not enjoy using it. Happy users are the ones who spread the news about your project – which they do only after they understand how it works, which they learn from the software’s documentation.

Yet, too many open source projects have disappointing documentation. And it can be disappointing in several ways.

The examples I give below are hardly authoritative, and I don’t mean to pick on any particular project. They’re only those that I’ve used recently, and not meant to be exemplars of awfulness. Every project has committed at least a few of these sins. See how many your favorite software is guilty of (whether you are user or developer), and how many you personally can help fix.

Andy’s list:

1. Lacking a good README or introduction
2. Docs not available online
3. Docs only available online
4. Docs not installed with the package
5. Lack of screenshots
6. Lack of realistic examples
8. Forgetting the new user
9. Not listening to the users
10. Not accepting user input
11. No way to see what the software does without installing it
12. Relying on technology to do your writing
13. Arrogance and hostility toward the user

See Andy’s post for the details on his points and the comments that follow.

I do think Andy missed one point:

14. A commercial entity open sources a product, machine-generates its documentation, and expects users to contribute patches to the documentation for free.

What seems odd about that to you?

Developers are paid to produce poor documentation, and their response to user comments on that documentation is that the “community” should fix it for free.

At least in a true open source project, everyone is contributing and can use the (hopefully) great results equally.

Not so with a “well…, for that you would need commercial license X” type of project.

I first saw this in a tweet by Alexandre.

### Nozzle R Package

Sunday, April 14th, 2013

From the webpage:

Nozzle is an R package for generation of reports in high-throughput data analysis pipelines. Nozzle reports are implemented in HTML, JavaScript, and Cascading Style Sheets (CSS), but developers do not need any knowledge of these technologies to work with Nozzle. Instead they can use a simple R API to design and implement powerful reports with advanced features such as foldable sections, zoomable figures, sortable tables, and supplementary information. Please cite our Bioinformatics paper if you are using Nozzle in your work.

I have only looked at the demo reports but this looks quite handy.

It doesn’t hurt to have extensive documentation to justify a conclusion that took you only moments to reach.

### “Document Design and Purpose, Not Mechanics”

Friday, February 15th, 2013

“Document Design and Purpose, Not Mechanics” by Stephen Turner.

From the post:

If you ever write code for scientific computing (chances are you do if you’re here), stop what you’re doing and spend 8 minutes reading this open-access paper:

Wilson et al. Best Practices for Scientific Computing. arXiv:1210.0530 (2012). (Direct link to PDF).

The paper makes a number of good points regarding software as a tool just like any other lab equipment: it should be built, validated, and used as carefully as any other physical instrumentation. Yet most scientists who write software are self-taught, and haven’t been properly trained in fundamental software development skills.

The paper outlines ten practices every computational biologist should adopt when writing code for research computing. Most of these are the usual suspects that you’d probably guess – using version control, workflow management, writing good documentation, modularizing code into functions, unit testing, agile development, etc. One that particularly jumped out at me was the recommendation to document design and purpose, not mechanics.

We all know that good comments and documentation is critical for code reproducibility and maintenance, but inline documentation that recapitulates the code is hardly useful. Instead, we should aim to document the underlying ideas, interface, and reasons, not the implementation. (emphasis added)
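The distinction is easy to see in miniature. A hedged sketch (the example and its comments are mine, not from the paper):

```python
# "Document design and purpose, not mechanics" in miniature: the docstring
# records *why* the code is the way it is; a comment that merely restates
# the code ("multiply each p-value by n and cap at 1") adds nothing.

def adjusted_p_values(p_values):
    """Bonferroni-adjust p-values.

    Bonferroni is used instead of FDR because downstream filtering assumes
    family-wise error control -- the reason, which the code cannot express.
    Capping at 1.0 keeps results interpretable as probabilities.
    """
    n = len(p_values)
    return [min(1.0, p * n) for p in p_values]

print(adjusted_p_values([0.01, 0.04]))  # [0.02, 0.08]
```

Strip the docstring and the code still says *what* it does; only the documentation can say why Bonferroni and why the cap.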

There is no shortage of advice (largely unread) on good writing practices. 😉

Stephen calling out the advice to “…document design and purpose, not mechanics” struck me as relevant to semantic integration solutions.

In both RDF and XTM topic maps, the same URI as an identifier is taken as identifying the same subject.

But that’s mechanics isn’t it? Just string to string comparison.

Mechanics are important but they are just mechanics.

Documenting the conditions for using a URI will help guide you or your successor to using the same URI the same way.

But that takes more than mechanics.

That takes “…document[ing] the underlying ideas, interface, and reasons, not the implementation.”

### Improving User Experience in Manuals

Wednesday, February 6th, 2013

Improving User Experience in Manuals by Anastasios Karafillis.

From the post:

The manual: possibly the most awkward part of a user’s experience with a product. People avoid manuals whenever possible and designers try to build interfaces that need not rely on them. And yet, users and designers would certainly agree that you simply must provide a proper manual.

The manual can be a powerful tool for unleashing the full potential of an application, something of benefit to users and vendors. Why is it, then, that manuals so often seem to confuse users rather than help them?

Let’s look at the most common difficulties faced by technical writers, and how to best deal with them to improve the user experience of manuals.

“…a proper manual.” Doesn’t seem to be a lot to ask for.

I have seen some better than others but they were all fixed compromises of one sort or another.

Ironic, because SGML and then XML advocates have been promising users dynamic content for years. Content that could adapt to circumstances and users.

What if you had a manual that improved along with you?

A manual composed of different levels of information, which can be chosen by the user or adapted based on your performance with internal tests.

A beginning sysadmin isn’t going to be confronted with a chapter on diagnosing core dumps or long deprecated backup commands.

A topic map based manual could do that as well as integrate information from other resources.

Imagine a sysadmin manual with text imported from blogs, websites, lists, etc.

A manual that becomes a gateway to an entire area of knowledge.

That would be a great improvement in user experience with manuals!

### Documentation: It Doesn’t Suck! [Topic Maps As Semantic Documentation]

Saturday, January 19th, 2013

Documentation: It Doesn’t Suck! by Jes Schulz Borland.

Jes writes:

Some parts of our jobs are not glamorous, but necessary. For example, I have to brush Brent’s Bob Dylan wig weekly, to make sure it’s shiny and perfect. Documentation is a task many people roll their eyes at, procrastinate about starting, have a hard time keeping up-to-date, and in general avoid.

Stop avoiding it, and embrace the benefits!

The most important part of documentation is starting, so I’d like to help you by giving you a list of things to document. It’s going to take time and won’t be as fun as tuning queries from 20 minutes to 2 seconds, but it could save the day sometime in the future.

You can call this your SQL Server Run Book, your SQL Server Documentation, your SQL Server Best Practices Guide – whatever works for your environment. Make sure it’s filled in for each server, and kept up to date, and you’ll soon realize the benefits

There is even a video: Video: Documentation – It Doesn’t Suck!.

Semantic documentation isn’t the entire story behind topic maps but it is what enables the other benefits from using topic maps.

With a topic map you can document what must be matched by other documentation (other topic maps, yours or someone else’s), for both to be talking about the same subject.

And you get to choose the degree of documentation you want. You could choose a string, like owl:SameAs, and have a variety of groups using it to mean any number of things.

Or, you could choose to require several properties, language, publishing house, journal, any number of properties, and then others are talking about the same subject as yourself.

That doesn’t mean mis-use is completely avoided, only that it is made less likely. Or easier to avoid might be a better way to say it.

Not to mention that six months or a year from now, it may be easier for you to re-use your identification, since it has more than one property that must be matched.
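The difference between string mechanics and property-based identification fits in a few lines. A sketch (the property set is an invented example, not part of the XTM or RDF specifications):

```python
# Identity by string mechanics vs. identity by multiple required properties.
URI_A = "http://example.org/subject/paris"
URI_B = "http://example.org/subject/paris"
print(URI_A == URI_B)  # True: pure string-to-string comparison

def same_subject(a, b, required=("name", "country", "type")):
    """Match only when every required identifying property agrees."""
    return all(a.get(k) == b.get(k) for k in required)

city  = {"name": "Paris", "country": "France", "type": "city"}
texas = {"name": "Paris", "country": "USA",    "type": "city"}
print(same_subject(city, texas))  # False: same name, different subject
```

The single-string match happily conflates the two cities; requiring several properties to agree is what makes accidental conflation less likely.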

### 13 Things People Hate about Your Open Source Docs

Saturday, January 12th, 2013

13 Things People Hate about Your Open Source Docs by Andy Lester.

From the post:

1. Lacking a good README or introduction

2. Docs not available online

3. Docs only available online

4. Docs not installed with the package

5. Lack of screenshots

6. Lack of realistic examples

8. Forgetting the new user

9. Not listening to the users

10. Not accepting user input

11. No way to see what the software does without installing it

12. Relying on technology to do your writing

13. Arrogance and hostility toward the user

See Andy’s post for the details and suggestions on ways to improve.

### Javadoc coding standards

Friday, November 23rd, 2012

Javadoc coding standards by Stephen Colebourne.

From the post:

These are the standards I tend to use when writing Javadoc. Since personal tastes differ, I’ve tried to explain some of the rationale for some of my choices. Bear in mind that this is more about the formatting of Javadoc, than the content of Javadoc.

There is an Oracle guide which is longer and more detailed than this one. The two agree in most places, however these guidelines are more explicit about HTML tags, two spaces in @param and null-specification, and differ in line lengths and sentence layout.

Each of the guidelines below consists of a short description of the rule and an explanation, which may include an example:

Documentation of source code is vital to its maintenance. (cant)

But neither Stephen nor Oracle made much of the need to document the semantics of the source and/or data. If I am indexing/mapping across source files, <code> elements aren’t going to be enough to compare field names across documents.

I am assuming that semantic diversity is as present in source code as elsewhere. Would you assume otherwise?
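A `<code>` tag can mark `custId` as code, but it cannot say that `custId` in one file and `clientNo` in another name the same thing; that mapping has to be recorded explicitly somewhere. A sketch with invented file and field names:

```python
# Markup alone can't align semantics across sources: the mapping from each
# source's field name to a shared concept must be written down explicitly.
# File names, field names, and concepts here are invented for illustration.
FIELD_MAP = {
    ("billing.java", "custId"):   "customer_id",
    ("orders.java",  "clientNo"): "customer_id",
    ("orders.java",  "ts"):       "order_timestamp",
}

def same_field(a, b):
    """Two (file, field) pairs denote the same concept iff both map to it."""
    ca, cb = FIELD_MAP.get(a), FIELD_MAP.get(b)
    return ca is not None and ca == cb

print(same_field(("billing.java", "custId"),
                 ("orders.java", "clientNo")))  # True: same concept
```

The mapping table is exactly the kind of semantic documentation that Javadoc formatting rules never ask for.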

### Meet the new Light Table

Tuesday, November 6th, 2012

Meet the new Light Table by Chris Granger.

From the post: