Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

April 13, 2013

Law Classification Added to Library of Congress Linked Data Service

Filed under: Classification,Law,Linked Data — Patrick Durusau @ 4:39 am

Law Classification Added to Library of Congress Linked Data Service by Kevin Ford.

From the post:

The Library of Congress is pleased to make the K Class – Law Classification – and all its subclasses available as linked data from the LC Linked Data Service, ID.LOC.GOV. K Class joins the B, N, M, and Z Classes, which have been in beta release since June 2012. With about 2.2 million new resources added to ID.LOC.GOV, K Class is nearly eight times larger than the B, M, N, and Z Classes combined. It is four times larger than the Library of Congress Subject Headings (LCSH). If it is not the largest class, it is second only to the P Class (Literature) in the Library of Congress Classification (LCC) system.

We have also taken the opportunity to re-compute and reload the B, M, N, and Z classes in response to a few reported errors. Our gratitude to Caroline Arms for her work crawling through B, M, N, and Z and identifying a number of these issues.

Please explore the K Class for yourself at http://id.loc.gov/authorities/classification/K or all of the classes at http://id.loc.gov/authorities/classification.

The classification section of ID.LOC.GOV remains a beta offering. More work is needed not only to add the additional classes to the system but also to continue to work out issues with the data.

As always, your feedback is important and welcomed. Your contributions directly inform service enhancements. We are interested in all forms of constructive commentary on all topics related to ID. But we are particularly interested in how the data available from ID.LOC.GOV is used and continue to encourage the submission of use cases describing how the community would like to apply or repurpose the LCC data.

You can send comments or report any problems via the ID feedback form or ID listserv.

Not leisure reading for everyone but if you are interested, this is fascinating source material.

And an important source of information for potential associations between subjects.
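If you want to poke at the data programmatically, each class URI on ID.LOC.GOV can be fetched as RDF. A minimal sketch, assuming the service honors an RDF/XML Accept header for the K Class URI (check the site for the formats it actually serves):

```python
# A minimal sketch (not an official LC client): fetch one class record from
# ID.LOC.GOV via HTTP content negotiation and list its SKOS preferred labels.
# The Accept header and the vocabulary served are assumptions; adjust to what
# the service actually returns.
import requests
from rdflib import Graph, Namespace

SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")
uri = "http://id.loc.gov/authorities/classification/K"

resp = requests.get(uri, headers={"Accept": "application/rdf+xml"}, timeout=30)
resp.raise_for_status()

g = Graph()
g.parse(data=resp.text, format="xml")

# Print every preferred label the graph asserts.
for s, _, label in g.triples((None, SKOS.prefLabel, None)):
    print(s, label)
```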

I first saw this at: Ford: Law Classification Added to Library of Congress Linked Data Service.

March 29, 2013

Mathematics Cannot Be Patented. Case Dismissed.

Filed under: Law,Mathematics,Patents — Patrick Durusau @ 4:48 am

Mathematics Cannot Be Patented. Case Dismissed. by Alan Schoenbaum.

From the post:

Score one for the good guys. Rackspace and Red Hat just defeated Uniloc, a notorious patent troll. This case never should have been filed. The patent never should have been issued. The ruling is historic because, apparently, it was the first time that a patent suit in the Eastern District of Texas has been dismissed prior to filing an answer in the case, on the grounds that the subject matter of the patent was found to be unpatentable. And was it ever unpatentable.

Red Hat indemnified Rackspace in the case. This is something that Red Hat does well, and kudos to them. They stand up for their customers and defend these Linux suits. The lawyers who defended us deserve a ton of credit. Bill Lee and Cynthia Vreeland of Wilmer Hale were creative and persuasive, and their strategy to bring the early motion to dismiss was brilliant.

The patent at issue is a joke. Uniloc alleged that a floating point numerical calculation by the Linux operating system violated U.S. Patent 5,892,697 – an absurd assertion. This is the sort of low quality patent that never should have been granted in the first place and which patent trolls buy up by the bushel full, hoping for fast and cheap settlements. This time, with Red Hat’s strong backing, we chose to fight.

The outcome was just what we had in mind. Chief Judge Leonard Davis found that the subject matter of the software patent was unpatentable under Supreme Court case law and, ruling from the bench, granted our motion for an early dismissal. The written order, which was released yesterday, is excellent and well-reasoned. It’s refreshing to see that the judiciary recognizes that many of the fundamental operations of a computer are pure mathematics and are not patentable subject matter. We expect, and hope, that many more of these spurious software patent lawsuits are dismissed on similar grounds.

A potential use case for a public topic map on patents?

At least on software patents?

I'm thinking a topic map could be constructed of all the current patents that address mathematical operations, enabling academics and researchers to focus on factual analysis of the processes claimed by those patents.

From the factual analysis, other researchers, primarily lawyers and law students, could outline legal arguments, tailored for each patent, as to its invalidity.

A community resource, not unlike a patent bank, that would strengthen the community’s hand when dealing with patent trolls.
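As a rough illustration of the idea, and nothing more, here is a toy sketch of patents-as-topics with associations to the mathematical operations they claim. The patent title and the analysis reference below are placeholders, not facts about U.S. Patent 5,892,697:

```python
# A toy sketch (not TMDM-conformant) of the proposed map: patents as topics,
# with associations linking each patent to the mathematical operations it
# claims and to analyses of those claims. Titles and references are placeholders.
patents = {
    "US5892697": {"title": "Floating-point processing method (illustrative title)"},
}
operations = {
    "floating-point-arithmetic": {"name": "Floating-point arithmetic"},
}
associations = [
    {"type": "claims", "patent": "US5892697", "operation": "floating-point-arithmetic"},
    {"type": "analyzed-by", "patent": "US5892697",
     "analysis": "Factual analysis of the claimed process (hypothetical reference)"},
]

def operations_claimed(patent_id):
    """Return the operations a given patent claims, per the associations."""
    return [a["operation"] for a in associations
            if a["type"] == "claims" and a["patent"] == patent_id]

print(operations_claimed("US5892697"))
```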

PS: I guess this means I need to stop working on my patent for addition. 😉

March 25, 2013

The Tallinn Manual [Laws of War & Topic Maps]

Filed under: Government,Law — Patrick Durusau @ 2:00 pm

The Tallinn Manual

From the webpage:

The Tallinn Manual on the International Law Applicable to Cyber Warfare, written at the invitation of the Centre by an independent ‘International Group of Experts’, is the result of a three-year effort to examine how extant international law norms apply to this ‘new’ form of warfare. The Tallinn Manual pays particular attention to the jus ad bellum, the international law governing the resort to force by States as an instrument of their national policy, and the jus in bello, the international law regulating the conduct of armed conflict (also labelled the law of war, the law of armed conflict, or international humanitarian law). Related bodies of international law, such as the law of State responsibility and the law of the sea, are dealt with in the context of these topics.

The Tallinn Manual is not an official document, but instead an expression of opinions of a group of independent experts acting solely in their personal capacity. It does not represent the views of the Centre, our Sponsoring Nations, or NATO. It is also not meant to reflect NATO doctrine. Nor does it reflect the position of any organization or State represented by observers.

So you don’t run afoul of the laws of war with any of your topic map activities.

I first saw this in Nat Torkington’s Four short links: 22 March 2013.

I would normally credit his source but they say:

All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

So I can’t tell you the name of the resource or its location. Sorry.

I did include the direct URL to the Tallinn Manual, which isn’t covered by their copyright.

PS: Remember “war crimes” are defined post-hoc by the victors so choose your side carefully.

March 24, 2013

Mapping the Supreme Court

Filed under: Law,Legal Informatics — Patrick Durusau @ 3:05 pm

Mapping the Supreme Court

From the webpage:

The Supreme Court Mapping Project is an original software-driven initiative currently in Beta development. The project, under the direction of University of Baltimore School of Law Assistant Professor Colin Starger, seeks to use information design and software technology to enhance teaching, learning, and scholarship focused on Supreme Court precedent.

The SCOTUS Mapping Project has two distinct components:

Enhanced development of the Mapper software. This software enables users to create sophisticated interactive maps of Supreme Court doctrine by plotting relationships between majority, concurring and dissenting opinions. With the software, users can both visualize how different “lines” of Supreme Court opinions have evolved, and employ animation to make interactive presentations for audiences.

Building an extensive library of Supreme Court doctrinal maps. By highlighting the relationships between essential and influential Court opinions, these maps promote efficient learning and understanding of key doctrinal debates and can assist students, scholars, and practitioners alike. The library already includes maps of key regions of doctrine surrounding the Due Process Clause, the Commerce Clause, and the Fourth Amendment.

The SCOTUS Mapping Project is in Beta-phase development and is currently seeking Beta participants. If you are interested in participating in the Beta phase of the project, contact Prof. Starger.

For identifying and learning lines of Supreme Court decisions, an excellent tool.

I thought the combined mapping in Maryland v. King (did a warrantless, suspicionless search of DNA violate the Fourth Amendment?) was particularly useful. (The MD v. King map image in the original post links to the full-size image.)

It illustrates that Supreme Court decisions on the Fourth Amendment are more mixed than is represented in the popular press.

Using prior decisions as topics, it would be interesting to see a topic map of the social context of those prior decisions.

No Supreme Court decision occurs in a vacuum.
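For readers who think in code rather than diagrams, here is a minimal sketch of the underlying idea: opinions as nodes, doctrinal relationships as edges, and a "line" of doctrine as a path through the graph. The case names are real, but the edges below are illustrative only, not a claim about the actual citation record:

```python
# A minimal sketch of a doctrinal map as a directed graph. Edge labels and the
# specific relationships are illustrative placeholders.
import networkx as nx

g = nx.DiGraph()
g.add_edge("Maryland v. King", "Katz v. United States (1967)", relation="relies on")
g.add_edge("Maryland v. King", "Kyllo v. United States (2001)", relation="distinguishes")
g.add_edge("Kyllo v. United States (2001)", "Katz v. United States (1967)", relation="relies on")

# Walk the "lines" of Fourth Amendment doctrine that end at Katz.
for path in nx.all_simple_paths(g, "Maryland v. King", "Katz v. United States (1967)"):
    print(" -> ".join(path))
```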

March 17, 2013

Open Law Lab

Filed under: Education,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 12:36 pm

Open Law Lab

From the webpage:

Open Law Lab is an initiative to design law – to make it more accessible, more usable, and more engaging.

Projects:

Law Visualized

Law Education Tech

Usable Court Systems

Access to Justice by Design

Not to mention a number of interesting blog posts represented by images further down the homepage.

Access/interface issues are universal and law is a particularly tough nut to crack.

Progress in providing access to legal materials could well carry over to other domains.

I first saw this at: Hagan: Open Law Lab.

February 26, 2013

Naming U.S. Statutes

Filed under: Government,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 1:53 pm

Strause et al.: How Federal Statutes Are Named, and the Yale Database of Federal Statute Names

The post centers on How Federal Statutes Are Named, by Renata E.B. Strause, Allyson R. Bennett, Caitlin B. Tully, M. Douglass Bellis, and Eugene R. Fidell, Law Library Journal, 105, 7-30 (2013), but also includes references to other U.S. statute name resources.

Quite useful if you are developing any indexing/topic map service that involves U.S. statutes.

There is mention of a popular-names resource for French statutes.

I assume there are similar resources for other legal jurisdictions. If you know of such resources, I am sure the Legal Informatics Blog would be interested.

Wikipedia and Legislative Data Workshop

Filed under: Law,Law - Sources,Wikipedia — Patrick Durusau @ 1:52 pm

Wikipedia and Legislative Data Workshop

From the post:

Interested in the bills making their way through Congress?

Think they should be covered well in Wikipedia?

Well, let’s do something about it!

On Thursday and Friday, March 14th and 15th, we are hosting a conference here at the Cato Institute to explore ways of using legislative data to enhance Wikipedia.

Our project to produce enhanced XML markup of federal legislation is well under way, and we hope to use this data to make more information available to the public about how bills affect existing law, federal agencies, and spending, for example.

What better way to spread knowledge about federal public policy than by supporting the growth of Wikipedia content?

Thursday’s session is for all comers. Starting at 2:30 p.m., we will familiarize ourselves with Wikipedia editing and policy, and at 5:30 p.m. we’ll have a Sunshine Week reception. (You don’t need to attend in the afternoon to come to the reception. Register now!)

On Friday, we’ll convene experts in government transparency, in Wikipedia editorial processes and decisions, and in MediaWiki technology to think things through and plot a course.

I remain unconvinced about greater transparency into the “apparent” legislative process.

On the other hand, it may provide the “hook” or binding point to make who wins and who loses more evident.

If the Cato representatives mention their ideals being founded in the 18th century, you might want to remember that infant mortality was greater than 40% in foundling hospitals of the time.

People who speak glowingly of the 18th century didn’t live in the 18th century. And imagine themselves as landed gentry of the time.

I first saw this at the Legal Informatics Blog.

February 23, 2013

U.S. Statutes at Large 1951-2009

Filed under: Government,Government Data,Law,Law - Sources — Patrick Durusau @ 4:28 pm

GPO is Closing Gap on Public Access to Law at JCP’s Direction, But Much Work Remains by Daniel Schuman.

From the post:

The GPO’s recent electronic publication of all legislation enacted by Congress from 1951-2009 is noteworthy for several reasons. It makes available nearly 40 years of lawmaking that wasn’t previously available online from any official source, narrowing part of a much larger information gap. It meets one of three long-standing directives from Congress’s Joint Committee on Printing regarding public access to important legislative information. And it has published the information in a way that provides a platform for third-party providers to cleverly make use of the information. While more work is still needed to make important legislative information available to the public, this online release is a useful step in the right direction.

Narrowing the Gap

In mid-January 2013, GPO published approximately 32,000 individual documents, along with descriptive metadata, including all bills enacted into law, joint concurrent resolutions that passed both chambers of Congress, and presidential proclamations from 1951-2009. The documents have traditionally been published in print in volumes known as the “Statutes at Large,” which commonly contain all the materials issued during a calendar year.

The Statutes at Large are literally an official source for federal laws and concurrent resolutions passed by Congress. The Statutes at Large are compilations of “slip laws,” bills enacted by both chambers of Congress and signed by the President. By contrast, while many people look to the US Code to find the law, many sections of the Code in actuality are not the “official” law. A special office within the House of Representatives reorganizes the contents of the slip laws thematically into the 50 titles that make up the US Code, but unless that reorganized document (the US Code) is itself passed by Congress and signed into law by the President, it remains an incredibly helpful but ultimately unofficial source for US law. (Only half of the titles of the US Code have been enacted by Congress, and thus have become law themselves.) Moreover, if you want to see the intact text of the legislation as originally passed by Congress — before it’s broken up and scattered throughout the US Code — the place to look is the Statutes at Large.

Policy wonks and trivia experts will have a field day but the value of the Statutes at Large isn’t apparent to me.

I assume there are cases where errors can be found between the U.S.C. (United States Code) and the Statutes at Large. The significance of those errors is unknown.

Like my comments on the SEC Midas program, knowing a law was passed isn’t the same as knowing who benefits from it.

Or who paid for its passage.

Knowing which laws were passed is useful.

Knowing who benefited or who paid, priceless.

February 13, 2013

LobbyPlag: compares text of EU regulation with texts of lobbyists’ proposals

Filed under: EU,Law,Plagiarism — Patrick Durusau @ 1:21 pm

LobbyPlag: compares text of EU regulation with texts of lobbyists’ proposals

From the post:

A service called LobbyPlag lets users view provisions of EU regulations and compare them to provisions of lobbyists’ proposals.

The example currently available on LobbyPlag concerns the General Data Protection Regulation (GDPR).

Click here to see how LobbyPlag compares the GDPR’s forum shopping provision to what the site claims are lobbyists’ proposals for that provision.

LobbyPlag is an interesting use of legal text comparison tools to promote transparency.
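The core comparison is something you can approximate with standard tooling. A rough sketch with Python's difflib, using two invented fragments (placeholders, not actual GDPR or lobbyist language):

```python
# A rough sketch of the kind of comparison LobbyPlag performs, on two short,
# invented text fragments.
import difflib

regulation = "The supervisory authority of the main establishment shall be competent."
proposal   = "The supervisory authority of the main establishment alone shall be competent."

matcher = difflib.SequenceMatcher(None, regulation.split(), proposal.split())
print("similarity: %.2f" % matcher.ratio())

# Show word-level changes between the two versions.
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(tag, regulation.split()[i1:i2], "->", proposal.split()[j1:j2])
```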

See the original post for more details and links.

Another step in the right direction.

February 10, 2013

Lex Machina

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 2:44 pm

Lex Machina: IP Litigation and analytics

From the about page:

Every day, Lex Machina’s crawler extracts data and documents from PACER, all 94 District Court sites, ITC’s EDIS site and the PTO site.

The crawler automatically captures every docket event and downloads key District Court case documents and every ITC document. It converts the documents by optical character recognition (OCR) to searchable text and stores each one as a PDF file.

When the crawler encounters an asserted or cited patent, it fetches information about that patent from the PTO site.

Next, the crawler invokes Lex Machina’s state-of-the-art natural language processing (NLP) technology, which includes Lexpressions™, a proprietary legal text classification engine. The NLP technology classifies cases and dockets and resolves entity names. Attorney review of docket and case classification, patents and outcomes ensures high-quality data. The structured text indexer then orders all the data and stores it for search.

Lex Machina’s web-based application enables users to run search queries that deliver easy access to the relevant docket entries and documents. It also generates lists that can be downloaded as PDF files or spreadsheet-ready CSV files.

Finally, the system generates a daily patent litigation update email, which provides links to all new patent cases and filings.

Lex Machina does not:

  • Index the World Wide Web
  • Index legal cases around the world in every language
  • Index all legal cases in the United States
  • Index all state courts in the United States
  • Index all federal court cases in the United States

Instead, Lex Machina chose a finite legal domain, patents, that has a finite vocabulary and range of data sources.

Working in that finite domain, Lex Machina has produced a high quality data product of interest to legal professionals and lay persons alike.
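To make the shape of such a pipeline concrete, here is a skeleton of the crawl, OCR, classify, index sequence the about page describes. This is my sketch, not Lex Machina's code; every function body is a placeholder:

```python
# Skeleton of a crawl -> OCR -> classify -> index pipeline. All bodies are
# placeholders standing in for real crawling, OCR, and NLP components.
def crawl_docket_events(source):
    """Fetch new docket entries and documents from a court data source."""
    return [{"court": source, "document": b"%PDF- placeholder bytes"}]

def ocr_to_text(document_bytes):
    """Convert a scanned document to searchable text (stand-in for real OCR)."""
    return "placeholder patent case text from %d bytes" % len(document_bytes)

def classify(text):
    """Stand-in for NLP classification and entity resolution."""
    return {"case_type": "patent" if "patent" in text else "unknown"}

def index(record, store):
    """Store the structured record for later search."""
    store.append(record)

store = []
for event in crawl_docket_events("district-court-sample"):
    text = ocr_to_text(event["document"])
    record = {"court": event["court"], "text": text, **classify(text)}
    index(record, store)

print(store)
```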

I intend to leave conquering world hunger, ignorance and poor color coordination of clothing to Bill Gates.

You?

I first saw this at Natural Language Processing in patent litigation: Lex Machina by Junling Hu.

January 20, 2013

Operation Asymptote – [PlainSite / Aaron Swartz]

Filed under: Government,Government Data,Law,Law - Sources,Legal Informatics,Uncategorized — Patrick Durusau @ 8:06 pm

Operation Asymptote

Operation Asymptote’s goal is to make U.S. federal court data freely available to everyone.

The data is available now, but free only up to $15 worth every quarter.

Serious legal research hits that limit pretty quickly.

The project does not cost you any money, only some of your time.

The result will be another source of data to hold the system accountable.

So, how real is your commitment to doing something effective in memory of Aaron Swartz?

January 13, 2013

U.S. GPO releases House bills in bulk XML

Filed under: Government Data,Law,Law - Sources — Patrick Durusau @ 8:15 pm

U.S. GPO releases House bills in bulk XML

Bills from the current Congress, available for bulk download in XML.

Users guide.

GPO press release.

Bulk House Bills Download.

Another bulk data source from the U.S. Congress.

Integration of the legislative sources will be non-trivial, but it has been done before, manually.
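For a first pass at one of the bulk files, something like the following works. It assumes you have already downloaded a bill XML file from GPO's bulk data site; the filename is hypothetical and the official-title element reflects the House bill markup as I understand it, so verify both against the files you actually get:

```python
# A minimal sketch for inspecting one bulk bill file. The filename is a
# hypothetical example; element names vary by bill type and Congress, so the
# tree is inspected generically first.
import xml.etree.ElementTree as ET

tree = ET.parse("BILLS-113hr1ih.xml")   # hypothetical local filename
root = tree.getroot()

# List the distinct element names so you can see what the markup provides.
print(sorted({elem.tag for elem in root.iter()}))

# Pull the official title if the file uses that element name.
title = root.find(".//official-title")
if title is not None:
    print("".join(title.itertext()).strip())
```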

What will be more interesting will be tracking the more complex interpersonal relationships that underlie the surface of legislative sources.

January 12, 2013

Introduction to the Legislative Process in the U.S. Congress

Filed under: Government,Law — Patrick Durusau @ 7:07 pm

Introduction to the Legislative Process in the U.S. Congress from Full Text Reports….

The report: Introduction to the Legislative Process in the U.S. Congress (PDF)

From the post:

This report introduces the main steps through which a bill (or other item of business) may travel in the legislative process, from introduction to committee and floor consideration to possible presidential consideration. However, the process by which a bill can become law is rarely predictable and can vary significantly from bill to bill. In fact, for many bills, the process will not follow the sequence of congressional stages that are often understood to make up the legislative process. This report presents a look at each of the common stages through which a bill may move, but complications and variations abound in practice.

Throughout, the report provides references to a variety of other CRS reports that focus on specific elements of congressional procedure. CRS also has many other reports not cited herein that address some procedural issues in additional detail (including congressional budget and appropriations processes). These reports are organized by subject matter on the Congressional Operations portion of the CRS webpage, a link to which is on the main CRS homepage, but can also be found at http://crs.gov/analysis/Pages/CongressionalOperations.aspx.

Congressional action on bills is typically planned and coordinated by party leaders in each chamber, though as described in this report, majority party leaders in the House have more tools with which to set the floor agenda than do majority party leaders in the Senate. In both chambers, much of the policy expertise resides in the standing committees, panels of Members who typically take the lead in developing and assessing proposed legislation within specified policy jurisdictions.

The report is most accurate as a guide to the explicit steps of the legislative process in the U.S. Congress.

But those explicit steps are only pale reflections of the social dynamics and self-interest that drive the inputs into the legislative process.

Transparency of the fiscal cliff legislation would have to start with the relationships between senators, lobbyists and vested interests long before the agreements on tax benefits in the summer of 2012.

And trace those relationships and interactions up to and through the inclusion of those benefits in the fiscal cliff legislation.

Publishing the formal steps in that process is like a magician’s redirection of your attention.

You end up looking at the wrong time and for the wrong information.

December 20, 2012

edX – Spring 2013

Filed under: CS Lectures,Law — Patrick Durusau @ 8:34 pm

edX – Spring 2013

Of particular interest:

This spring also features Harvard’s Copyright, taught by Harvard Law School professor William Fisher III, former law clerk to Justice Thurgood Marshall and expert on the hotly debated U.S. copyright system, which will explore the current law of copyright and the ongoing debates concerning how that law should be reformed. Copyright will be offered as an experimental course, taking advantage of different combinations and uses of teaching materials, educational technologies, and the edX platform. 500 learners will be selected through an open application process that will run through January 3rd 2013.

An opportunity to use a topic map with complex legal issues and sources.

But CS topics are not being neglected:

In addition to these new courses, edX is bringing back several courses from the popular fall 2012 semester: Introduction to Computer Science and Programming; Introduction to Solid State Chemistry; Introduction to Artificial Intelligence; Software as a Service I; Software as a Service II; Foundations of Computer Graphics.

November 3, 2012

2013 Federal Rules by LII Now Available on eLangdell

Filed under: Law,Law - Sources — Patrick Durusau @ 7:17 pm

2013 Federal Rules by LII Now Available on eLangdell by Sarah Glassmeyer.

From the post:

Once again, CALI is proud to partner with our friends at the Legal Information Institute to provide free ebooks of the Federal Rules of Civil Procedure, Federal Rules of Criminal Procedure and the Federal Rules of Evidence. The 2013 Editions (effective December 1, 2012) as well as the 2012 and 2011 editions can be found on the eLangdell Bookstore.

Our Federal Rules ebooks include:

  • The complete rules as of December 1, 2012 (for the 2013 edition).
  • All notes of the Advisory Committee following each rule.
  • Internal links to rules referenced within the rules.
  • External links to the LII website’s version of the US Code.

These rules are absolutely free for you to download, copy and use however you want. However, they aren’t free to make. If you’d like to donate some money to LII instead of paying money to commercial publishers, they’ve set up a donation page. A little money donated to LII goes a long way towards making the law free and accessible to all.

Legal materials are a rich area for development of semantic tools. Decades of research and development by legal publishers set a high mark for something new and useful.

If you are interested in U.S. Federal Procedure, this is your starting point.

The Federal Rules of Civil Procedure are a good example of defining process without vagueness, confusion and contradiction. (Supply your own examples of where the contrary is the case.)

October 23, 2012

Jurimetrics (Modern Uses of Logic in Law (MULL))

Filed under: Law,Legal Informatics,Logic,Semantics — Patrick Durusau @ 10:48 am

Jurimetrics (Modern Uses of Logic in Law (MULL))

From the about page:

Jurimetrics, The Journal of Law, Science, and Technology (ISSN 0897-1277), published quarterly, is the journal of the American Bar Association Section of Science & Technology Law and the Center for Law, Science & Innovation. Click here to view the online version of Jurimetrics.

Jurimetrics is a forum for the publication and exchange of ideas and information about the relationships between law, science and technology in all areas, including:

  • Physical, life and social sciences
  • Engineering, aerospace, communications and computers
  • Logic, mathematics and quantitative methods
  • The uses of science and technology in law practice, adjudication and court and agency administration
  • Policy implications and legislative and administrative control of science and technology.

Jurimetrics was first published in 1959 under the leadership of Layman Allen as Modern Uses of Logic in Law (MULL). The current name was adopted in 1966. Jurimetrics is the oldest journal of law and science in the United States, and it enjoys a circulation of more than 8,000, which includes all members of the ABA Section of Science & Technology Law.

I just mentioned this journal in Wyner et al.: An Empirical Approach to the Semantic Representation of Laws, but wanted to also capture its earlier title, Modern Uses of Logic in Law (MULL), because I am likely to search for it as well.

I haven’t looked at the early issues in some years but as I recall, they were quite interesting.

Wyner et al.: An Empirical Approach to the Semantic Representation of Laws

Filed under: Language,Law,Legal Informatics,Machine Learning,Semantics — Patrick Durusau @ 10:37 am

Wyner et al.: An Empirical Approach to the Semantic Representation of Laws

Legal Informatics brings news of Dr. Adam Wyner’s paper, An Empirical Approach to the Semantic Representation of Laws, and quotes the abstract as:

To make legal texts machine processable, the texts may be represented as linked documents, semantically tagged text, or translated to formal representations that can be automatically reasoned with. The paper considers the latter, which is key to testing consistency of laws, drawing inferences, and providing explanations relative to input. To translate laws to a form that can be reasoned with by a computer, sentences must be parsed and formally represented. The paper presents the state-of-the-art in automatic translation of law to a machine readable formal representation, provides corpora, outlines some key problems, and proposes tasks to address the problems.

The paper originated at Project IMPACT.

If you haven’t looked at semantics and the law recently, this is a good opportunity to catch up.

I have only skimmed the paper and its references but am already looking for online access to early issues of Jurimetrics (a journal by the American Bar Association) that addressed such issues many years ago.

Should be fun to see what has changed and by how much. What issues remain and how they are viewed today.
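As a reminder of what "a form that can be reasoned with" means in practice, here is a toy example of one statutory-style rule encoded as data plus a checker. It is my illustration, not the representation Wyner et al. propose:

```python
# A toy machine-readable rule and a checker that applies it to a set of facts.
# The rule and predicates are invented for illustration.
rule = {
    "id": "eligibility-rule-1",            # hypothetical rule
    "if": [("age_at_least", 18), ("is_resident", True)],
    "then": "may_register_to_vote",
}

def applies(rule, facts):
    """Return True when every condition of the rule holds in the facts."""
    checks = {
        "age_at_least": lambda v: facts.get("age", 0) >= v,
        "is_resident": lambda v: facts.get("resident", False) == v,
    }
    return all(checks[name](value) for name, value in rule["if"])

facts = {"age": 19, "resident": True}
if applies(rule, facts):
    print(facts, "=>", rule["then"])
```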

September 29, 2012

On Legislative Collaboration and Version Control

Filed under: Law,Law - Sources — Patrick Durusau @ 4:33 pm

On Legislative Collaboration and Version Control

John Wonderlich of the Sunlight Foundation writes:

We often are confronted with the idea of legislation being written and tracked online through new tools, whether it’s Clay Shirky’s recent TED talk, or a long, long list of experiments and pilot projects (including Sunlight’s PublicMarkup.org and Rep. Issa’s MADISON) designed to give citizens a new view and voice in the production of legislation.

Proponents of applying version control systems to law have a powerful vision: a bill or law, with its history laid bare and its sections precisely broken out, and real names attached prominently to each one. Why shouldn’t we able to have that? And since version control systems are helpful to the point of absolute necessity in any collaborative software effort, why wouldn’t Congress employ such an approach?

When people first happen upon this idea, their reaction tends to fall into two camps, which I’ll refer to as triumphalist and dismissive.

John’s and the Sunlight Foundation’s view that legislative history of acts of Congress is a form of transparency is the view taught to high school civics classes. And about as naive as it comes.

True enough, there are extensive legislative histories for every act passed by Congress. That has very little to do with how laws come to be written, by who and for whose interests.

Say, for example, a lobbyist who has contributed to a Senator's campaign is concerned with the rules for visas for computer engineers. He/she visits the Senator and just happens to have a draft of amendments, created by a well-known Washington law firm, that addresses their needs. That document is studied by the Senator's staff.

Lo and behold, similar language appears in a bill introduced by the Senator. (Or as an amendment to some other bill.)

The Senator will even say that he is sponsoring the legislation to further the interests of those “job creators” in the high tech industry. What gets left out is the access to the Senator by the lobbyist and the assistance in bringing that legislation to the fore.

Indulging governments in their illusions of transparency is the surest way to avoid meaningful transparency.

Now you have to ask yourself, who has an interest in avoiding meaningful transparency?

I first saw this at Legal Informatics (which has other links that will interest you).

September 26, 2012

BBC’s Radio 4 on Vagueness in Law

Filed under: Law,Vagueness — Patrick Durusau @ 3:36 pm

BBC’s Radio 4 on Vagueness in Law by Adam Wyner.

From the post:

On the BBC Radio 4 Analysis program, there was an episode about the Sorites Paradoxes. These are the sorts of paradoxes that arise about categories that have no sharp boundaries:

One grain of sand is not a heap of sand; two grains of sand are not a heap of sand; …. ; adding one more grain of sand to some sand is not enough to make a heap of sand; yet, at some point, we agree we have a heap of sand.

So, where are the boundaries?

How would you distinguish “lap dancing” from “dancing?”

Highly entertaining! Will look for other relevant episodes.

September 23, 2012

Congress.gov: New Official Source of U.S. Federal Legislative Information

Filed under: Government,Government Data,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 7:50 pm

Congress.gov: New Official Source of U.S. Federal Legislative Information

Legal Informatics has gathered up links to a number of reviews/comments on the new legislative interface for the U.S. federal government.

You can see the beta version at: Congress.gov.

Personally I like search and popularity being front and center, but that makes me wonder what isn’t available. Like bulk downloads in some reasonable format (can you say XML?).

What do you think about the interface?

The Cost of Strict Global Consistency [Or Rules for Eventual Consistency]

Filed under: Consistency,Database,Finance Services,Law,Law - Sources — Patrick Durusau @ 10:15 am

What if all transactions required strict global consistency? by Matthew Aslett.

Matthew quotes Basho CTO Justin Sheehy on eventual consistency and traditional accounting:

“Traditional accounting is done in an eventually-consistent way and if you send me a payment from your bank to mine then that transaction will be resolved in an eventually consistent way. That is, your bank account and mine will not have a jointly-atomic change in value, but instead yours will have a debit and mine will have a credit, each of which will be applied to our respective accounts.”

And Matthew comments:

The suggestion that bank transactions are not immediately consistent appears counter-intuitive. Comparing what happens in a transaction with a jointly atomic change in value, like buying a house, with what happens in normal transactions, like buying your groceries, we can see that for normal transactions this statement is true.

We don’t need to wait for the funds to be transferred from our accounts to a retailer before we can walk out the store. If we did we’d all waste a lot of time waiting around.

This highlights a couple of things that are true for both database transactions and financial transactions:

  • that eventual consistency doesn’t mean a lack of consistency
  • that different transactions have different consistency requirements
  • that if all transactions required strict global consistency we’d spend a lot of time waiting for those transactions to complete.

All of which is very true but misses an important point about financial transactions.

Financial transactions (involving banks, etc.) are eventually consistent according to the same rules.

That’s no accident. It didn’t just happen that banks adopted ad hoc rules that resulted in a uniform eventual consistency.

It didn’t happen overnight, but the current set of rules for “uniform eventual consistency” of banking transactions is spelled out by the Uniform Commercial Code. (And other laws and regulations, but the UCC is a major part of it.)

Dare we say a uniform semantic for financial transactions was hammered out without the use of formal ontologies or web addresses? And that it supports billions of transactions on a daily basis? To become eventually consistent?

Think about the transparency (to you) of your next credit card transaction. Standards and eventual consistency make that possible.
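For the programmers in the audience, here is a toy model of the debit/credit pattern Sheehy describes: the two postings are applied independently, and the books balance only after both have been processed:

```python
# A toy model of eventually consistent accounting: a transfer is recorded as
# two independent postings rather than one atomic update.
from collections import deque

accounts = {"your_bank": 100, "my_bank": 0}
pending = deque()

def transfer(amount, src, dst):
    """Record a debit and a credit instead of one atomic joint update."""
    pending.append(("debit", src, amount))
    pending.append(("credit", dst, amount))

def settle_one():
    """Apply a single posting; between calls the totals temporarily disagree."""
    kind, account, amount = pending.popleft()
    accounts[account] += amount if kind == "credit" else -amount

transfer(25, "your_bank", "my_bank")
settle_one()
print(accounts)   # debit applied, credit still pending
settle_one()
print(accounts)   # now eventually consistent: the totals balance again
```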

September 16, 2012

Supreme Court Database–Updated [US]

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 1:30 pm

Supreme Court Database–Updated

Michael Heise writes:

An exceptionally helpful source of data for those interested in US Supreme Court decisions was recently updated to include data from OT2011. The Supreme Court Database (2012 release, v.01, here) “contains over two hundred pieces of information about each case decided by the Court between the 19[46] and 20[11] terms. Examples include the identity of the court whose decision the Supreme Court reviewed, the parties to the suit, the legal provisions considered in the case, and the votes of the Justices.” An online codebook for this leading compilation of Supreme Court decisions (particularly for political scientists) can be found here.

The Supreme Court Database site offers the dataset, tools for analysis, and training materials to help you with both.

Very useful for combining with other data and analysis, ranging from political science and history to more traditional legal approaches.
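If you want to load the data yourself, a minimal sketch with pandas follows. The filename is hypothetical (check the site for the current release) and the "term" column name follows the online codebook, so verify both against the file you actually download:

```python
# A minimal sketch, assuming a downloaded SCDB case-centered CSV release.
# Filename and column names are assumptions to verify against the codebook.
import pandas as pd

df = pd.read_csv("SCDB_2012_01_caseCentered_Citation.csv", encoding="latin-1")

# Cases decided per term, most recent terms last.
print(df.groupby("term").size().tail(10))
```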

September 10, 2012

Sunlight Academy (Finding US Government Data)

Filed under: Government,Government Data,Law,Law - Sources — Patrick Durusau @ 4:05 pm

Sunlight Academy

From the website:

Welcome to Sunlight Academy, a collection of interactive tutorials for journalists, activists, researchers and students to learn about tools by the Sunlight Foundation and others to unlock government data.

Be sure to create a profile to access our curriculum, track your progress, watch videos, complete training activities and get updates on new tutorials and tools.

Whether you are an investigative journalist trying to get insight on a complex data set, an activist uncovering the hidden influence behind your issue, or a congressional staffer in need of mastering legislative data, Sunlight Academy guides you through how to make our tools work for you. Let’s get started!

The Sunlight Foundation has created tools to make government data more accessible.

Unlike some governments and software projects, the Sunlight Foundation business model isn’t based on poor or non-existent documentation.

Modules (as of 2012 September 10):

  • Tracking Government
    • Scout Scout is a legislative and governmental tracking tool from the Sunlight Foundation that alerts you when Congress or your state capitol talks about or takes action on issues you care about. Learn how to search and create alerts on federal and state legislation, regulations and the Congressional Record.
    • Scout (Webinar) Recorded webinar and demo of Scout from July 26, 2012. The session covered basic skills such as search terms and bill queries, as well as advanced functions such as tagging, merging outside RSS feeds and creating curated search collections.
  • Unlocking Data
    • Political Ad Sleuth Frustrated by political ads inundating your TV? Learn how you can discover who is funding these ads from the public files at your local television station through this tutorial.
    • Unlocking APIs What are APIs and how do they deliver government data? This tutorial provides an introduction to using APIs and highlights what Sunlight’s APIs have to offer on legislative and congressional data.
  • Lobbying
    • Lobbying Contribution Reports These reports highlight the millions of dollars that lobbying entities spend every year giving to charities in honor of lawmakers and executive branch officials, technically referred to as “honorary fees.” Find out how to investigate lobbying contribution reports, understand the rules behind them and see what you can do with the findings.
    • Lobbying Registration Tracker Learn about the Lobbying Registration Tracker, a Sunlight Foundation tool that allows you to track new registrations for federal lobbyists and lobbying firms. This database allows users to view registrations as they’re submitted, browse by issue, registrant or client, and see the trends in issues and registrations over the last 12 months.
    • Lobbying Report Form Four times a year, groups that lobby Congress and the federal government file reports on their activities. Unlock the important information contained in the quarterly lobbying reports to keep track of who’s influencing whom in Washington. Learn tips on how to read the reports and how they can inform your reporting.
  • Data Analysis
    • Data Visualizations in Google Docs While Google is often used for internet searches and maps, it can also help with data visualizations via Google Charts. Learn how to use Google Docs to generate interactive charts in this training.
    • Mapping Campaign Finance Data Campaign finance data can be complex and confusing — for reporters and for readers. But it doesn’t have to be. One way to make sense of it all is through mapping. Learn how to turn campaign finance information into beautiful maps, all through free tools.
    • Pivot Tables Pivot tables are powerful tools, but it’s not always obvious how to use them. Learn how to create and use pivot tables in Excel to aggregate and summarize data that otherwise would require a database.
  • Research Tools
    • Advanced Google Searches Google has made search easy and effective, but that doesn’t mean it can’t be better. Learn how to effectively use Google’s Advanced Search operators so you can get what you’re looking for without wasting time on irrelevant results.
    • Follow the Unlimited Money (webinar) Recorded webinar from August 8, 2012. This webinar covered tools to follow the millions of dollars being spent this election year by super PACs and other outside groups.
    • Learning about Data.gov Data.gov seeks to organize all of the U.S. government’s data, a daunting and unfinished task. In this module, learn about the powers and limitations of Data.gov, and what other resources to use to fill in Data.gov’s gaps.

Researching Current Federal Legislation and Regulations:…

Filed under: Government,Government Data,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 3:30 pm

Researching Current Federal Legislation and Regulations: A Guide to Resources for Congressional Staff

Description quoted at Full Text Reports:

This report is designed to introduce congressional staff to selected governmental and nongovernmental sources that are useful in tracking and obtaining information on federal legislation and regulations. It includes governmental sources such as the Legislative Information System (LIS), THOMAS, the Government Printing Office’s Federal Digital System (FDsys), and U.S. Senate and House websites. Nongovernmental or commercial sources include resources such as HeinOnline and the Congressional Quarterly (CQ) websites. It also highlights classes offered by the Congressional Research Service (CRS) and the Library of Congress Law Library.

This report will be updated as new information is available.

Direct link to PDF: Researching Current Federal Legislation and Regulations: A Guide to Resources for Congressional Staff

A very useful starting point for research on U.S. federal legislation and regulations, but only a starting point.

Each listed resource merits a user’s guide. And no two of them are exactly the same.

Suggestions for research/topic map exercises based on this listing of resources?

August 26, 2012

Linked Legal Data: A SKOS Vocabulary for the Code of Federal Regulations

Filed under: Law,Law - Sources,Linked Data,SKOS — Patrick Durusau @ 1:17 pm

Linked Legal Data: A SKOS Vocabulary for the Code of Federal Regulations by Núria Casellas.

Abstract:

This paper describes the application of Semantic Web and Linked Data techniques and principles to regulatory information for the development of a SKOS vocabulary for the Code of Federal Regulations (in particular of Title 21, Food and Drugs). The Code of Federal Regulations is the codification of the general and permanent enacted rules generated by executive departments and agencies of the Federal Government of the United States, a regulatory corpus of large size, varied subject-matter and structural complexity. The CFR SKOS vocabulary is developed using a bottom-up approach for the extraction of terminology from text based on a combination of syntactic analysis and lexico-syntactic pattern matching. Although the preliminary results are promising, several issues (a method for hierarchy cycle control, expert evaluation and control support, named entity reduction, and adjective and prepositional modifier trimming) require improvement and revision before it can be implemented for search and retrieval enhancement of regulatory materials published by the Legal Information Institute. The vocabulary is part of a larger Linked Legal Data project, that aims at using Semantic Web technologies for the representation and management of legal data.

The paper considers the use of nonregulatory vocabularies and the conversion of existing indexing materials, and finally settles on NLP processing of the text.

Granting that Title 21, Food and Drugs is no walk in the park, take a peek at the regulations for Title 26, Internal Revenue Code. 😉

A difficulty that I didn’t see mentioned is the changing semantics in statutory law and regulations.

The definition of “person,” for example, varies widely depending upon where it appears. Both chronologically and synchronically.

Moreover, if I have a nonregulatory vocabulary and/or CFR indexes, why shouldn’t that map to the CFR SKOS vocabulary?

I may not have the “correct” index but the one I prefer to use. Shouldn’t that be enabled?
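To make that mapping question concrete, here is a minimal sketch in rdflib of saying "my preferred index term corresponds to this CFR concept." The URIs are placeholders, not identifiers from the actual LII vocabulary:

```python
# A minimal sketch of mapping a preferred index term onto a CFR SKOS concept.
# All URIs below are placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/my-index/")
CFR = Namespace("http://example.org/cfr-vocab/")

g = Graph()
g.bind("skos", SKOS)

my_term = EX["adulterated-drugs"]
cfr_concept = CFR["title21-adulteration"]

g.add((my_term, SKOS.prefLabel, Literal("Adulterated drugs", lang="en")))
g.add((my_term, SKOS.closeMatch, cfr_concept))   # map my index onto the CFR vocabulary

print(g.serialize(format="turtle"))
```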

I first saw this at Legal Informatics.

August 11, 2012

Yu and Robinson on The Ambiguity of “Open Government”

Filed under: Ambiguity,Government,Law,Open Government — Patrick Durusau @ 8:14 pm

Yu and Robinson on The Ambiguity of “Open Government”

Legal Informatics calls our attention to the use of ambiguity to blunt, at least in one view, the potency of the phrase “open government.”

Whatever your politics, it is a reminder that for good or ill, semantics originate with us.

Topic maps are one tool to map those semantics, to remove (or enhance) ambiguity.

Lima on Visualization and Legislative Memory of the Brazilian Civil Code

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 6:28 pm

Lima on Visualization and Legislative Memory of the Brazilian Civil Code

Legal Informatics reports the publication of the legislative history of the Brazilian Civil Code and a visualization of the Brazilian Civil Code.

Tying in Planiol’s Treatise on Civil Law (or other commentators) to such resources would make a nice showcase for topic maps.

August 8, 2012

GitLaw in Germany

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 1:51 pm

GitLaw in Germany: Deutsche Bundesgesetze- und verordnungen im Markdown auf GitHub = German Federal Laws and Regulations in Markdown on GitHub

Legal Informatics reports that German Federal Laws and Regulations are available in Markdown.

A useful resource if you have legal resources to make good use of it.

I would not advise self-help based on a Google translation of any of these materials.

August 1, 2012

Updated: Lists of Legal Metadata and Legal Knowledge Representation Resources

Filed under: Law,Legal Informatics — Patrick Durusau @ 7:33 pm

Updated: Lists of Legal Metadata and Legal Knowledge Representation Resources

Updated resource lists for anyone interested in legal informatics.

July 26, 2012

Law Libraries, Government Transparency, and the Internet

Filed under: Government,Law,Library — Patrick Durusau @ 9:35 am

Law Libraries, Government Transparency, and the Internet by Daniel Schuman.

From the post:

This past weekend I was fortunate to attend the American Association of Law Libraries 105th annual conference. On Sunday morning, I gave a presentation to a special interest section entitled “Law Libraries, Government Transparency, and the Internet,” where I discussed the important role that law libraries can play in making the government more open and transparent.

The slides illustrate the range of legal material that is becoming available, material which is by definition difficult for the lay reader to access.

I see an important role for law libraries as curators who create access points for both professional as well as lay researchers.

I first saw this at Legal Informatics.
