Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

June 29, 2012

Bruce: How Well Does Current Legislative Identifier Practice Measure Up?

Filed under: Identifiers,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 3:15 pm

Bruce: How Well Does Current Legislative Identifier Practice Measure Up?

From Legal Informatics:

Tom Bruce of the Legal Information Institute at Cornell University Law School (LII) has posted Identifiers, Part 3: How Well Does Current Practice Measure Up?, on LII’s new legislative metadata blog, Making Metasausage.

In this post, Tom surveys legislative identifier systems currently in use. He recommends the use of URIs for legislative identifiers, rather than URLs or URNs.

He cites favorably the URI-based identifier system that John Sheridan and Dr. Jeni Tennison developed for the Legislation.gov.uk system. Tom praises Sheridan’s (here) and Tennison’s (here and here) writings on legislative URIs and Linked Data.

Tom also praises the URI system implemented by Dr. Rinke Hoekstra in the Leibniz Center for Law‘s Metalex Document Server for facilitating point-in-time as well as point-in-process identification of legislation.

Tom concludes by making a series of recommendations for a legislative identifier system:

See the post for his recommendations (in case you are working on such a system) and for other links.

I would point out that existing legislation already has identifiers, acquired before it receives the “better” identifiers specified here.

And those “old” identifiers will have been incorporated into other texts, legal decisions and the like.

Oh.

We can’t re-write existing identifiers, so it’s a good thing topic maps accept subjects having identifiers, plural.
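The idea is easy to sketch. Below is a minimal illustration (plain Python, not any actual topic map library) of subjects carrying plural identifiers, where two records merge when any identifier overlaps and all identifiers survive the merge:

```python
# A minimal sketch (plain Python, not an actual topic map library) of
# subjects carrying plural identifiers: two records merge when any
# identifier overlaps, and every identifier survives the merge.

def merge_subjects(subjects):
    """Merge subject records that share at least one identifier."""
    merged = []
    for subject in subjects:
        ids = set(subject["identifiers"])
        for existing in merged:
            if ids & set(existing["identifiers"]):
                existing["identifiers"] = sorted(set(existing["identifiers"]) | ids)
                existing["names"] = sorted(set(existing["names"]) | set(subject["names"]))
                break
        else:
            merged.append({"identifiers": sorted(ids),
                           "names": sorted(set(subject["names"]))})
    return merged

# The same statute known by an old citation and a newer identifier:
records = [
    {"identifiers": ["42 U.S.C. 1983"],
     "names": ["Civil action for deprivation of rights"]},
    {"identifiers": ["42 U.S.C. 1983", "/us/usc/t42/s1983"],
     "names": ["Section 1983"]},
]
print(merge_subjects(records))  # one subject, both identifiers kept
```

The point is that nothing forces a choice between the “old” and the “better” identifier: both remain attached to the same subject.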

June 27, 2012

An API for European Union legislation

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 1:51 pm

An API for European Union legislation

From the webpage:

The API can help you conduct research, create data visualizations or you can even build applications upon it.

This is an application programming interface (API) that opens up core EU legislative data for further use. The interface uses JSON, meaning that you have easy-to-use, machine-readable access to metadata on European Union legislation. It will be useful if you want to use or analyze European Union legislative data in a way that the official databases are not originally built for. The API extracts, organizes and connects data from various official sources.

Among other things we have used the data to conduct research on the decision-making time*, analyze voting patterns*, measure the activity of Commissioners* and visualize the legislative integration process over time*, but you can use the API as you want to. When you use it to create something useful or interesting be sure to let us know, if you want to we can post a link to your project from this site.

For some non-apparent reason, the last paragraph has hyperlinks for the “*” characters. So that is not a typo, that is how it appears in the original text.

There are a large number of relationships captured by the data accessible through this API. The sort of relationships that topic maps excel at handling.
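As a sketch of what working with such JSON metadata might look like: the field names below are invented for illustration and are not the actual schema of this API.

```python
import json
from datetime import date

# The field names below are invented for illustration; they are NOT the
# actual schema of the EU legislation API.
sample = json.loads("""
{
  "celex": "32006R1907",
  "title": "REACH Regulation",
  "procedure": "COD/2003/0256",
  "proposed": "2003-10-29",
  "adopted": "2006-12-18"
}
""")

def decision_time_days(record):
    """Days from proposal to adoption: the 'decision-making time' the
    site's own research is about."""
    proposed = date.fromisoformat(record["proposed"])
    adopted = date.fromisoformat(record["adopted"])
    return (adopted - proposed).days

print(sample["title"], decision_time_days(sample))
```

Once the metadata is machine-readable like this, the analyses the site mentions (decision-making time, voting patterns, activity over time) are simple aggregations.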

I first saw this at: DZone: An API for European Union legislation

June 24, 2012

Report of Second Phase of Seventh Circuit eDiscovery Pilot Program

Filed under: Law,Legal Informatics — Patrick Durusau @ 3:42 pm

Report of Second Phase of Seventh Circuit eDiscovery Pilot Program Published

From Legal Informatics:

The Seventh Circuit Electronic Discovery Pilot Program has published its Final Report on Phase Two, May 2010 to May 2012 (very large PDF file).

A principal purpose of the program is to determine the effects of the use of Principles Relating to the Discovery of Electronically Stored Information in litigation in the Circuit.

The report describes the results of surveys of lawyers who participated in efiling in the Seventh Circuit, and of judges and lawyers who participated in trials in which the Circuit’s Principles Relating to the Discovery of Electronically Stored Information were applied.

True enough, the report is “a very large PDF file”: 969 pages and 111.5 MB. Don’t try downloading it while you are on the road, unless you are in South Korea or Japan.

I don’t have the time today, but the report isn’t substantively 969 pages long: pages of names and addresses, committee minutes, presentations, filler of various kinds. If you find it in a format other than PDF, I might be interested in generating a shorter version that might be of more interest.

Bottom line was that cooperation in discovery as it relates to electronically stored information reduces costs and yet maintains standards for representation.

Topic maps can play an important role not only in eDiscovery but in relating information together, whatever its original form.

True enough, there are services that perform those functions now, but have you ever taken one of their work products and merged it with another?

By habit or chance, the terms used may be close enough to provide a useful result, but how do you verify the results?

June 23, 2012

Fastcase Introduces e-Books, Beginning with Advance Sheets

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 4:07 pm

Fastcase Introduces e-Books, Beginning with Advance Sheets

From the post:

According to the Fastcase blog post, Fastcase advance sheets will be available “for each state, federal circuit, and U.S. Supreme Court”; will be free of charge and “licensed under [a] Creative Commons BY-SA license“; and will include summaries. Each e-Book Advance Sheet will contain “one month’s judicial opinions (designated as published and unpublished) for specific states or courts.”

According to Sean Doherty’s post, future Fastcase e-Books will include “e-book case reporters with official pagination and links” into the Fastcase database, as well as “topical reporters” on U.S. law, covering fields such as securities law and antitrust law.

According to the Fastcase blog post, Fastcase’s approach to e-Books is inspired in part by CALI‘s Free Law Reporter, which makes case law available as e-Books in EPUB format.

For details, see the links in the post at Legal Informatics.

I mention it because not only could you have “topical reporters” but also information products tied to even narrower areas of case law.

Such as litigation that a firm has pending, or very narrow areas of liability (for example) of interest to a particular client. Granted, there are “case watch” resources in every trade zine, but hardly detailed enough to do more than “excite the base,” as they say.

With curated content from a topic map application, rather than “exciting the base,” you could be sharpening the legal resources you can whistle up on behalf of your client. Increasing their appreciation of, and continued interest in, representation by you.

June 12, 2012

How much of commonsense and legal reasoning is formalizable? A review of conceptual obstacles

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 2:19 pm

How much of commonsense and legal reasoning is formalizable? A review of conceptual obstacles by James Franklin.

Abstract:

Fifty years of effort in artificial intelligence (AI) and the formalization of legal reasoning have produced both successes and failures. Considerable success in organizing and displaying evidence and its interrelationships has been accompanied by failure to achieve the original ambition of AI as applied to law: fully automated legal decision-making. The obstacles to formalizing legal reasoning have proved to be the same ones that make the formalization of commonsense reasoning so difficult, and are most evident where legal reasoning has to meld with the vast web of ordinary human knowledge of the world. Underlying many of the problems is the mismatch between the discreteness of symbol manipulation and the continuous nature of imprecise natural language, of degrees of similarity and analogy, and of probabilities.

I haven’t (yet) been able to access a copy of this article.

From the abstract,

….mismatch between the discreteness of symbol manipulation and the continuous nature of imprecise natural language, of degrees of similarity and analogy, and of probabilities.

I suspect it will be a useful reminder of the boundaries of formal information systems.

I first saw this at Legal Informatics: Franklin: How Much of Legal Reasoning Is Formalizable?

May 27, 2012

SPLeT 2012: Workshop on Semantic Processing of Legal Texts

Filed under: Legal Informatics — Patrick Durusau @ 10:27 am

SPLeT 2012: Workshop on Semantic Processing of Legal Texts

Legal Informatics has a listing of the papers from the SPLeT 2012 workshop.

You know, with an acronym like that, you wonder why you missed it in the past. 😉

In case you did miss it in the past:

SPLeT 2010 Proceedings at Legal Informatics

SPLeT 2008 Workshop on Semantic Processing of Legal Texts.

Selected papers from SPLeT 2008 were expanded into: Semantic Processing of Legal Texts: Where the Language of Law Meets the Law of Language, edited by Enrico Francesconi, Simonetta Montemagni, Wim Peters and Daniela Tiscornia.

Sixty-three (63) pages (2008 proceedings) versus two hundred forty-nine (249) for the Springer title. I don’t have the printed volume so can’t comment on the value of the expansion. (Even “used” the paperback is > $50.00 US. I would borrow a copy before ordering.)

Assuming meetings every two years, SPLeT 2006 should have been the first workshop. That workshop apparently did not co-locate with LREC. A pointer to the workshop and its proceedings, if possible, would be appreciated.

May 25, 2012

Bruce on Legislative Identifier Granularity

Filed under: Identifiers,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 10:23 am

Bruce on Legislative Identifier Granularity

From the post:

In this post, Tom [Bruce] explores legislative identifier granularity, or the level of specificity at which such an identifier functions. The post discusses related issues such as the incorporation of semantics in identifiers; the use of “pure” (semantics-free) legislative identifiers; and how government agency authority and procedural rules influence the use, “persistence, and uniqueness” of identifiers. The latter discussion leads Tom to conclude that

a “gold standard” system of identifiers, specified and assigned by a relatively independent body, is needed at the core. That gold standard can then be extended via known, stable relationships with existing identifier systems, and designed for extensible use by others outside the immediate legislative community.

Interesting and useful reading.

Even though a “gold standard” of identifiers for something as dynamic as legislation isn’t likely.

Or rather, isn’t going to happen.

There are too many stakeholders in present systems for any proposal to carry the day.

Not to mention decades, if not centuries, of references in other systems.

May 20, 2012

…Commenting on Legislation and Court Decisions

Filed under: Annotation,Law,Legal Informatics — Patrick Durusau @ 6:16 pm

Anderson Releases Prototype System Enabling Citizens to Comment on Legislation and Court Decisions

Legal Informatics brings news that:

Kerry Anderson of the African Legal Information Institute (AfricanLII) has released a prototype of a new software system enabling citizens to comment on legislation, regulations, and court decisions.

There are several initiatives like this one, which is encouraging from the perspective of crowd-sourcing data for annotation.

May 19, 2012

Hands-on examples of legal search

Filed under: e-Discovery,Law,Legal Informatics,Searching — Patrick Durusau @ 7:04 pm

Hands-on examples of legal search by Michael J. Bommarito II.

From the post:

I wanted to share with the group some of my recent work on search in the legal space. I have been developing products and service models, but I thought many of the experiences or guides could be useful to you. I would love to share some of this work to help foster a “hacker” community in which we might collaborate on projects.

The first few posts are based on Amazon’s CloudSearch service. CloudSearch, as the name suggests, is a “cloud-based” search service. Once you decide what and how you would like to search, Amazon handles procuring the underlying infrastructure, scaling to required capacity, stemming, stop-wording, building indices, etc. For those of you who do not have access to “search appliances” or labor to configure products like Solr, this offers an excellent opportunity.
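To make the stemming, stop-wording, and index-building that such services handle for you concrete, here is a toy inverted index in Python. None of this is CloudSearch’s API; it only illustrates the concepts the quote names.

```python
# A toy illustration of what a search service does behind the scenes:
# stop-wording, crude suffix stemming, and an inverted index.
# (CloudSearch itself is configured through AWS, not built by hand;
# this just makes the concepts concrete.)

STOP = {"the", "of", "a", "an", "in", "to", "and", "is", "was"}

def stem(word):
    # Naive suffix stripping, nothing like a real stemmer.
    for suffix in ("ing", "ions", "ion", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def index(docs):
    inverted = {}
    for doc_id, text in docs.items():
        for word in text.lower().split():
            word = stem(word.strip(".,;"))
            if word and word not in STOP:
                inverted.setdefault(word, set()).add(doc_id)
    return inverted

docs = {
    "scotus-1": "The court held the search was unreasonable.",
    "scotus-2": "Searching incident to arrest is reasonable.",
}
idx = index(docs)
print(idx["search"])  # both opinions, despite "search" vs. "Searching"
```

Stemming is what lets a query for “search” find an opinion that only says “searching”; the managed service does the same kind of normalization at scale.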

Pointers to several posts by Michael that range from searching U.S. Supreme Court decisions and email archives to statutory law.

From law to eDiscovery, something for everybody!

May 15, 2012

Electronic Discovery Institute

Filed under: Law,Legal Informatics — Patrick Durusau @ 2:03 pm

Electronic Discovery Institute

From the home page:

The Electronic Discovery Institute is a non-profit organization dedicated to resolving electronic discovery challenges by conducting studies of litigation processes that incorporate modern technologies. The explosion in volume of electronically stored information and the complexity of its discovery overwhelms the litigation process and the justice system. Technology and efficient processes can ease the impact of electronic discovery.

The Institute operates under the guidance of an independent Board of Diplomats comprised of judges, lawyers and technical experts. The Institute’s studies will measure the relative merits of new discovery technologies and methods. The results of the Institute’s studies will be shared with the public free of charge. In order to obtain our free publications, you must create a free log-in with a legitimate user profile. We do not sell your information. Please visit our sponsors – as they provide altruistic support to our organization.

I encountered the Electronic Discovery Institute while researching information on electronic discovery. Since law was and still is an interest of mine, I wanted to record it here.

The area of e-discovery is under rapid development, in terms of the rules that govern it, the technology it employs, and its practice in real-world situations with consequences for the players.

I commend this site/organization to anyone interested in e-discovery issues.

May 9, 2012

Crowdsourced Legal Case Annotation

Filed under: Annotation,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 12:38 pm

Crowdsourced Legal Case Annotation

From the post:

This is an academic research study on legal informatics (information processing of the law). The study uses an online, collaborative tool to crowdsource the annotation of legal cases. The task is similar to legal professionals’ annotation of cases. The result will be a public corpus of searchable, richly annotated legal cases that can be further processed, analysed, or queried for conceptual annotations.

Adam and Wim are computer scientists who are interested in language, law, and the Internet.

We are inviting people to participate in this collaborative task. This is a beta version of the exercise, and we welcome comments on how to improve it. Please read through this blog post, look at the video, and get in contact.

Non-trivial annotation of complex source documents.

What you do with the annotations, such as create topic maps, etc. would be a separate step.

The early evidence for the enhancement of our own work, based on the work of others (Picking the Brains of Strangers…), should make this approach even more exciting.

PS: I saw this at Legal Informatics but wanted to point you directly to the source article.
Just musing for a moment but what if the conclusion on collaboration and access is that by restricting access we impoverish not only others, but ourselves as well?

Bruce on the Functions of Legislative Identifiers

Filed under: Identifiers,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 12:06 pm

Bruce on the Functions of Legislative Identifiers

From Legal Informatics:

In this post, Tom [Bruce] discusses the multiple functions that legislative document identifiers serve. These include “unique naming,” “navigational reference,” “retrieval hook / container label,” “thread tag / associative marker,” “process milestone,” and several more.

A promised second post will examine issues of identifier design.

Enjoy and pass along!

May 8, 2012

@Zotero 4 Law and OpenCongress.org

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 3:39 pm

@Zotero 4 Law and OpenCongress.org

I don’t suppose one more legal resource from Legal Informatics for today will hurt anything. 😉

A post on MLZ (Multilingual Zotero), a legal research and citation processor. Operates as a plugin to Firefox.

Even if you don’t visit the original post, do watch the video on using MLZ. Not slick but you will see the potential that it offers.

It should also give you some ideas about user friendly interfaces and custom topic map applications.

Mill: US Code Citation Extraction Library in JavaScript, with Node API

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 10:51 am

Mill: US Code Citation Extraction Library in JavaScript, with Node API

Legal Informatics brings news of new scripts by Eric Mill of Sunlight Labs to extract US Code citations in texts.

Legal citations being a popular means of identifying laws, these scripts would be of interest for law-related topic maps.
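The idea behind a citation extractor is easy to sketch. The pattern below is a much-simplified stand-in for Mill’s grammar (and in Python rather than JavaScript); real citation grammars handle far more variation.

```python
import re

# A much-simplified sketch of what a citation extractor like Mill does:
# find U.S. Code references in running text. This one pattern is a toy;
# real citation grammars cover many more forms.
USC = re.compile(r"(\d+)\s+U\.S\.C\.\s+(?:§+\s*)?(\d+[a-z]?(?:\([a-z0-9]+\))*)")

def extract_usc(text):
    return [{"title": t, "section": s} for t, s in USC.findall(text)]

text = "Claims under 42 U.S.C. § 1983 and 28 U.S.C. 1331 were raised."
print(extract_usc(text))
```

Extracted citations like these are exactly the kind of identifier a law-related topic map would use to anchor subjects.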

Monique da Silva Moore, et al. v. Publicis Groupe SA, et al., 11 Civ. 1279

Filed under: e-Discovery,Law,Legal Informatics — Patrick Durusau @ 10:44 am

Monique da Silva Moore, et al. v. Publicis Groupe SA, et al., 11 Civ. 1279

The foregoing link is something of a novelty. It is a link to the opinion by US Magistrate Judge Andrew Peck, approving the use of predictive coding (computer-assisted review) as part of e-discovery.

It is not a pointer to an article with no link to the opinion. It is not a pointer to an article on the district judge’s opinion, upholding the magistrate’s order but adding nothing of substance on the use of predictive coding. It is not a pointer to a law journal that requires “free” registration.

I think readers have a reasonable expectation that articles contain pointers to primary source materials. Otherwise, why not write for the tabloids?

Sorry, I just get enraged when resources do not point to primary sources. Not only is it poor writing, it is discourteous to readers.

Magistrate Peck’s opinion is said to be the first that approves the use of predictive coding as part of e-discovery.

In very summary form, the plaintiff (the person suing) has requested that the defendant (the person being sued) produce documents, including emails, in its possession that are responsive to a discovery request. A discovery request is where the plaintiff specifies what documents it wants the defendant to produce, usually described as classes of documents. For example: all documents with statements about [plaintiff’s name]’s employment with X, prior to N date.

In this case, there are 3 million emails to be searched and then reviewed by the defense lawyers (for claims of privilege, non-disclosure authorized by law, such as advice of counsel in some cases) prior to production for review by the plaintiff, who may then use one or more of the emails at trial.

The question is: Should the defense lawyers use a few thousand documents to train a computer to search the 3 million documents or should they use other methods, which will result in much higher costs because lawyers have to review more documents?
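The “train a computer on a few thousand documents” approach can be sketched in miniature. This is not any vendor’s predictive coding system; real systems use proper classifiers, while this toy uses bare term counts to keep the mechanics visible.

```python
# A toy sketch of the predictive-coding idea: lawyers label a small seed
# set, term weights are learned from it, and the remaining corpus is
# ranked for review. Real systems use proper classifiers; this uses
# bare term counts to keep the mechanics visible.
from collections import Counter

def term_weights(seed):
    relevant, other = Counter(), Counter()
    for text, label in seed:
        (relevant if label else other).update(text.lower().split())
    return {t: relevant[t] - other[t] for t in relevant | other}

def rank(corpus, weights):
    def score(text):
        return sum(weights.get(t, 0) for t in text.lower().split())
    return sorted(corpus, key=score, reverse=True)

seed = [
    ("gender discrimination in promotion decisions", True),
    ("promotion denied because of her gender", True),
    ("quarterly budget forecast attached", False),
]
corpus = [
    "see attached budget forecast",
    "concerns about discrimination in the promotion process",
]
print(rank(corpus, term_weights(seed))[0])
```

Reviewers then work down the ranked list, which is where the cost savings come from: the lawyers read the likely-responsive documents first instead of all 3 million.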

The law, facts and e-discovery issues weave in and out of Magistrate Peck’s decision but if you ignore the obviously legalese parts you will get the gist of what is being said. (If you have e-discovery issues, please seek professional assistance.)

I think topic maps could be very relevant in this situation because subjects permeate the discovery process, under different names and perspectives, to say nothing of sharing analysis and data with co-counsel.

I am also mindful that analysis of presentations, speeches, written documents, emails, discovery from other cases, could well develop profiles of potential witnesses in business litigation in particular. A topic map could be quite useful in mapping the terminology most likely to be used by a particular defendant.

BTW, it will be a long time coming, in part because it would reduce the fees of the defense bar, but I would say, “OK, here are the 3 million emails. We reserve the right to move to exclude any on the basis of privilege, relevancy, etc.”

That ends all the dancing around about discovery and if the plaintiff wants to slough through 3 million emails, fine. They still have to disclose what they intend to produce as exhibits at trial.

April 29, 2012

Legal Entity Identifier – Preparing for the Inevitable

Filed under: Identifiers,Law,Legal Entity Identifier (LEI),Legal Informatics — Patrick Durusau @ 2:04 pm

Legal Entity Identifier – Preparing for the Inevitable by Peter Ku.

From the post:

Most of the buzz around the water cooler for those responsible for enterprise reference data in financial services has been around the recent G20 meeting in Switzerland on the details of the proposed Legal Entity Identifier (LEI). The LEI is designed to help regulators manage and monitor systemic risk in the financial markets by creating a unique ID to recognize legal entities/counterparties shared by the global financial companies and government regulators. Agreement to adoption is expected to be decided at the G20 leaders’ summit coming up in June in Mexico as regulators decide the details as to the administration, implementation and enforcement of the standard. Will the new LEI solve the issues that led to the recent financial crisis?

Looking back at history, this is not the first time the financial industry has attempted to create a unique ID system for legal entities; remember the Data Universal Numbering System (DUNS) identifier, for example? What is different from the past is that the new LEI standard is set at a global rather than a regional level, the regional scope being what caused past attempts to fail. Unfortunately, the LEI standard will not replace existing IDs that firms deal with every day. Instead, it creates further challenges, requiring companies to map existing IDs to the new LEI, reconcile naming differences, maintain legal hierarchy relationships between parent and subsidiary entities through ongoing corporate actions, and link it to the securities and loans of the legal entities.

….

While many within the industry are waiting to see what the regulators decide in June, existing issues related to the quality, consistency, and delivery of counterparty reference data and the downstream impact on managing risk needs to be dealt with regardless if LEI is passed. In the same report, I shared the challenges firms will face incorporating the LEI including:

  • Accessing, reconciling, and relating existing counterparty information and IDs to the new LEI
  • Effectively identifying and resolving data quality issues from external and internal systems
  • Accurately identifying legal hierarchy relationships, which LEI will not maintain in its first instantiation
  • Cross-referencing legal entities with financial and securities instruments
  • Extending both counterparty and securities instruments to downstream front, mid, and back office systems
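The first challenge, mapping existing counterparty IDs onto the new shared identifier, can be sketched with a simple crosswalk table. All identifiers below are made up for illustration.

```python
# A sketch of ID reconciliation: map a firm's existing counterparty IDs
# onto a new shared identifier via a crosswalk table, flagging records
# that cannot be reconciled. All identifiers here are invented.
crosswalk = {
    "DUNS:150483782": "LEI:5493001KJTIIGC8Y1R12",
    "INTERNAL:CP-0042": "LEI:5493001KJTIIGC8Y1R12",
}

def reconcile(records, crosswalk):
    matched, unmatched = [], []
    for record in records:
        lei = crosswalk.get(record["id"])
        if lei:
            matched.append({**record, "lei": lei})
        else:
            unmatched.append(record)
    return matched, unmatched

records = [
    {"id": "DUNS:150483782", "name": "ACME Holdings PLC"},
    {"id": "INTERNAL:CP-0042", "name": "Acme Holdings plc"},  # same entity, different name
    {"id": "INTERNAL:CP-0099", "name": "Unknown Counterparty"},
]
matched, unmatched = reconcile(records, crosswalk)
print(len(matched), len(unmatched))
```

Notice that the two matched records are the same entity under different names and different old IDs, which is exactly the multiple-identifier situation topic maps are built for.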

As a topic map person, do any of these issues sound familiar to you?

In particular, creating a new identifier to solve the problems of resolving multiple “old” ones?

Being mindful that all data systems are capable of and/or contain errors, intentional (dishonest) and otherwise.

Presuming perfect records, and perfect data in those records, not only guarantees failure, but creates avenues for abuse.

Peter cites resources you will need to read.

April 27, 2012

Scout, in Open Beta

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 6:11 pm

Scout, in Open Beta

Eric Mill writes:

Scout is an alert system for the things you care about in state and national government. It covers Congress, regulations across the whole executive branch, and legislation in all 50 states.

You can set up notifications for new things that match keyword searches. Or, if you find a particular bill you want to keep up with, we can notify you whenever anything interesting happens to it — or is about to.

Just to emphasize, this is a beta – it functions well and looks good, but we’re really hoping to hear from the community on how we can make it stronger. You can give us feedback by using the Feedback link at the top of the site, or by writing directly to scout@sunlightfoundation.com.

Legal terminology variation between states plus the feds is going to make keyword searches iffy.

Will vary among areas of law.

Greatest variation in family and criminal law, least among some parts of commercial law.

Anyone know if there is a cross-index of terminology between the legal systems of the states?
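Absent such a cross-index, a keyword alert needs synonym expansion to have any hope across jurisdictions. A sketch, with an invented (and tiny) synonym table:

```python
# Why plain keyword alerts are iffy across jurisdictions: the same legal
# concept goes by different names state to state, so a query needs
# synonym expansion. The synonym table here is illustrative only.
SYNONYMS = {
    "alimony": {"alimony", "spousal support", "spousal maintenance"},
    "custody": {"custody", "conservatorship"},  # Texas says "conservatorship"
}

def expand(query):
    terms = set()
    for word in query.lower().split():
        terms |= SYNONYMS.get(word, {word})
    return terms

def matches(query, text):
    text = text.lower()
    return any(term in text for term in expand(query))

print(matches("custody", "Suit affecting the parent-child relationship: conservatorship orders"))
```

Without the expansion step, a “custody” alert would silently miss every Texas “conservatorship” bill.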

April 26, 2012

CodeX: Stanford Center for Legal Informatics

Filed under: Law,Legal Informatics — Patrick Durusau @ 6:30 pm

CodeX: Stanford Center for Legal Informatics

Language and semantics are noticed more often with regard to legal systems than they are elsewhere. Failing to “get” a joke on a television show doesn’t have the same consequences, potentially, as breaking a law.

Within legal systems, topic maps are important for capturing and collating complex factual and legal semantics. As the world grows more international, legal systems bump up against each other, and topic maps provide a way to map across such systems.

From the website:

CodeX is a multidisciplinary laboratory operated by Stanford University in association with affiliated organizations from industry, government, and academia. The staff of the Center includes a core of full-time employees, together with faculty and students from Stanford and professionals from affiliated organizations.

CodeX’s primary mission is to explore ways in which information technology can be used to enhance the quality and efficiency of our legal system. Our goal is “legal technology” that empowers all parties in our legal system and not solely the legal profession. Such technology should help individuals find, understand, and comply with legal rules that govern their lives; it should help law-making bodies analyze proposed laws for cost, overlap, and inconsistency; and it should help enforcement authorities ensure compliance with the law.

Projects carried out under the CodeX umbrella typically fall into one or more of the following areas:

  • Legal Document Management: is concerned with the creation, storage, and retrieval of legal documents of all types, including statutes, case law, patents, regulations, etc. The $50B e-discovery market is heavily dependent on Information Retrieval (IR) technology. By automating information retrieval, cost can be dramatically reduced. Furthermore, it is generally the case that well-tuned automated procedures can outperform manual search in terms of accuracy. CodeX is investigating various innovative legal document management methodologies and helping to facilitate the use of such methods across the legal spectrum.
  • Legal Infrastructure: Some CodeX projects focus on building the systems that allow the stakeholders in the legal system to connect and collaborate more efficiently. Leveraging advances in the field of computer science and building upon national and international standardization efforts, these projects have the potential to provide economic and social benefits by streamlining the interactions of individuals, organizations, legal professionals and government as they acquire and deliver legal services. By combining the development of such platforms with multi-jurisdictional research on relevant regulations issued by governments and bar associations, the Center supports responsible, forward-looking innovation in the legal industry.
  • Computational Law: Computational law is an innovative approach to legal informatics based on the explicit representation of laws and regulations in computable form. Computational Law techniques can be used to “embed” the law in systems used by individuals and automate certain legal decision making processes or in the alternative bring the legal information as close to the human decision making as possible. The Center’s work in this area includes theoretical research on representations of legal information, the creation of technology for processing and utilizing information expressed within these representations, and the development of legal structures for ratifying and exploiting such technology. Initial applications include systems for helping individuals navigate contractual regimes and administrative procedures, within relatively discrete e-commerce and governmental domains.

April 24, 2012

Mandelbaum on How XML Can Improve Transparency and Workflows for Legislatures

Filed under: Law,Legal Informatics — Patrick Durusau @ 7:16 pm

Mandelbaum on How XML Can Improve Transparency and Workflows for Legislatures

From Legal Informatics Blog a post reporting on the use of XML in legislatures.

You need to read Mandelbaum’s post (lots of good pointers), where Mandelbaum concedes that open formats != transparency but offers the following advantages to bring legislatures around to XML:

  • Preservation.
  • Efficiency.
  • Cost-Effectiveness.
  • Flexibility.
  • Ease of Use.

Personally, I would get a group of former legislators to invest in XML-based solutions and have them lobby their former colleagues for the new technology. That would take less time than waiting for current vendors to get up to speed on XML.

The various benefits of XML, while real, would be how the change to XML is explained to members of the public.

Topic maps could be used by others to track such relationships and changes. That might result in free advertising for the former members of the legislature. A sort of external validation of their effectiveness.

April 22, 2012

Texas Library Association: The Next Generation of Knowledge Management

Filed under: Law,Legal Informatics — Patrick Durusau @ 7:06 pm

Texas Library Association: The Next Generation of Knowledge Management

Greg Lambert writes:

I had the honor of presenting at the Texas Library Association Conference here in Houston today. The topic was on Library and Knowledge Management’s collaborative roles within a firm, and how they can work together to bring in better processes, automate certain manual procedures, and analyze data in a way that makes it (and as a result, KM and the Library) more valuable.

Below are the thoughts I wrote down to discuss six questions. These questions were raised at the ARK KM meeting earlier this year and, although the audience was substantially different, I thought it would be a good reference point to cover what is expected of us, and how we can contribute to the operations of the firm in unexpected ways. Thanks to Sean Luman for stepping in and co-presenting with me after Toby suddenly had a conflict.

[Note: Click here to see the Prezi that went along with the presentation.]

My first time to see a “Prezi.” See what you think about it. Comments?

BTW, I thought the frame with:

Lawyers like to think all work is “custom” work.

Clients tend to think most work is “repetitive” (but lawyers are still charging as if it is custom work).

was quite amusing. I suspect the truth lies somewhere between those two positions.

I think topic maps can help to integrate not only traditional information sources with case analysis, pleadings, and discovery, but non-traditional resources as well. News sources, for example. Government agency rulings, opinions, treatment of similarly situated parties. The current problem being that an attorney has to search separate resources for all of those sources of information and more.

Skillful collation of diverse information sources using topic maps would allow attorneys to bill at full rate for the exercise of their knowledge and analytical skills, while eliminating charges for the largely rote work of ferreting out resources to be analyzed.

For example, a patent topic map in a particular area could deliver to a patent attorney just those portions of patents that are relevant for review, not all patents in a searched area or even the full patents. And the paths taken in the analysis of one patent could be available to other attorneys in the same firm, enabling a more efficient response to later queries in a particular area (think of it as legal bread crumbs).

The Public Library of Law

Filed under: Law - Sources,Legal Informatics — Patrick Durusau @ 7:06 pm

The Public Library of Law

From the website:

Searching the Web is easy. Why should searching the law be any different? That’s why Fastcase has created the Public Library of Law — to make it easy to find the law online. PLoL is one of the largest free law libraries in the world, because we assemble law available for free scattered across many different sites — all in one place. PLoL is the best starting place to find law on the Web.

Well…, yes, I suppose “[s]earching the Web is easy” but getting useful results is not.

Getting useful results from searching the law is even more difficult. Far more difficult.

The Federal Rules of Civil Procedure (US Federal Courts) run just under one hundred (100) pages (one hundred sixty-eight (168) with forms). For law students there is The Law of Federal Courts, 7th Ed. by Charles A. Wright and Mary Kay Kane, which runs ten (10) times that length and is a blizzard of case citations and detailed analysis. Professionals use Federal Practice and Procedure by Wright & Miller, which covers criminal and other aspects of federal procedure, at thirty-one (31) volumes. A professional would also be using other resources of equal depth to Wright & Miller on relevant legal issues.

I fully support what the Public Library of Law is trying to do. But I want you to be aware that useful legal research requires more than finding language you like or happen to agree with. Perhaps more than most places, in law words don’t always mean what you think they mean. And they vary from place to place more than you would expect.

Deeply fascinating reading awaits you but if you need legal advice, there is no substitute for consulting someone with professional training who reads the law everyday.

I have included the PLoL here because I think topic maps have a tremendous potential for legal research and practice.

Imagine:

  • Mapping case analysis, law, to pleadings, depositions, etc.
  • Mapping pleadings, motions, etc. to particular trial judges.
  • Mapping appeals decisions to particular trial judges and attorneys.
  • Mapping appeals decisions to detailed case facts.
  • Mapping appeals decisions to judges and attorneys.
  • Recording paths through depositions to other evidence.
  • Mapping different terminologies between witnesses.
  • Mapping portions of pleadings, discovery, etc., to specific facts, courts.
  • Harvesting anecdotal stories to create internal resources.
  • Or creating a service that offers one or more of these services to attorneys.
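The premise running through these mappings, and through this blog generally, is that a single subject (a case, a judge, a pleading) can carry several identifiers at once, and that topics sharing an identifier should merge. A minimal sketch of that merging behavior, not tied to any particular topic map API, might look like this (the citations are real, the classes are illustrative):

```python
# Minimal sketch of topic-map style merging: a "subject" (a case, a judge,
# a pleading) may carry several identifiers, and two topics that share any
# identifier are treated as the same subject.
class Topic:
    def __init__(self, *identifiers):
        self.identifiers = set(identifiers)

    def merge(self, other):
        """Union the identifiers of two topics for the same subject."""
        self.identifiers |= other.identifiers
        return self

def merge_all(topics):
    """Fold a list of topics, merging any topic that shares at least
    one identifier with an already-seen topic."""
    merged = []
    for topic in topics:
        for existing in merged:
            if existing.identifiers & topic.identifiers:
                existing.merge(topic)
                break
        else:
            merged.append(topic)
    return merged

# The same decision cited three different ways collapses to one subject.
topics = [
    Topic("410 U.S. 113", "Roe v. Wade"),
    Topic("93 S. Ct. 705", "410 U.S. 113"),
    Topic("Miranda v. Arizona", "384 U.S. 436"),
]
subjects = merge_all(topics)
```

Three input topics collapse to two subjects, because the first two share the U.S. Reports citation. That is the mechanism that makes the mappings in the list above composable: each mapping just adds identifiers or associations to subjects that already exist.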

April 19, 2012

Building an AWS CloudSearch domain for the Supreme Court

Filed under: Amazon CloudSearch,Law - Sources,Legal Informatics — Patrick Durusau @ 7:20 pm

Building an AWS CloudSearch domain for the Supreme Court by Michael J Bommarito II.

Michael writes:

It should be pretty clear by now that two things I’m very interested in are cloud computing and legal informatics. What better way to show it than to put together a simple AWS CloudSearch tutorial using Supreme Court decisions as the context? The steps below should take you through creating a fully functional search domain on AWS CloudSearch for Supreme Court decisions.

A sure to be tweeted and read (at least among legal informatics types) introduction to AWS CloudSearch.

The source file only covers U.S. Supreme Court decisions announced by March of 2008. I am looking for later sources of information, and for documentation of the tagging/metadata of the files.
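For readers who have not used CloudSearch: documents are uploaded to a search domain as a JSON batch in Amazon's Search Data Format (SDF), a list of "add" operations each carrying an id and a fields object. A hedged sketch of building such a batch from decision metadata (the field names are illustrative and must match whatever index fields you configure on the domain; the upload step itself goes to the domain's document endpoint):

```python
import json

def to_sdf_batch(decisions):
    """Build an Amazon CloudSearch SDF (Search Data Format) 'add' batch
    from decision metadata.  The field names (name, date, text) are
    illustrative; they must match the index fields configured on the
    search domain."""
    return [
        {
            "type": "add",
            "id": d["docket"].replace(" ", "-").lower(),
            "fields": {
                "name": d["name"],
                "date": d["date"],
                "text": d["text"],
            },
        }
        for d in decisions
    ]

batch = to_sdf_batch([
    {"docket": "70-18", "name": "Roe v. Wade",
     "date": "1973-01-22", "text": "..."},
])
payload = json.dumps(batch)  # POST this to the domain's document endpoint
```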

April 11, 2012

GovTrack Adds Probabilities to Bill Prognosis

Filed under: Law,Legal Informatics — Patrick Durusau @ 6:16 pm

GovTrack Adds Probabilities to Bill Prognosis

From the post:

Dr. Joshua Tauberer of GovTrack has posted Even Better Bill Prognosis: Now with Real Probabilities, on the GovTrack Blog.

In this post, Dr. Tauberer describes the new probability-of-passage figure added to GovTrack’s bill prognosis feature. According to the post:

The analysis includes many of the factors you would expect, but more are certainly possible. Topic maps would be one way to help discover additional factors worth adding.

Personally I favor a “show me the money” type analysis for political decision making processes.

April 9, 2012

Iowa Government Gets a Digital Dictionary Provided By Access

Filed under: Indexing,Law,Legal Informatics,Thesaurus — Patrick Durusau @ 4:32 pm

Iowa Government Gets a Digital Dictionary Provided By Access

Whitney Grace writes:

How did we get by without the invention of the quick search to look up information? We used to use dictionaries, encyclopedias, and a place called the library. Access Innovations, Inc. has brought the Iowa Legislature General Assembly into the twenty-first century.

The write-up “Access Innovations, Inc. Creates Taxonomy for Iowa Code, Administrative Code and Acts” tells us the data management industry leader has built a thesaurus that allows the Legislature to search its library of proposed laws, bills, acts, and regulations. Users can also add their unstructured data to the thesaurus. Access used their Data Harmony software to provide subscription-based delivery and they built the thesaurus on MAIstro.

Sounds very much like a topic map-style project, doesn’t it? I will be following up for more details.

April 8, 2012

Casellas et al. on Linked Legal Data: Improving Access to Regulatory Information

Filed under: Law - Sources,Legal Informatics,Linked Data — Patrick Durusau @ 4:21 pm

Casellas et al. on Linked Legal Data: Improving Access to Regulatory Information

From the post:

Dr. Núria Casellas of the Legal Information Institute at Cornell University Law School, and colleagues, have posted Linked Legal Data: Improving Access to Regulatory Information, a poster presented at Bits on Our Mind (BOOM) 2012, held 4 April 2012 at the Cornell University Department of Computing and Information Science, in Ithaca, New York, USA.

Here are excerpts from the poster:

The application of Linked Open Data (LOD) principles to legal information (URI naming of resources, assertions about named relationships between resources or between resources and data values, and the possibility to easily extend, update and modify these relationships and resources) could offer better access and understanding of legal knowledge to individual citizens, businesses and government agencies and administrations, and allow sharing and reuse of legal information across applications, organizations and jurisdictions. […]

With this project, we will enhance access to the Code of Federal Regulations (a text with 96.5 million words in total; ~823MB XML file size) with an RDF dataset created with a number of semantic-search and retrieval applications and information extraction techniques based on the development and the reuse of RDF product taxonomies, the application of semantic matching algorithms between these materials and the CFR content (Syntactic and Semantic Mapping), the detection of product-related terms and relations (Vocabulary Extraction), obligations and product definitions (Definition and Obligations Extraction). […]
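The LOD principles quoted above (URI naming of resources, named relationships between resources or between resources and data values) reduce, at bottom, to sets of subject-predicate-object triples. A minimal sketch, with illustrative URIs rather than the project's actual naming scheme (though 16 CFR Part 1512 really is the bicycle regulation, and dcterms:title is a real Dublin Core property):

```python
# Linked Data assertions about CFR sections as (subject, predicate, object)
# triples.  The example.org URIs are placeholders for whatever naming
# scheme the project actually adopts.
triples = [
    ("http://example.org/cfr/title-16/part-1512",
     "http://purl.org/dc/terms/title",
     "Requirements for bicycles"),
    ("http://example.org/cfr/title-16/part-1512",
     "http://example.org/vocab/regulatesProduct",
     "http://example.org/products/bicycle"),
]

def objects(subject, predicate):
    """All objects asserted for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]
```

The payoff the poster describes (sharing and reuse across applications and jurisdictions) comes from everyone resolving the same subject URI, which is also exactly where the identifier problems discussed elsewhere on this blog bite.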

You know, lawyers have long speculated whether the “Avoid Probate” books (for non-U.S. readers: publications to help citizens avoid the use of lawyers for inheritance issues) were in fact shadow publications of the bar association to promote the use of lawyers.

You haven’t seen a legal mess until someone tries “self-help” in a legal context. It probably doubles, if not triples, the legal fees involved.

Still, this may be an interesting source of data for services for lawyers and foolhardy citizens.

I shudder, though, at the “sharing of legal information across jurisdictions.” In most of the U.S., a creditor can claim, say, a car where a mortgage is past due, without going to court. In Louisiana, at least a number of years ago, there was another name for self-help repossession: felony theft. Like I said, self-help when it comes to the law isn’t a good idea.

April 6, 2012

URN:LEX: New Version 06 Available

Filed under: Identifiers,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 6:47 pm

URN:LEX: New Version 06 Available

From the purpose of the namespace “lex:”

The purpose of the “lex” namespace is to assign an unequivocal identifier, in standard format, to documents that are sources of law. To the extent of this namespace, “sources of law” include any legal document within the domain of legislation, case law and administrative acts or regulations; moreover potential “sources of law” (acts under the process of law formation, as bills) are included as well. Therefore “legal doctrine” is explicitly not covered.

The identifier is conceived so that its construction depends only on the characteristics of the document itself and is, therefore, independent from the document’s on-line availability, its physical location, and access mode.

This identifier will be used as a way to represent the references (and more generally, any type of relation) among the various sources of law. In an on-line environment with resources distributed among different Web publishers, uniform resource names allow simplified global interconnection of legal documents by means of automated hypertext linking.

If creating names just for law “sources” sounds like low-hanging fruit to you, take some time to become familiar with the latest draft.
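The draft's central idea, quoted above, is that the identifier is built only from the characteristics of the document itself. The published examples take the general shape urn:lex:&lt;jurisdiction&gt;:&lt;authority&gt;:&lt;measure&gt;:&lt;details&gt;, as in urn:lex:it:stato:legge:2003-09-21;456 (Italian state law no. 456 of 21 September 2003). A hedged sketch of assembling one; the full grammar in the draft is considerably richer than this:

```python
def make_lex_urn(jurisdiction, authority, measure, details):
    """Assemble a urn:lex identifier from its core components.
    This follows only the general shape of the draft's examples
    (e.g. urn:lex:it:stato:legge:2003-09-21;456); consult the
    draft itself for the complete grammar."""
    return f"urn:lex:{jurisdiction}:{authority}:{measure}:{details}"

urn = make_lex_urn("it", "stato", "legge", "2003-09-21;456")
```

Because every component is a property of the document (enacting jurisdiction, authority, type of measure, date and number), the name survives changes of website, publisher, or access mode, which is exactly the independence the draft is after.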

March 21, 2012

European Legislation Identifier: Document and Slides

Filed under: EU,Government,Law,Legal Informatics — Patrick Durusau @ 3:31 pm

European Legislation Identifier: Document and Slides

From LegalInformatics:

John Dann of the Luxembourg Service Central de Législation has kindly given his permission for us to post the following documents related to the proposed European Legislation Identifier (ELI) standard:

If you are interested in legal identifiers or legislative materials in Europe more generally, this should be of interest.

March 2, 2012

Call for Participation: OASIS LegalDocumentML (LegalDocML) Technical Committee

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 8:04 pm

Call for Participation: OASIS LegalDocumentML (LegalDocML) Technical Committee

If you are interested in topic maps and legal documents, take note of the following:

Those wishing to become voting members of the committee must join by 22 March 2012.

The committee’s first meeting will be held 29 March 2012, by telephone.

Legal publishers take particular note if your publication system is using other formats.

Topic maps can provide mappings between the deliverables of this TC and your current format.

How large that step will be, will depend on the outcome of TC deliberations. Participation in the TC may influence those deliberations.

Let me know if you need more information.

February 28, 2012

Juriscraper: A New Tool for Scraping Court Websites

Filed under: Law,Law - Sources,Legal Informatics — Patrick Durusau @ 8:43 pm

Juriscraper: A New Tool for Scraping Court Websites

Legalinformatics reports a new tool for scraping court websites.

I understand the need for web scraping tools but fail to understand why public data sources make them necessary. Scraping has become a fairly trivial exercise, so obstruction only impedes access, it does not deny it.

Not that denying access would be acceptable, but at least it would be an understandable motivation. Trying to deny access while knowing you are going to fail just makes you look dumb. Perhaps that is its own reward.

February 16, 2012

Akoma Ntoso

Filed under: Law,Legal Informatics — Patrick Durusau @ 6:54 pm

Akoma Ntoso

From the webpage:

Akoma Ntoso (“linked hearts” in the Akan language of West Africa) defines a “machine readable” set of simple technology-neutral electronic representations (in XML format) of parliamentary, legislative and judiciary documents.

Akoma Ntoso is a set of simple, technology-neutral XML machine-readable descriptions of official documents such as legislation, debate record, minutes, etc. that enable addition of descriptive structure (markup) to the content of parliamentary and legislative documents.

Akoma Ntoso XML schema make “accessible” structure and semantic components of digital documents supporting the creation of high value information services to deliver the power of ICTs to support efficiency and accountability in the parliamentary, legislative and judiciary contexts.

Akoma Ntoso is an initiative of “Africa i-Parliament Action Plan” (www.parliaments.info) a programme of UN/DESA.

Be aware that a new TC has been proposed at OASIS, LeDML, to move Akoma Ntoso towards becoming an international standard.

Applying Akoma Ntoso to the United States Code is a post by Grant Vergottini about his experiences converting the US Code markup into Akoma Ntoso.

Markup can, not necessarily will, simplify the task of creating topic maps of legal materials.
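To give a feel for the kind of structure Akoma Ntoso adds, here is a skeletal document built with Python's standard XML tooling. This is a hedged sketch: the element names (akomaNtoso, act, body, section, heading) and the eId attribute follow the vocabulary's general shape, the namespace URI is the one later used by the OASIS work, and the real schema imposes far richer required structure (metadata blocks, FRBR identification, and so on) than shown here:

```python
import xml.etree.ElementTree as ET

# Skeletal Akoma Ntoso-style act: an <act> with a <body> containing one
# <section>, identified by an eId attribute so other documents can point
# at it.  Illustrative only; the real schema requires much more.
NS = "http://docs.oasis-open.org/legaldocml/ns/akn/3.0"
ET.register_namespace("", NS)

root = ET.Element(f"{{{NS}}}akomaNtoso")
act = ET.SubElement(root, f"{{{NS}}}act")
body = ET.SubElement(act, f"{{{NS}}}body")
section = ET.SubElement(body, f"{{{NS}}}section", {"eId": "sec_1"})
ET.SubElement(section, f"{{{NS}}}heading").text = "Short title"

xml_text = ET.tostring(root, encoding="unicode")
```

It is structure like that eId, a stable, document-internal address for each provision, that does the work for topic maps: it gives you something durable to hang an occurrence or an identifier on.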

