Archive for the ‘IT’ Category

TOGAF® 9.1 Translation Glossary: English – Norwegian

Sunday, February 8th, 2015

TOGAF® 9.1 Translation Glossary: English – Norwegian (PDF)

From the Wikipedia entry The Open Group Architecture Framework

The Open Group Architecture Framework (TOGAF) is a framework for enterprise architecture which provides an approach for designing, planning, implementing, and governing an enterprise information technology architecture.[2] TOGAF has been a registered trademark of The Open Group in the United States and other countries since 2011.[3]

TOGAF is a high level approach to design. It is typically modeled at four levels: Business, Application, Data, and Technology. It relies heavily on modularization, standardization, and already existing, proven technologies and products.

I saw a notice of this publication today and created a local copy for your convenience (the official copy requires free registration and login). The downside is that, over time, this copy will not be the latest version. The latest version can be downloaded from:

You can purchase TOGAF 9.1 here: I haven’t read it, but at $39.95 for the PDF version it compares favorably to other standards pricing.

Extended Artificial Memory:…

Monday, October 27th, 2014

Extended Artificial Memory: Toward an Integral Cognitive Theory of Memory and Technology by Lars Ludwig. (PDF) (Or you can contribute to the cause by purchasing a printed or Kindle copy of: Information Technology Rethought as Memory Extension: Toward an integral cognitive theory of memory and technology.)

Conventional book-selling wisdom is that a title should provoke people to pick up the book. The first step toward a sale. Must be the thinking behind this title. Just screams “Read ME!”


Seriously, I have read some of the PDF version and this is going on my holiday wish list as a hard copy request.


This thesis introduces extended artificial memory, an integral cognitive theory of memory and technology. It combines cross-scientific analysis and synthesis for the design of a general system of essential knowledge-technological processes on a sound theoretical basis. The elaboration of this theory was accompanied by a long-term experiment for understanding [Erkenntnisexperiment]. This experiment included the agile development of a software prototype (Artificial Memory) for personal knowledge management.

In the introductory chapter 1.1 (Scientific Challenges of Memory Research), the negative effects of terminological ambiguity and isolated theorizing on memory research are discussed.

Chapter 2 focuses on technology. The traditional idea of technology is questioned. Technology is reinterpreted as a cognitive actuation process structured in correspondence with a substitution process. The origin of technological capacities is found in the evolution of eusociality. In chapter 2.2, a cognitive-technological model is sketched. In this thesis, the focus is on content technology rather than functional technology. Chapter 2.3 deals with different types of media. Chapter 2.4 introduces the technological role of language-artifacts from different perspectives, combining numerous philosophical and historical considerations. The ideas of chapter 2.5 go beyond traditional linguistics and knowledge management, stressing individual constraints of language and limits of artificial intelligence. Chapter 2.6 develops an improved semantic network model, considering closely associated theories.

Chapter 3 gives a detailed description of the universal memory process enabling all cognitive technological processes. The memory theory of Richard Semon is revitalized, elaborated and revised, taking into account important newer results of memory research.

Chapter 4 combines the insights on the technology process and the memory process into a coherent theoretical framework. Chapter 4.3.5 describes four fundamental computer-assisted memory technologies for personally and socially extended artificial memory. They all tackle basic problems of the memory-process (4.3.3). In chapter 4.3.7, the findings are summarized and, in chapter 4.4, extended into a philosophical consideration of knowledge.

Chapter 5 provides insight into the relevant system landscape (5.1) and the software prototype (5.2). After an introduction into basic system functionality, three exemplary, closely interrelated technological innovations are introduced: virtual synsets, semantic tagging, and Linear Unit tagging.

The imagery of a common memory capture (of two or more speakers) is quite powerful. It highlights a critical aspect of topic maps.

Be forewarned: this is European-style scholarship, where the reader is assumed to be comfortable with philosophy, linguistics, etc., in addition to the narrower aspects of computer science.

To see these ideas in practice:

Slides on What is Artificial Memory.

I first saw this in a note from Jack Park, the source of many interesting and useful links, papers and projects.

Harvard gives new meaning to meritocracy

Thursday, December 12th, 2013

Harvard gives new meaning to meritocracy by Kaiser Fung.

From the post:

Due to the fastidious efforts of Professor Harvey Mansfield, Harvard has confirmed the legend that “the hard part is to get in”. Not only does it appear impossible to flunk out but according to the new revelation (link), the median grade given is A- and “the most frequently awarded grade at Harvard College is actually a straight A”.

The last sentence can be interpreted in two ways. If “straight A” means As across the board, then he is saying a lot of graduates end up with As in all courses taken. If “straight A” is used to distinguish between A and A-, then all he is saying is that the median grade is A- and the mode is A. Since at least 50% of the grades given are A or A- and there are more As than A-s, there would be at least 25% As, possibly a lot more.

Note also that the median being A- tells us nothing about the bottom half of the grades. If no professor even gave out anything below an A-, the median would still be A-. If such were to be the case, then the 5th percentile, 10th percentile, 25th percentile, etc. would all be A-.
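The median/mode arithmetic above can be checked with a toy distribution. The numbers below are entirely hypothetical (not actual Harvard data), constructed only to show that a median of A-, a mode of A, and “at least 25% straight As” are jointly consistent:

```python
from statistics import median, mode

# Hypothetical distribution of 100 grades: median A-, mode A.
grades = ["A"] * 40 + ["A-"] * 20 + ["B+"] * 15 + ["B"] * 25
values = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0}
nums = [values[g] for g in grades]

print(mode(grades))                      # A   (the most frequent grade)
print(median(nums))                      # 3.7 (i.e. A-)
print(grades.count("A") / len(grades))   # 0.4 -> well over 25% straight As
```

Note that the bottom half of this made-up distribution could be anything at all without changing the median, which is exactly the point about the 5th, 10th, and 25th percentiles.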

For full disclosure, Harvard should tell us what proportion of grades are As and what proportion are A-s.

And to think, I complain about government contractors having a sense of entitlement, divorced from their performance.

Looks like that is also true for all those Harvard (and other) graduates that are now employed by the U.S. government.

Nothing you or I can do about it, but it is something you need to take into account when dealing with the U.S. government.

I keep hoping that some department, agency, government or government in waiting will become interested in weapons grade IT.

Reasoning that when other departments, agencies, governments or governments in waiting start feeling the heat, it may set off an IT arms race.

Not a waste-for-the-sake-of-waste arms race like the one we had in the 1960s, but one with real winners and losers.

Who’s accountable for IT failure? (Parts 1 & 2)

Thursday, April 19th, 2012

Michael Krigsman has an excellent two part series IT failure:

Who’s accountable for IT failure? (Part One)

Who’s accountable for IT failure? (Part Two)

Michael goes through the horror stories and stats about IT failures (about 70%) in some detail.

But think about just the failure rate for a minute: 70%?

Would you drive a car with a 70% chance of failure?

Would you fly in a plane with a 70% chance of failure?

Would you trade securities with 70% chance your information is wrong?

Would you use a bank account where the balance has a 70% inaccuracy rate?

But, the government is about to embark on IT projects to make government more transparent and accountable.

Based on past experience, how many of those IT projects are going to fail?

If you said 70%, you’re right!

The senior management responsible for those IT projects needs a pointer to the posts by Michael Krigsman.

For that matter, I would like to see Michael post a PDF version that can be emailed to senior management and project participants at the start of each project.

Graphs in Operations

Tuesday, March 20th, 2012

Graphs in Operations by John E. Vincent.

From the post:

Anyone who has ever used Puppet or Git has dabbled in graphs even if they don’t know it. However my interest in graphs in operations relates to the infrastructure as a whole. James Turnbull expressed it very well last year in Mt. View when discussing orchestration. Obviously this is a topic near and dear to my heart.

Right now much of orchestration is in the embryonic stages. We define relationships manually. We register watches on znodes. We define hard links between components in a stack. X depends on Y depends on Z. We’re not really being smart about it. If someone disagrees, I would LOVE to see a tool addressing the space.

Interesting post from a sysadmin perspective on the relationships that graphs could make explicit. And being made explicit, we could attach properties to those relationships (or associations in topic map talk).

Imagine the various *nix tools monitoring a user’s activities at multiple locations on the network, and that data, along with the relationships, being merged with other data.
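As a rough sketch of what explicit relationships with properties could look like (all component names and properties below are hypothetical, not from Vincent’s post):

```python
from collections import defaultdict

# Stack dependencies made explicit as a graph, with properties attached
# to the relationships themselves ("associations" in topic-map terms).
# Each edge reads: dependent -> dependency, plus properties of the link.
edges = [
    ("webapp", "database", {"kind": "reads", "critical": True}),
    ("webapp", "cache",    {"kind": "reads", "critical": False}),
    ("cache",  "database", {"kind": "warms", "critical": False}),
]

deps = defaultdict(set)
nodes = set()
for dependent, dependency, _props in edges:
    deps[dependent].add(dependency)
    nodes.update((dependent, dependency))

# Topological order: every component starts after everything it depends on.
order = []
while nodes:
    ready = {n for n in nodes if deps[n] <= set(order)}
    order.extend(sorted(ready))
    nodes -= ready

print(order)  # ['database', 'cache', 'webapp']
```

Once the relationships are first-class objects, the properties on each edge (critical or not, reads vs. warms) become queryable, which is exactly what a smarter orchestration tool could exploit instead of hard-coded links.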

First saw this at Alex Popescu’s myNoSQL.