Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

April 17, 2013

In-Memory Computing

Filed under: Computation,Computer Science,Programming — Patrick Durusau @ 1:23 pm

Why In-Memory Computing Is Cheaper And Changes Everything by Timo Elliott.

From the post:

What is the difference? Database engines today do I/O. So if they want to get a record, they read. If they want to write a record, they write, update, delete, etc. The application, which in this case is a DBMS, thinks that it’s always writing to disk. If that record that they’re reading and writing happens to be in flash, it will certainly be faster, but it’s still reading and writing. Even if I’ve cached it in DRAM, it’s the same thing: I’m still reading and writing.

What we’re talking about here is that the actual database is physically in memory. I’m doing a fetch to get data and not a read. So the logic of the database changes. That’s what in-memory is about as opposed to the traditional types of computing.

Why is it time for in-memory computing?

Why now? The most important thing is this: DRAM costs are dropping about 32% every 12 months. Things are getting bigger, and costs are getting lower. If you looked at the price of a Dell server with a terabyte of memory three years ago, it was almost $100,000 on their internet site. Today, a server with more cores — sixteen instead of twelve — and a terabyte of DRAM, costs less than $40,000.

In-memory results in lower total cost of ownership

So the cost of this stuff is not outrageous. For those of you who don’t understand storage, I always get into this argument: the total cost of acquisition of an in-memory system is likely higher than that of a storage system. There’s no question. But the TCO [total cost of ownership] is lower – because you don’t need storage people to manage memory. There are no LUNs [logical unit numbers]: all the things your storage technicians do go away.

People cost more than hardware and software – a lot more. So the TCO is lower. And also, by the way, power: one study IBM did showed that memory uses 99% less power than spinning disks. So unless you happen to be an electric company, that’s going to mean a lot to you. Cooling is lower, everything is lower.
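
Timo’s numbers are easy to sanity-check, and the shape of the TCO argument is easy to make concrete. A minimal Python sketch, where the price trend comes from the quote but every TCO figure is invented by me purely for illustration:

# Check the claimed DRAM price trend: a 32% annual decline, compounded.
price_then = 100_000                          # 1 TB Dell server, ~3 years ago
price_now = price_then * (1 - 0.32) ** 3
print(f"projected price today: ${price_now:,.0f}")   # ~$31,443, under $40,000

# The shape of the TCO argument, with made-up numbers: higher acquisition
# cost can still win once staff and power costs are counted.
def tco(acquisition, annual_staff, annual_power, years=5):
    return acquisition + years * (annual_staff + annual_power)

disk_tco = tco(acquisition=500_000, annual_staff=300_000, annual_power=50_000)
ram_tco = tco(acquisition=800_000, annual_staff=100_000, annual_power=5_000)
print(f"disk 5-year TCO: ${disk_tco:,}")      # $2,250,000
print(f"in-memory 5-year TCO: ${ram_tco:,}")  # $1,325,000

The structure of the argument is Timo’s; the dollar figures in the TCO comparison are mine and only illustrative.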

Timo makes a good case for in-memory computing but I have a slightly different question.

If both data and program are stored in memory, where is the distinction between program and data?

Or in topic map terms, can’t we then speak about subject identities in the program and even in data at particular points in the program?

That could be a very powerful tool for controlling program behavior and re-purposing data at different stages of processing.

February 9, 2013

…no Hitchhiker’s Guide…

Filed under: Computation,Computer Science,Mathematical Reasoning,Mathematics — Patrick Durusau @ 8:22 pm

Why there is no Hitchhiker’s Guide to Mathematics for Programmers by Jeremy Kun.

From the post:

Do you really want to get better at mathematics?

Remember when you first learned how to program? I do. I spent two years experimenting with Java programs on my own in high school. Those two years collectively contain the worst and most embarrassing code I have ever written. My programs absolutely reeked of programming no-nos. Hundred-line functions and even thousand-line classes, magic numbers, unreachable blocks of code, ridiculous code comments, a complete disregard for sensible object orientation, negligence of nearly all logic, and type-coercion that would make your skin crawl. I committed every naive mistake in the book, and for all my obvious shortcomings I considered myself a hot-shot programmer! At least I was learning a lot, and I was a hot-shot programmer in a crowd of high-school students interested in game programming.

Even after my first exposure and my commitment to get a programming degree in college, it was another year before I knew what a stack frame or a register was, two more before I was anywhere near competent with a terminal, three more before I fully appreciated functional programming, and to this day I still have an irrational fear of networking and systems programming (the first time I manually edited the call stack I couldn’t stop shivering with apprehension and disgust at what I was doing).

A must-read post if you want to be on the cutting edge of programming.

October 12, 2012

Best Practices for Scientific Computing

Filed under: Computer Science,Programming — Patrick Durusau @ 3:22 pm

Best Practices for Scientific Computing by D. A. Aruliah, C. Titus Brown, Neil P. Chue Hong, Matt Davis, Richard T. Guy, Steven H. D. Haddock, Katy Huff, Ian Mitchell, Mark Plumbley, Ben Waugh, Ethan P. White, Greg Wilson, and Paul Wilson.

Abstract:

Scientists spend an increasing amount of time building and using software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. We describe a set of best practices for scientific software development that have solid foundations in research and experience, and that improve scientists’ productivity and the reliability of their software.

If programming, or should I say good programming practice, isn’t second nature to you, you will find something to learn or relearn in this paper.
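
As a taste of what the paper recommends, here is a minimal sketch of two of its practices, defensive programming with assertions and automated tests. The example is mine, not the paper’s:

# Assertions document assumptions; a small test exercises the function.
def mean(values):
    """Arithmetic mean of a non-empty sequence of numbers."""
    assert len(values) > 0, "mean() requires at least one value"
    return sum(values) / len(values)

def test_mean():
    assert mean([1, 2, 3]) == 2
    assert mean([5]) == 5

test_mean()
print("tests passed")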

I first saw this at Simply Statistics.

September 12, 2012

A Raspberry Pi Supercomputer

Filed under: Computer Science,Parallel Programming,Supercomputing — Patrick Durusau @ 9:55 am

A Raspberry Pi Supercomputer

If you need a supercomputer for processing your topic maps, an affordable one is at hand.

Some assembly required. With Legos no less.

From the ScienceDigest post:

Computational Engineers at the University of Southampton have built a supercomputer from 64 Raspberry Pi computers and Lego.

The team, led by Professor Simon Cox, consisted of Richard Boardman, Andy Everett, Steven Johnston, Gereon Kaiping, Neil O’Brien, Mark Scott and Oz Parchment, along with Professor Cox’s son James Cox (aged 6) who provided specialist support on Lego and system testing.

Professor Cox comments: “As soon as we were able to source sufficient Raspberry Pi computers we wanted to see if it was possible to link them together into a supercomputer. We installed and built all of the necessary software on the Pi starting from a standard Debian Wheezy system image and we have published a guide so you can build your own supercomputer.”

The racking was built using Lego with a design developed by Simon and James, who has also been testing the Raspberry Pi by programming it using free computer programming software Python and Scratch over the summer. The machine, named “Iridis-Pi” after the University’s Iridis supercomputer, runs off a single 13 Amp mains socket and uses MPI (Message Passing Interface) to communicate between nodes using Ethernet. The whole system cost under £2,500 (excluding switches) and has a total of 64 processors and 1TB of memory (16GB SD cards for each Raspberry Pi). Professor Cox uses the free plug-in ‘Python Tools for Visual Studio’ to develop code for the Raspberry Pi.
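
For a sense of what MPI code on such a cluster looks like, here is a minimal sketch using mpi4py. The choice of Python bindings is my assumption; the Southampton guide itself builds MPICH and runs C examples:

# Each process learns its rank; rank 0 collects a sum from all of them.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id, 0..size-1
size = comm.Get_size()   # total number of processes across the cluster

total = comm.reduce(rank, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} processes, sum of ranks = {total}")

Launched with something like mpiexec -n 64 python hello.py, one process per Pi.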

You may also want to visit the Raspberry Pi Foundation. Which has the slogan: “An ARM GNU/Linux box for $25. Take a byte!”

In an age with ready access to cloud computing resources for design simulations, to say nothing of weapons-grade toys (PlayStation 3s), there is still a place for inexpensive experimentation.

What hardware configurations will you test out on your Raspberry Pi Supercomputer?

Are there specialized configurations that work better for some subject identity tests than others?

How do hardware constraints influence our approaches to computational problems?

Are we missing solutions because they don’t fit current architectures and therefore aren’t considered? (Not rejected, just don’t come up at all.)

June 24, 2012

The Turing Digital Archive

Filed under: Computer Science,Semantics,Turing Machines — Patrick Durusau @ 8:18 pm

The Turing Digital Archive

From the webpage:

Alan Turing (1912-54) is best-known for helping decipher the code created by German Enigma machines in the Second World War, and for being one of the founders of computer science and artificial intelligence.

This archive contains many of Turing’s letters, talks, photographs and unpublished papers, as well as memoirs and obituaries written about him. It contains images of the original documents that are held in the Turing collection at King’s College, Cambridge. For more information about this digital archive and tips on using the site see About the archive.

I ran across this archive when I followed a reference to the original paper on Turing machines, http://www.turingarchive.org/viewer/?id=466&title=01a.

I will be returning to this original description in one or more posts on Turing machines and semantics.
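
For readers who have never run one, a Turing machine is small enough to simulate in a few lines. A minimal sketch, in my notation rather than Turing’s original formulation:

# A transition table maps (state, symbol) -> (write, move, next state).
def run(transitions, tape_input, state="start", steps=100):
    tape = {i: s for i, s in enumerate(tape_input)}
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")               # "_" is the blank symbol
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every bit of the input, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip, "1011"))   # -> 0100_ (the trailing blank is written back)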

June 13, 2012

On the value of being inexact

Filed under: Computation,Computer Science,Inexact,Semantics — Patrick Durusau @ 12:31 pm

Algorithmic methodologies for ultra-efficient inexact architectures for sustaining technology scaling by Avinash Lingamneni, Kirthi Krishna Muntimadugu, Richard M. Karp, Krishna V. Palem, and Christian Piguet.

The following non-technical blurb caught my eye:

Researchers have unveiled an “inexact” computer chip that challenges the industry’s dogmatic 50-year pursuit of accuracy. The design improves power and resource efficiency by allowing for occasional errors. Prototypes unveiled this week at the ACM International Conference on Computing Frontiers in Cagliari, Italy, are at least 15 times more efficient than today’s technology.

[ads deleted]

The research, which earned best-paper honors at the conference, was conducted by experts from Rice University in Houston, Singapore’s Nanyang Technological University (NTU), Switzerland’s Center for Electronics and Microtechnology (CSEM) and the University of California, Berkeley.

“It is exciting to see this technology in a working chip that we can measure and validate for the first time,” said project leader Krishna Palem, who also serves as director of the Rice-NTU Institute for Sustainable and Applied Infodynamics (ISAID). “Our work since 2003 showed that significant gains were possible, and I am delighted that these working chips have met and even exceeded our expectations.” [From: Computing experts unveil superefficient ‘inexact’ chip, which I saw in a list of links by Greg Linden.]

Think about it. We are inexact and so are our semantics.

But we attempt to model our inexact semantics with increasingly exact computing platforms.

Does that sound like a modeling mis-match to you?

BTW, if you are interested in the details, see: Algorithmic methodologies for ultra-efficient inexact architectures for sustaining technology scaling

Abstract:

Owing to a growing desire to reduce energy consumption and widely anticipated hurdles to the continued technology scaling promised by Moore’s law, techniques and technologies such as inexact circuits and probabilistic CMOS (PCMOS) have gained prominence. These radical approaches trade accuracy at the hardware level for significant gains in energy consumption, area, and speed. While holding great promise, their ability to influence the broader milieu of computing is limited due to two shortcomings. First, they were mostly based on ad-hoc hand designs and did not consider algorithmically well-characterized automated design methodologies. Also, existing design approaches were limited to particular layers of abstraction such as physical, architectural and algorithmic or more broadly software. However, it is well-known that significant gains can be achieved by optimizing across the layers. To respond to this need, in this paper, we present an algorithmically well-founded cross-layer co-design framework (CCF) for automatically designing inexact hardware in the form of datapath elements, specifically adders and multipliers, and show that significant associated gains can be achieved in terms of energy, area, and delay or speed. Our algorithms can achieve these gains without adding any additional hardware overhead. The proposed CCF framework embodies a symbiotic relationship between architecture and logic-layer design through the technique of probabilistic pruning combined with the novel confined voltage scaling technique introduced in this paper, applied at the physical layer. A second drawback of the state of the art with inexact design is the lack of physical evidence established through measuring fabricated ICs that the gains and other benefits that can be achieved are valid. Again, in this paper, we have addressed this shortcoming by using CCF to fabricate a prototype chip implementing inexact data-path elements; a range of 64-bit integer adders whose outputs can be erroneous. Through physical measurements of our prototype chip wherein the inexact adders admit expected relative error magnitudes of 10% or less, we have found that cumulative gains over comparable and fully accurate chips, quantified through the area-delay-energy product, can be a multiplicative factor of 15 or more. As evidence of the utility of these results, we demonstrate that despite admitting error while achieving gains, images processed using the FFT algorithm implemented using our inexact adders are visually discernible.
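
To get a feel for the accuracy trade-off in software, here is a toy stand-in for an inexact adder that simply drops low-order bits before adding. This is not the paper’s probabilistic pruning or confined voltage scaling, just an illustration of trading precision for effort:

import random

# Drop the k cheapest (low-order) bits of each operand before adding.
def inexact_add(a, b, k=4):
    mask = ~((1 << k) - 1)
    return (a & mask) + (b & mask)

random.seed(1)
errs = []
for _ in range(10_000):
    a = random.randrange(1, 1 << 16)
    b = random.randrange(1, 1 << 16)
    exact = a + b
    errs.append(abs(exact - inexact_add(a, b)) / exact)
print(f"mean relative error: {sum(errs) / len(errs):.4%}")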

Why links to the ACM Digital Library or to the ‘unofficial version’ were not included in any of the press stories, I cannot say.

June 1, 2012

A Computable Universe,
Understanding Computation and
Exploring Nature As Computation

Filed under: Cellular Automata,Computation,Computer Science — Patrick Durusau @ 9:53 am

Foreword: A Computable Universe, Understanding Computation and Exploring Nature As Computation by Roger Penrose.

Abstract:

I am most honoured to have the privilege to present the Foreword to this fascinating and wonderfully varied collection of contributions, concerning the nature of computation and of its deep connection with the operation of those basic laws, known or yet unknown, governing the universe in which we live. Fundamentally deep questions are indeed being grappled with here, and the fact that we find so many different viewpoints is something to be expected, since, in truth, we know little about the foundational nature and origins of these basic laws, despite the immense precision that we so often find revealed in them. Accordingly, it is not surprising that within the viewpoints expressed here is some unabashed speculation, occasionally bordering on just partially justified guesswork, while elsewhere we find a good deal of precise reasoning, some in the form of rigorous mathematical theorems. Both of these are as should be, for without some inspired guesswork we cannot have new ideas as to where to look in order to make genuinely new progress, and without precise mathematical reasoning, no less than in precise observation, we cannot know when we are right — or, more usually, when we are wrong.

An unlikely volume to search for data mining or semantic modeling algorithms or patterns.

But it is one that should be read for the mental exercise and discipline it demands.

The asking price of $138 (US) promises a limited readership.

Plus a greatly diminished impact.

When asked to participate in collections, scholars/authors should ask themselves:

How many books have I read from publisher X?*

*Read, not cited, is the appropriate test. Make your decision appropriately.


If republished as an accessible paperback, may I suggest: “Exploring the Nature of Computation”?

The committee title makes the collage nature of the volume a bit too obvious.

December 31, 2011

FreeBookCentre.Net

Filed under: Books,Computer Science,Mathematics — Patrick Durusau @ 7:20 pm

FreeBookCentre.Net

Books and online materials on:

  • Computer Science
  • Physics
  • Mathematics
  • Electronics

I just scanned a few of the categories and the coverage isn’t systematic. Still, if you need a text for quick study, the price is right.

December 19, 2011

Journal of Computing Science and Engineering

Filed under: Bioinformatics,Computer Science,Linguistics,Machine Learning,Record Linkage — Patrick Durusau @ 8:09 pm

Journal of Computing Science and Engineering

From the webpage:

Journal of Computing Science and Engineering (JCSE) is a peer-reviewed quarterly journal that publishes high-quality papers on all aspects of computing science and engineering. The primary objective of JCSE is to be an authoritative international forum for delivering both theoretical and innovative applied research in the field. JCSE publishes original research contributions, surveys, and experimental studies with scientific advances.

The scope of JCSE covers all topics related to computing science and engineering, with a special emphasis on the following areas: embedded computing, ubiquitous computing, convergence computing, green computing, smart and intelligent computing, and human computing.

I got here from following a sponsor link at a bioinformatics conference.

Then just picking at random from the current issue I see:

A Fast Algorithm for Korean Text Extraction and Segmentation from Subway Signboard Images Utilizing Smartphone Sensors by Igor Milevskiy, Jin-Young Ha.

Abstract:

We present a fast algorithm for Korean text extraction and segmentation from subway signboards using smart phone sensors in order to minimize computational time and memory usage. The algorithm can be used as preprocessing steps for optical character recognition (OCR): binarization, text location, and segmentation. An image of a signboard captured by smart phone camera while holding smart phone by an arbitrary angle is rotated by the detected angle, as if the image was taken by holding a smart phone horizontally. Binarization is only performed once on the subset of connected components instead of the whole image area, resulting in a large reduction in computational time. Text location is guided by user’s marker-line placed over the region of interest in binarized image via smart phone touch screen. Then, text segmentation utilizes the data of connected components received in the binarization step, and cuts the string into individual images for designated characters. The resulting data could be used as OCR input, hence solving the most difficult part of OCR on text area included in natural scene images. The experimental results showed that the binarization algorithm of our method is 3.5 and 3.7 times faster than Niblack and Sauvola adaptive-thresholding algorithms, respectively. In addition, our method achieved better quality than other methods.
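
A rough sketch of those preprocessing stages, with OpenCV standing in for the paper’s implementation. The angle is a placeholder for the sensor-derived value, and the marker-line guidance for text location is omitted:

# Rotate by the (here hard-coded) detected angle, binarize, then treat
# connected components as candidate characters.
import cv2

img = cv2.imread("signboard.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file

angle = 7.5                       # the paper derives this from phone sensors
h, w = img.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
img = cv2.warpAffine(img, M, (w, h))

_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
boxes = [stats[i, :4] for i in range(1, n)]   # skip label 0, the background
print(f"{len(boxes)} candidate character components")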

Secure Blocking + Secure Matching = Secure Record Linkage by Alexandros Karakasidis, Vassilios S. Verykios.

Abstract:

Performing approximate data matching has always been an intriguing problem for both industry and academia. This task becomes even more challenging when the requirement of data privacy rises. In this paper, we propose a novel technique to address the problem of efficient privacy-preserving approximate record linkage. The secure framework we propose consists of two basic components. First, we utilize a secure blocking component based on phonetic algorithms statistically enhanced to improve security. Second, we use a secure matching component where actual approximate matching is performed using a novel private approach of the Levenshtein Distance algorithm. Our goal is to combine the speed of private blocking with the increased accuracy of approximate secure matching.
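
The non-private skeleton of the idea is easy to sketch: a cheap phonetic blocking pass, then exact edit distance only within blocks. The paper’s contribution is doing both securely; that machinery is omitted here, and the Soundex below is simplified:

from itertools import product

def soundex(name):
    """Simplified Soundex: first letter plus up to three digit codes."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    out, last = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            out += code
        last = code
    return (out + "000")[:4]

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

A, B = ["Jellyfish", "Smith"], ["Smyth", "Jellyfishe"]
for x, y in product(A, B):
    if soundex(x) == soundex(y):            # blocking: cheap filter
        print(x, y, levenshtein(x, y))      # matching: exact distance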

A Survey of Transfer and Multitask Learning in Bioinformatics by Qian Xu, Qiang Yang.

Abstract:

Machine learning and data mining have found many applications in biological domains, where we look to build predictive models based on labeled training data. However, in practice, high quality labeled data is scarce, and to label new data incurs high costs. Transfer and multitask learning offer an attractive alternative, by allowing useful knowledge to be extracted and transferred from data in auxiliary domains to help counter the lack of data problem in the target domain. In this article, we survey recent advances in transfer and multitask learning for bioinformatics applications. In particular, we survey several key bioinformatics application areas, including sequence classification, gene expression data analysis, biological network reconstruction and biomedical applications.
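
A minimal instance-transfer sketch, my illustration and far simpler than anything in the survey: pool plentiful auxiliary-domain examples with the few labeled target examples, but down-weight the auxiliary ones:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Auxiliary domain: plentiful, from a slightly shifted distribution.
X_aux = rng.normal(0.0, 1.0, (500, 5))
y_aux = (X_aux.sum(axis=1) > 0).astype(int)
# Target domain: only 20 labeled examples.
X_tgt = rng.normal(0.3, 1.0, (20, 5))
y_tgt = (X_tgt.sum(axis=1) > 1.5).astype(int)

X = np.vstack([X_aux, X_tgt])
y = np.concatenate([y_aux, y_tgt])
weights = np.concatenate([np.full(len(y_aux), 0.2),   # trust auxiliary less
                          np.full(len(y_tgt), 1.0)])

clf = LogisticRegression().fit(X, y, sample_weight=weights)
print("target accuracy:", clf.score(X_tgt, y_tgt))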

And the ones I didn’t list from the current issue are just as interesting and relevant to identity/mapping issues.

This journal is a good example of people who have deliberately reached further across disciplinary boundaries than most.

About the only excuse left for not doing so is the discomfort of being the newbie in a field not your own.

Is that a good enough reason to miss possible opportunities to make critical advances in your home field? (Only you can answer that for yourself. No one can answer it for you.)

November 10, 2011

Machine Learning (Carnegie Mellon University)

Filed under: Computer Science,CS Lectures,Machine Learning — Patrick Durusau @ 6:33 pm

Machine Learning 10-701/15-781, Spring 2011 Carnegie Mellon University by Tom Mitchell.

Course Description:

Machine Learning is concerned with computer programs that automatically improve their performance through experience (e.g., programs that learn to recognize human faces, recommend music and movies, and drive autonomous robots). This course covers the theory and practical algorithms for machine learning from a variety of perspectives. We cover topics such as Bayesian networks, decision tree learning, Support Vector Machines, statistical learning methods, unsupervised learning and reinforcement learning. The course covers theoretical concepts such as inductive bias, the PAC learning framework, Bayesian learning methods, margin-based learning, and Occam’s Razor. Short programming assignments include hands-on experiments with various learning algorithms, and a larger course project gives students a chance to dig into an area of their choice. This course is designed to give a graduate-level student a thorough grounding in the methodologies, technologies, mathematics and algorithms currently needed by people who do research in machine learning.
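
One of the course’s topics in miniature, a decision tree learned from labeled examples, with scikit-learn standing in for the hand-rolled implementations the programming assignments presumably require:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)   # learn from experience
print("held-out accuracy:", tree.score(X_te, y_te))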

I don’t know how other disciplines are faring, but for a variety of CS topics there are enough excellent online materials to complete the equivalent of an undergraduate degree, if not a master’s, in CS.

October 30, 2011

Computer Science Teachers Association

Filed under: Computer Science,Teaching — Patrick Durusau @ 7:05 pm

Computer Science Teachers Association

From the website:

The Computer Science Teachers Association is a membership organization that supports and promotes the teaching of computer science and other computing disciplines. CSTA provides opportunities for K-12 teachers and students to better understand the computing disciplines and to more successfully prepare themselves to teach and learn.

I suspect that the issues that face teachers in more formal classroom settings are largely the same ones that face us when we try to teach topic maps to users. As a matter of fact, aside from adaptations for age, gender, and culture, I would venture to say that the basic teaching techniques remain largely the same over a lifetime.

I can remember very enthusiastic teachers who had great examples that got kids (myself included) interested in literature, math, science, etc., and I can remember those who were putting in the hours until the end of the school day. I saw the same techniques, with some dressing up (we allegedly become more “serious” as we get older), in college and a couple of rounds of graduate school.

Not that we need silly techniques to teach topic maps, but having a few of them isn’t going to hurt anyone or detract from the “gravitas” of the paradigm.

BTW, the price is right for the Computer Science Teachers Association: it’s free! Who knows? You might learn something and perhaps get better at learning with others (what else would teaching be?).

October 20, 2011

Learning Richly Structured Representations From Weakly Annotated Data

Filed under: Artificial Intelligence,Computer Science,Machine Learning — Patrick Durusau @ 6:42 pm

Learning Richly Structured Representations From Weakly Annotated Data by Daphne Koller. (DeGroot Lecture, Carnegie Mellon University, October 14, 2011).

Abstract:

The solution to many complex problems requires that we build up a representation that spans multiple levels of abstraction. For example, to obtain a semantic scene understanding from an image, we need to detect and identify objects and assign pixels to objects, understand scene geometry, derive object pose, and reconstruct the relationships between different objects. Fully annotated data for learning richly structured models can only be obtained in very limited quantities; hence, for such applications and many others, we need to learn models from data where many of the relevant variables are unobserved. I will describe novel machine learning methods that can train models using weakly labeled data, thereby making use of much larger amounts of available data, with diverse levels of annotation. These models are inspired by ideas from human learning, in which the complexity of the learned models and the difficulty of the training instances tackled changes over the course of the learning process. We will demonstrate the applicability of these ideas to various problems, focusing on the problem of holistic computer vision.
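
One simple way to exploit weakly labeled data, far simpler than Koller’s structured models, is self-training: fit on the few labels you have, pseudo-label the rest, keep only the confident guesses, and refit. A minimal sketch on synthetic data:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

labeled = np.zeros(len(X), dtype=bool)
labeled[:30] = True                          # only 30 true labels available

clf = LogisticRegression().fit(X[labeled], y[labeled])
for _ in range(5):                           # self-training rounds
    proba = clf.predict_proba(X[~labeled])[:, 1]
    confident = (proba > 0.9) | (proba < 0.1)
    X_aug = np.vstack([X[labeled], X[~labeled][confident]])
    y_aug = np.concatenate([y[labeled], (proba[confident] > 0.5).astype(int)])
    clf = LogisticRegression().fit(X_aug, y_aug)
print("accuracy on all data:", clf.score(X, y))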

If your topic map application involves computer vision, this is a must see video.

For text/data miners, are you faced with similar issues? Limited amounts of richly annotated training data?

I saw a slide (I will run it down later) that showed text running from plain text to text annotated with ontological data. I mention it because that isn’t what a user sees when they “read” a text. They see implied relationships, references to other subjects, other instances of a particular subject, and all of that passes in the instant of recognition.

Perhaps the problem of correct identification in text is one of too few dimensions rather than too many.

October 17, 2011

Science Conference Proceedings Portal

Filed under: Bibliography,Computer Science,Conferences — Patrick Durusau @ 6:41 pm

Science Conference Proceedings Portal

From the website:

Welcome to the DOE Office of Scientific and Technical Information’s (OSTI) Science Conference Proceedings Portal. This distributed portal provides access to science and technology conference proceedings and conference papers from a number of authoritative sites (professional societies and national labs, largely) whose areas of interest in the physical sciences and technology intersect those of the Department of Energy. Proceedings and papers from scientific meetings can be found in these fields, among others: particle physics, nuclear physics, chemistry, petroleum, aeronautics and astronautics, meteorology, engineering, computer science, electric power, fossil fuels. From here you can simultaneously query any or all of the listed organizations and collections for scientific and technical conference proceedings or papers. Simply enter your search term(s) in the “Search” box, check one or more of the listed sites (or check “Select All”), and click the “Search” button.

One of the conference organizations listed is the Association for Computing Machinery (ACM).

No doubt a very good site, but I wonder about conferences that appear only as Springer publications, for example. Or conferences concerned with computing that appear only in the publications of other publishers or organizations.

Question: In a week, how many indexes that include computer science conferences can you find? How do they differ in terms of coverage?

September 24, 2011

How do I become a data scientist?

Filed under: Computer Science,Data Analysis,Data Science — Patrick Durusau @ 6:58 pm

How do I become a data scientist?

Whether you call yourself a “data scientist” or not is up to you.

Acquiring the skills relevant to your area of interest is the first step towards success with topic maps.

September 21, 2011

Online Master of Science in Predictive Analytics

Filed under: Computer Science,CS Lectures,Degree Program,Library,Prediction — Patrick Durusau @ 7:07 pm

Online Master of Science in Predictive Analytics

As businesses seek to maximize the value of vast new stores of available data, Northwestern University’s Master of Science in Predictive Analytics program prepares students to meet the growing demand in virtually every industry for data-driven leadership and problem solving.

Advanced data analysis, predictive modeling, computer-based data mining, and marketing, web, text, and risk analytics are just some of the areas of study offered in the program. As a student in the Master of Science in Predictive Analytics program, you will:

  • Prepare for leadership-level career opportunities by focusing on statistical concepts and practical application
  • Learn from distinguished Northwestern faculty and from the seasoned industry experts who are redefining how data improve decision-making and boost ROI
  • Build statistical and analytic expertise as well as the management and leadership skills necessary to implement high-level, data-driven decisions
  • Earn your Northwestern University master’s degree entirely online

Just so you know, library schools were offering mostly online degrees a decade or so ago. Nice to see other disciplines catching up. 😉

It would be interesting to see short courses in subject analysis, as in subject identity and the properties that compose a particular identity, in specific domains.

September 16, 2011

Development at the Speed and Scale of Google

Filed under: Computer Science,Dependency,Software — Patrick Durusau @ 6:38 pm

Development at the Speed and Scale of Google by Ashish Kumar.

Interesting overview of development at Google. I included it as background for the question:

How would you use topic maps as part of documenting a development process?

Or perhaps better: Are you using topic maps as part of a development process and if so, how?

Now that I think about it, there may be another way to approach the use of topic maps in software engineering. Harvest the bug reports and push them through text processing tools. I had never thought of bug reports as a genre, but I suspect they have all the earmarks of one.

Thoughts? Comments?

August 31, 2011

Turtles all the way down

Filed under: Computation,Computer Science,Processing,Virtualization — Patrick Durusau @ 7:41 pm

Turtles all the way down

From the website:

Decisive breakthrough from IBM researchers in Haifa introduces efficient nested virtualization for x86 hypervisors

What is nested virtualization and who needs it? Classical virtualization takes a physical computer and turns it into multiple logical, or virtual, computers. Each virtual machine can then interact independently, run its own operating environment, and basically behave like a separate physical resource. Hypervisor software is the secret sauce that makes virtualization possible by sitting in between the hardware and the operating system. It manages how the operating system and applications access the hardware.

IBM researchers found an efficient way to take one x86 hypervisor and run other hypervisors on top of it. For virtualization, this means that a virtual machine can be ‘turned into’ many machines, each with the potential to have its own unique environment, configuration, operating system, or security measures—which can in turn each be divided into more logical computers, and so on. With this breakthrough, x86 processors can now run multiple ‘hypervisors’ stacked, in parallel, and of different types.

This nested virtualization using one hypervisor on top of another is reminiscent of a tale popularized by Stephen Hawking. A little old lady argued with a lecturing scientist and insisted that the world is really a flat plate supported on the back of a giant tortoise. When the scientist asked what the tortoise is standing on, the woman answered sharply “But it’s turtles all the way down!” Inspired by this vision, the researchers named their solution the Turtles Project: Design and Implementation of Nested Virtualization.

This awesome advance has been incorporated into the latest Linux release.
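
If you are curious whether your own Linux/KVM host exposes nesting, the module parameter is readable from sysfs. A small check, assuming an Intel host; on AMD the module is kvm_amd:

from pathlib import Path

param = Path("/sys/module/kvm_intel/parameters/nested")
if param.exists():
    value = param.read_text().strip()
    print("nested virtualization enabled" if value in ("Y", "1")
          else "supported but disabled (try: modprobe kvm_intel nested=1)")
else:
    print("kvm_intel not loaded (or not an Intel/KVM host)")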

This is what I like about IBM, fundamental advances in computer science that can be turned into services for users.

One obvious use of this advance would be to segregate merging models in separate virtual machines. I am sure there are others.

June 22, 2011

Computer Musings by Professor Donald E. Knuth

Filed under: Computer Science — Patrick Durusau @ 6:42 pm

Computer Musings by Professor Donald E. Knuth

From the website:

View Computer Musings, lectures given by Donald E. Knuth, Professor Emeritus of the Art of Computer Programming at Stanford University. The Stanford Center for Professional Development has digitized more than one hundred tapes of Knuth’s musings, lectures, and selected classes and posted them online. These archived tapes resonate with not only his thoughts, but with insights from students, audience members, and other luminaries in mathematics and computer science. They are available to the public free of charge.

Knuth brings both clear understanding and expression to every subject he addresses.

Both are essential for a useful topic map.

June 11, 2011

Advanced Computer Science Courses

Filed under: Computer Science — Patrick Durusau @ 12:41 pm

Advanced Computer Science Courses

Interesting collection of links to advanced computer science courses on the WWW.

All are entertaining and most are of interest to anyone developing topic map applications.

June 10, 2011

PragPub

Filed under: Computer Science,Programming — Patrick Durusau @ 6:34 pm

PragPub

Edited by Michael Swaine, so this is a publication for thinking readers, with a fairly wide range.

That it is also entertaining is just an added bonus.

Not topic map specific but it never hurts to improve one’s thinking skills. (Or at least that is what I was told as a child.)

February 7, 2011

Haskell – Typeclassopedia

Filed under: Computer Science,Examples — Patrick Durusau @ 7:01 am

Typeclassopedia appears in The Monad.Reader Issue 13

I was looking for Calculating Monads with Category Theory by Derek Elkins (in this issue of the Monad.Reader) when I ran across The Typeclassopedia by Brent Yorgey.

From the abstract:

The standard Haskell libraries feature a number of type classes with algebraic or category-theoretic underpinnings. Becoming a fluent Haskell hacker requires intimate familiarity with them all, yet acquiring this familiarity often involves combing through a mountain of tutorials, blog posts, mailing list archives, and IRC logs.

The goal of this article is to serve as a starting point for the student of Haskell wishing to gain a firm grasp of its standard type classes. The essentials of each type class are introduced, with examples, commentary, and extensive references for further reading.

Doesn’t combing through a mountain of tutorials, blog posts, mailing list archives, and IRC logs just cry out for a topic map?

I will be using this article as a jumping-off point for exploring a topic map interface for authoring a topic map about Haskell, as well as asking what an interface for using a topic map about Haskell would look like.

I am quite serious about this being an exploration because I don’t think there is a one-size-fits-all authoring or using/viewing interface.

Your thoughts, suggestions, comments, etc. are most welcome.

First step: I am going to start mapping out this article and not worry about other sources of information. Want to start from a known source and then incorporate other sources.

January 19, 2011

CEUR-WS

Filed under: Computer Science,Conferences — Patrick Durusau @ 1:48 pm

CEUR-WS

From the website:

CEUR-WS.org: fast and cost-free provision of online proceedings for scientific workshops

As of 2011-01-19, 692 proceedings, with 132 of those from meetings in 2010.

An excellent source of research, both recent and not so recent.

January 12, 2011

ACM Digital Library for Computing Professionals

Filed under: Computer Science,Digital Library,Library — Patrick Durusau @ 2:59 pm

ACM Digital Library for Computing Professionals

The ACM has released a new version of its digital library and is offering a free three-month trial of it.

From the announcement:

  • Reorganized author profile pages that present a snapshot of author contributions and metrics of author influence by monitoring publication and citation counts and download usage volume
  • Broadened citation pages for individual articles with tabs for metadata and links to facilitate exploration and discovery of the depth of content in the DL
  • Enhanced interactivity tools such as RSS feeds, bibliographic exports, and social network channels to retrieve data, promote user engagement, and introduce user content
  • Redesigned binders for creating personal, annotatable collections of bibliographies or reading lists, and sharing them with ACM and non-ACM members, or exporting them into standard authoring tools like self-generated virtual PDF publications
  • Expanded table-of-contents opt-in service for all publications in the DL—from ACM and other publishers—that alerts users via email and RSS feeds to new issues of journals, magazines, newsletters, and proceedings.

I mention it here for a couple of reasons:

1) For resources on computing, whether contemporary or older materials, I can’t think of a better starting place for research. I am here more often than not.

2) It sets a benchmark for what is available in terms of digital libraries. If you are going to use topic maps to build a digital library, what would you do better?

October 30, 2010

Edinburgh Research Archives – Informatics

Filed under: Computer Science — Patrick Durusau @ 9:48 am

Edinburgh Research Archives – Informatics.

Another research collection for searching/browsing.

You might want to ask yourself how access to such archives could be improved, faced as we are with the Scylla of searching on one hand and the Charybdis of browsing on the other.

Cornell University Library: Technical Reports and Papers

Filed under: Computer Science — Patrick Durusau @ 9:01 am

Cornell University Library: Technical Reports and Papers is a collection that reaches back almost to the creation of the CS department at Cornell (the department dates from 1965; the collection starts in 1968).

Excellent source of historical and current CS research.

I found it while tracking down Gerald Salton’s work on indexing.

Lots of other goodies as well.
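
Salton’s term-weighting ideas survive today as tf-idf; a minimal sketch of the idea:

import math

docs = ["topic maps merge subjects",
        "subjects have identity",
        "maps of maps"]
tokenized = [d.split() for d in docs]

def tf_idf(term, doc_tokens):
    tf = doc_tokens.count(term) / len(doc_tokens)      # term frequency
    df = sum(term in d for d in tokenized)             # document frequency
    return tf * math.log(len(tokenized) / df)          # ubiquitous terms score 0

for toks in tokenized:
    print({t: round(tf_idf(t, toks), 3) for t in set(toks)})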

