Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

March 10, 2011

MongoDB Development Prize!

Filed under: MongoDB,NoSQL,Topic Maps — Patrick Durusau @ 11:39 am

MongoDB Development Prize!

From the website:

10gen is pleased to announce the first MongoDB developer blog contest of 2011. We’ll be announcing the contest categories at the beginning of each month, and the 10gen engineering team will pick the winner. We hope that these contests will be a way for developers to share their experiences with MongoDB and learn from one another. And we are giving away some pretty cool prizes 🙂 This month we’re teaming up with (mt) Media Temple, who is offering free hosting to the winner.

Check out the blog for the March contest.

Could be a good way to get some PR for topic maps, your project, not to mention yourself.

Topic Maps: From Information to Discourse Architecture

Filed under: Information Theory,Interface Research/Design,Topic Maps — Patrick Durusau @ 10:27 am

Topic Maps: From Information to Discourse Architecture

Lars Johnsen writes in the Journal of Information Architecture that:

Topic Maps is a standards-based technology and model for organizing and integrating digital information in a range of applications and domains. Drawing on notions adapted from current discourse theory, this article focuses on the communicative, or explanatory, potential of topic maps. It is demonstrated that topic maps may be structured in ways that are “text-like” in character and, therefore, conducive to more expository or discursive forms of machine-readable information architecture. More specifically, it is exemplified how a certain measure of “texture”, i.e. textual cohesion and coherence, may be built into topic maps. Further, it is argued that the capability to represent and organize discourse structure may prove useful, if not essential, in systems and services associated with the emerging Socio-Semantic Web. As an example, it is illustrated how topic maps may be put to use within an area such as distributed semantic micro-blogging ….

I very much liked his “expository topic maps” metaphor, although I would extend it to say that topic maps can represent an intersection of “expository” spaces, each unique in its own right.

Highly recommended!

Top Ten Algorithms in Data Mining

Filed under: Algorithms,Data Mining — Patrick Durusau @ 9:13 am

Top Ten Algorithms in Data Mining

A summary of the paper on data mining algorithms that were nominated and voted on by ACM KDD Innovation Award and IEEE ICDM Research Contributions Award winners to produce a top-10 list.

I was curious about how the entries on the list from 2007 have fared.

I searched CiteSeerX, limiting the publication year to 2010.

The results, algorithm followed by citation count, were as follows:

  1. C4.5 – 41
  2. The k-Means algorithm – 86
  3. Support Vector Machines – 64
  4. The Apriori algorithm – 46
  5. Expectation-Maximization – 41
  6. PageRank – 19
  7. AdaBoost – 11
  8. k-Nearest Neighbor Classification – 36*
  9. Naive Bayes – 25
  10. CART (Classification and Regression Trees) – 11

*Searched as “k-Nearest Neighbor”.

Not a scientific study but enough variation to make me curious about:

  1. Broader survey of algorithm citation.
  2. What articles cite more than one algorithm?
  3. Are there any groupings by subject of study?

Not a high priority item but something I want to return to examine more closely.
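
If you would rather poke at several of these algorithms than count their citations, most of them are a few lines away in scikit-learn. A minimal sketch (the library choice and the toy dataset are mine, not part of the original survey; CART stands in for C4.5, which scikit-learn does not implement directly):

# Quick, unscientific comparison of several of the "top ten" algorithms
# on the iris toy dataset using scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier      # CART
from sklearn.svm import SVC                          # Support Vector Machine
from sklearn.naive_bayes import GaussianNB           # Naive Bayes
from sklearn.neighbors import KNeighborsClassifier   # k-Nearest Neighbor

X, y = load_iris(return_X_y=True)

classifiers = {
    "CART (decision tree)": DecisionTreeClassifier(),
    "Support Vector Machine": SVC(),
    "Naive Bayes": GaussianNB(),
    "k-Nearest Neighbor": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")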

Pentaho BI Suite Enterprise Edition (TM/SW Are You Listening?)

Filed under: BI,Linked Data,Marketing,Semantic Web — Patrick Durusau @ 8:12 am

Pentaho BI Suite Enterprise Edition

From the website:

Pentaho is the open source business intelligence leader. Thousands of organizations globally depend on Pentaho to make faster and better business decisions that positively impact their bottom lines. Download the Pentaho BI Suite today if you want to speed your BI development, deploy on-premise or in the cloud or cut BI licensing costs by up to 90%.

There are several open source offerings like this; Talend is another one that comes to mind.

I haven’t looked at its data integration in detail but suspect I know the answer to the question:

Say I have an integration of some BI assets using Pentaho and other BI assets integrated using Talend, how do I integrate those together while maintaining the separately integrated BI assets?

Or for that matter, how do I integrate BI that has been gathered and integrated by others, say Lexis/Nexis?

Interesting, too, to note that this is the sort of user slickness and ease that topic maps and (cough) linked data (see, I knew I could say it) face in the marketplace.

Does it offer all the bells and whistles of more sophisticated subject identity or reasoning approaches?

No, but if it offers all that users are interested in using, what is your complaint?

Both topic maps and semantic web/linked data approaches need to listen more closely to what users want.

As opposed to deciding what users need.

And delivering the latter instead of the former.

Homomorphic Encryption System

Filed under: Encryption — Patrick Durusau @ 8:11 am

The rationale for a homomorphic encryption system (FHE = fully homomorphic encryption):

“Homomorphic” is a mathematical term meaning that if you do two things to a bit of data – say, encrypt it and process it – the order in which you do them won’t matter. In other words, in FHE, data can be processed after it is encrypted, as well as before. This means that a Gmail user could someday send an encrypted search query to the servers in the cloud, and those servers could carry out that query even though the query and the e-mails are completely inscrutable to them. Only the user who holds the secret key can ever decrypt the original data, the query, or the query results.

For another example, imagine how FHE could help the proprietor of an online movie streaming service – call it Hackbuster Video – protect the privacy of customers while still giving them all the features they want. A customer’s request for a new movie would be encrypted, as would the movie itself, meaning that Hackbuster would not know what movie the customer was watching. Despite the privacy, Hackbuster’s servers could still charge the correct amount, offer playback features such as pause and rewind, and even still make recommendations of similar movies, all without ever being privy to the movies involved.

From: Encryption that allows privacy and access to co-exist earns top dissertation award

Craig Gentry solved this problem (he has a law degree as well) in his dissertation at Stanford.

Not quite ready for prime time due to performance issues but definitely a step in the right direction.

Of interest to topic mappers because of the need for secure interaction with remote topic map facilities.
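
To see the “order doesn’t matter” idea in code, here is a toy, utterly insecure illustration of a homomorphic property using textbook RSA in Python (tiny primes, no padding, numbers of my own choosing). It is only multiplicatively homomorphic, so it is nothing like Gentry’s scheme, which supports arbitrary computation on ciphertexts:

# Toy demonstration of a (partially) homomorphic property with textbook RSA.
# Insecure and illustrative only; FHE goes far beyond this single operation.
p, q = 61, 53                       # tiny primes, never use in practice
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m1, m2 = 7, 6
c1, c2 = encrypt(m1), encrypt(m2)

# "Process after encrypting": multiply the ciphertexts...
c_product = (c1 * c2) % n

# ...and decrypting yields the product of the plaintexts.
assert decrypt(c_product) == (m1 * m2) % n
print(decrypt(c_product))           # 42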

Additional resources of interest:

Craig Gentry’s dissertation: A fully homomorphic encryption scheme.

Craig’s “easy” version for ACM members: Computing Arbitrary Functions of Encrypted Data. (CACM, March 2010)

Fields Institute Presentation (slides) http://av.fields.utoronto.ca/slides/08-09/crypto/gentry/download.pdf

Fields Institute Presentation (audio) http://www.fields.utoronto.ca:8080/ramgen/08-09/crypto/gentry.rm

Mahout/Hadoop on Amazon EC2 – part 1 – Installation

Filed under: Hadoop,Mahout — Patrick Durusau @ 8:11 am

Mahout/Hadoop on Amazon EC2 – part 1 – Installation

The first of six posts in which Danny Bickson walks through using Mahout/Hadoop on Amazon EC2.

Other posts in the series:

Mahout on Amazon EC2 – part 2 – Running Hadoop on a single node

Mahout on Amazon EC2 – part 3 – Debugging

Hadoop on Amazon EC2 – Part 4 – Running on a cluster

Mahout on Amazon EC2 – part 5 – installing Hadoop/Mahout on high performance instance (CentOS/RedHat)

Tuning Hadoop configuration for high performance – Mahout on Amazon EC2

While you are here, take some time to look around. Lots of other interesting material on “distributed/parallel large scale algorithms and applications.”

11th International Conference on Document Analysis and Recognition

Filed under: Conferences — Patrick Durusau @ 8:09 am

11th International Conference on Document Analysis and Recognition

From the website:

We are pleased to announce that the Eleventh International Conference on Document Analysis and Recognition (ICDAR 2011), sponsored by the International Association for Pattern Recognition (IAPR) TC-10 (Graphics Recognition) and TC-11 (Reading Systems), will be held at BEIJING FRIENDSHIP HOTEL, Beijing, China during September 18-21, 2011.

ICDAR 2011 is organized by the Center for Intelligent Image and Document Information Processing (CIDIP) of Tsinghua University, the Institute of Automation of Chinese Academy of Sciences (CASIA) and the Chinese Association of Automation (CAA). The CIDIP group has long been devoted to research and development in the fields of document analysis, face recognition and biometric technology. CASIA is one of the earliest institutions in China involved in pattern recognition and document analysis research.

ICDAR is an outstanding international forum for researchers and practitioners at all levels of experience for identifying, encouraging and exchanging ideas on the state-of-the-art in document analysis, understanding, retrieval, and performance evaluation, including various forms of multimedia documents. ICDAR 2011 will be the 11th Conference in the series. The previous ones of this series were ICDAR’91 in Saint Malo (France), ICDAR’93 in Tsukuba (Japan), ICDAR’95 in Montreal (Canada), ICDAR’97 in Ulm (Germany), ICDAR’99 in Bangalore (India), ICDAR’01 in Seattle (USA), ICDAR’03 in Edinburgh (Scotland), ICDAR’05 in Seoul (Korea), ICDAR’07 in Curitiba (Brazil), and ICDAR’09 in Barcelona (Spain).

Since topic maps grew out of document analysis/indexing, I don’t suppose document analysis conferences are too far afield. 😉

March 9, 2011

Getting Genetics Done

Filed under: Bioinformatics,Biomedical,Uncategorized — Patrick Durusau @ 4:28 pm

Getting Genetics Done

Interesting blog site for anyone interested in genetics research and/or data mining issues related to genetics.

If you are looking for a community building exercise, see the Journal club entries.

Neo4j Spatial, Part 1: Finding things close to other things

Filed under: Geographic Data,Geographic Information Retrieval,Neo4j — Patrick Durusau @ 4:25 pm

Neo4j Spatial, Part 1: Finding things close to other things

Start of a great series of posts on geographic information processing.

Topic maps for travel, military, disaster and other applications will face this type of issue.

Not to mention needing to map across different systems with different approaches to resolving these issues.
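
Before any spatial index enters the picture, “finding things close to other things” comes down to distance on a sphere. A minimal, Neo4j-free sketch in Python (the coordinates and the brute-force scan are mine; Neo4j Spatial adds real indexing such as R-trees so you never scan everything):

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

places = {
    "Oslo":      (59.91, 10.75),
    "Stockholm": (59.33, 18.07),
    "Leipzig":   (51.34, 12.37),
}

def things_close_to(lat, lon, max_km):
    """Brute-force 'find things close to other things'; a spatial index
    replaces this linear scan at scale."""
    return [name for name, (plat, plon) in places.items()
            if haversine_km(lat, lon, plat, plon) <= max_km]

print(things_close_to(59.91, 10.75, 500))   # Oslo and Stockholm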

Big Data Cookbook

Filed under: BigData — Patrick Durusau @ 4:23 pm

Big Data Cookbook

Very short but interesting overview of the components to use with “big data.”

CouchDB: JSON, HTTP & MapReduce

Filed under: CouchDB,MapReduce — Patrick Durusau @ 4:23 pm

CouchDB: JSON, HTTP & MapReduce

Good introductory presentation focused on CouchDB.

Unfortunately posted to another annoying slide service that repeats popup ads over and over and over….
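
For those who would rather skip the popups, the gist of the presentation’s three ingredients fits in a few HTTP calls. A sketch against a local CouchDB using Python’s requests library (the database name and documents are invented, and a CouchDB with no authentication is assumed; the map function is a JavaScript string stored in a design document and _count is CouchDB’s built-in reduce):

import requests

BASE = "http://localhost:5984"   # assumes a local, unsecured CouchDB
DB = f"{BASE}/talks"             # hypothetical database name

# Create a database and a couple of JSON documents over plain HTTP.
requests.put(DB)
requests.put(f"{DB}/talk-1", json={"speaker": "Alice", "topic": "CouchDB"})
requests.put(f"{DB}/talk-2", json={"speaker": "Bob",   "topic": "MapReduce"})

# A design document holding a map/reduce view: map emits one row per speaker,
# the built-in _count reduce tallies them.
design = {
    "views": {
        "talks_per_speaker": {
            "map": "function(doc) { emit(doc.speaker, 1); }",
            "reduce": "_count",
        }
    }
}
requests.put(f"{DB}/_design/stats", json=design)

# Query the view and get JSON back.
rows = requests.get(f"{DB}/_design/stats/_view/talks_per_speaker",
                    params={"group": "true"}).json()
print(rows)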

Topincs Style/Store

Filed under: Topincs — Patrick Durusau @ 4:23 pm

Topincs Styles/Store

Something to look forward to in Topincs 5.4.0:

It should be possible to customize the style of a Topincs installation and a Topincs Store. When the configuration parameters custom.css and custom.header are set, the files store/style/custom.css and store/style/header.html should be used.

Looking forward to it!

March 8, 2011

Topic Maps: Less Garbage In, Less Garbage Out

Filed under: Authoring Topic Maps,Marketing,Topic Maps — Patrick Durusau @ 10:03 am

The latest hue and cry over changes to the Google search algorithm (search for “Google farmer update”; I don’t want to dignify any of it with a link) seems like a golden advertising opportunity for topic maps.

The slogan?

Topic Maps: Less Garbage In, Less Garbage Out

That is one of the value-adds of any curated data source, isn’t it?

Instead of say 200,000 “hits” post-Farmer update on some subject, what if a topic map offered 20?

Or 0.01% of the 200,000?

Of course, there are those who would rush forward to say that I might miss an important email or blog posting on subject X.

True, but if it were truly an important email or blog posting then a curator is likely to have picked it up. Yes?

The point of curation is to save users the time and effort of winnowing (wading?) through information garbage.

Here’s a topic map construction idea:

  1. Capture all the out-going search requests from your location.
  2. Throw away all the porn searches.
  3. Create a topic map of the useful answers to the remaining searches.
  4. Use filtering software to block access to search engines and/or redirect to the topic map.

Your staff is looking for answers to work related questions, yes?

A curated resource, like a topic map, would save them time and effort in finding answers to those questions.
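
As a hedged sketch of step 1 above, here is what pulling outgoing search queries from a proxy log might look like in Python. The log format, the search-engine matching and the stand-in filter list are all invented for illustration; real proxy or firewall logs will differ:

import re
from collections import Counter
from urllib.parse import urlparse, parse_qs

BLOCKED_TERMS = {"porn"}        # stand-in for a real filter list

def extract_queries(log_lines):
    """Pull search terms out of proxy log lines containing search-engine URLs.
    Assumes the URL appears as a whitespace-delimited token on the line."""
    queries = Counter()
    for line in log_lines:
        for token in line.split():
            url = urlparse(token)
            if "google." in url.netloc or "bing." in url.netloc:
                q = parse_qs(url.query).get("q", [""])[0].strip().lower()
                if q and not (set(re.findall(r"\w+", q)) & BLOCKED_TERMS):
                    queries[q] += 1
    return queries

sample = [
    '10.0.0.7 - GET http://www.google.com/search?q=topic+maps+merging HTTP/1.1',
    '10.0.0.9 - GET http://www.bing.com/search?q=couchdb+views HTTP/1.1',
]
# The most frequent queries are the first candidates for curated topic map entries.
print(extract_queries(sample).most_common())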

Thomaner Project

Filed under: Examples,Music Retrieval — Patrick Durusau @ 9:58 am

Thomaner Project

Press coverage of a project in connection with the 800th anniversary of the famous boys’ choir, the Thomaner.

The topic map project is a database of the choir’s repertoire from 1808 to 2008.

The German newspaper report notes that only 20 years of the 200-year span are complete.

Funding is being sought to complete the remainder.

Not exactly the Rolling Stones or Lady Gaga, is it?

Tenth ACM SIGPLAN Erlang Workshop

Filed under: Conferences,Erlang — Patrick Durusau @ 9:57 am

Tenth ACM SIGPLAN Erlang Workshop

From the website:

Erlang is a concurrent, distributed functional programming language aimed at systems with requirements on massive concurrency, soft real time response, fault tolerance, and high availability. It has been available as open source for over 10 years, creating a community that actively contributes to its already existing rich set of libraries and applications. Originally created for telecom applications, its usage has spread to other domains including e-commerce, banking, databases, and computer telephony and messaging.

Erlang programs are today among the largest applications written in any functional programming language. These applications offer new opportunities to evaluate functional programming and functional programming methods on a very large scale and suggest new problems for the research community to solve.

This workshop will bring together the open source, academic, and industrial programming communities of Erlang. It will enable participants to familiarize themselves with recent developments on new techniques and tools tailored to Erlang, novel applications, draw lessons from users’ experiences and identify research problems and common areas relevant to the practice of Erlang and functional programming.

Important Dates

Submission deadline: Friday, June 3, 2011
Author notification: Friday, June 17, 2011
Final submission for the publisher: Friday, July 13, 2011
Workshop date: Friday, September 23, 2011

The 16th ACM SIGPLAN International Conference on Functional Programming (ICFP 2011)

Filed under: Conferences,Functional Programming — Patrick Durusau @ 9:57 am

The 16th ACM SIGPLAN International Conference on Functional Programming (ICFP 2011)

From the call for papers:

ICFP 2011 seeks original papers on the art and science
of functional programming. Submissions are invited on all
topics from principles to practice, from foundations to
features, and from abstraction to application. The scope
includes all languages that encourage functional
programming, including both purely applicative and
imperative languages, as well as languages with objects,
concurrency, or parallelism. Particular topics of
interest include

  • Language Design: type systems; concurrency and
    distribution; modules; components and composition;
    metaprogramming; relations to imperative,
    object-oriented, or logic programming;
    interoperability
  • Implementation: abstract machines; virtual
    machines; interpretation; compilation; compile-time and
    run-time optimization; memory management;
    multi-threading; exploiting parallel hardware;
    interfaces to foreign functions, services, components,
    or low-level machine resources
  • Software-Development Techniques: algorithms and
    data structures; design patterns; specification;
    verification; validation; proof assistants; debugging;
    testing; tracing; profiling
  • Foundations: formal semantics; lambda calculus;
    rewriting; type theory; mathematical logic; monads;
    continuations; delimited continuations; global,
    delimited, or local effects
  • Transformation and Analysis: abstract
    interpretation; partial evaluation; program
    transformation; program calculation; program proofs;
    normalization by evaluation
  • Applications and Domain-Specific Languages:
    symbolic computing; formal-methods tools; artificial
    intelligence; systems programming; distributed-systems
    and web programming; hardware design; databases; XML
    processing; scientific and numerical computing;
    graphical user interfaces; multimedia programming;
    scripting; system administration; security;
    education
  • Functional Pearls: elegant, instructive, and fun
    essays on functional programming
  • Experience Reports: short papers that provide
    evidence that functional programming really works or
    describe obstacles that have kept it from working in a
    particular application

Important Dates:

Titles, abstracts & keywords due: Thursday 17 March 2011 at 14:00 UTC
Submissions due: Thursday 24 March 2011 at 14:00 UTC
Author response: Tuesday & Wednesday 17-18 May
Notification: Monday 30 May 2011
Final copy due: Friday 01 July 2011
Conference: Monday-Wednesday 19-21 September 2011

12th IEEE International Conference on Information Reuse and Integration (IEEE IRI-2011)

Filed under: Conferences,Information Integration,Information Reuse — Patrick Durusau @ 9:56 am

12th IEEE International Conference on Information Reuse and Integration (IEEE IRI-2011)

From the announcement:

Given the emerging global Information-centric IT landscape that has tremendous social and economic implications, effectively processing and integrating humongous volumes of information from diverse sources to enable effective decision making and knowledge generation have become one of the most significant challenges of current times. Information Reuse and Integration (IRI) seeks to maximize the reuse of information by creating simple, rich, and reusable knowledge representations and consequently explores strategies for integrating this knowledge into systems and applications. IRI plays a pivotal role in the capture, representation, maintenance, integration, validation, and extrapolation of information; and applies both information and knowledge for enhancing decision-making in various application domains.

This conference explores three major tracks: information reuse, information integration, and reusable systems. Information reuse explores the theory and practice of optimizing representation; information integration focuses on innovative strategies and algorithms for applying integration approaches in novel domains; and reusable systems focus on developing and deploying models and corresponding processes that enable Information Reuse and Integration to play a pivotal role in enhancing decision-making processes in various application domains.

Important dates:

March 28, 2011 Submission of abstract (Recommended)
April 5, 2011 Paper submission deadline
May 14, 2011 Notification of acceptance
May 28, 2011 Camera-ready paper due
May 28, 2011 Presenting author registration due
June 30, 2011 Advance (discount) registration for general public and other co-author
July 15, 2011 Hotel reservation (special discount rate) closing date
August 3-5, 2011 Conference events

Alabama is a foreign country – Post

Filed under: Humor — Patrick Durusau @ 9:55 am

Alabama is a foreign country

I have to admit, being from the American South, the title caught my attention.

Amusing report of data errors that listed Alabama as a foreign country (an old record?), residents of Australia being located in Austria (is that you, Robert?), and similar follies.

Makes me wonder about the quality of data for more serious purposes.

jtm-writer

Filed under: JTM — Patrick Durusau @ 9:54 am

jtm-writer

The Topic Maps Lab has released version 1.0 of jtm-writer, a deserializer for the JSON Topic Maps (JTM) notation.

A pointer to the syntax would be helpful: JSON Topic Maps 1.0 by Robert Cerny.

Summify’s Technology Examined

Filed under: Data Analysis,Data Mining,MongoDB,MySQL,Redis — Patrick Durusau @ 9:54 am

Summify’s Technology Examined

Phil Whelan writes an interesting review of the underlying technology for Summify.

Many of those same components are relevant to the construction of topic map based services.

Interesting that Summify uses MySQL, Redis and MongoDB.

I rather like the idea of using the best tool for a particular job.

Worth a close read.

Toward Topic Search on the Web – Paper

Filed under: Probabilistic Models,Search Engines — Patrick Durusau @ 9:53 am

Toward Topic Search on the Web was reported by Infodocket.com.

Report from Microsoft researchers on “…framework that improves web search experiences through the use of a probabilistic knowledge base.”

Interesting report.

Even more so if you think about topic maps as declarative knowledge bases and consider the use of probabilistic knowledge bases as a means to add to the former.

BTW, user satisfaction was used as the criterion for success.

Now that is local semantics.

Probably at just about the right level as well.

Comments?

March 7, 2011

Rock-Paper-Scissors

Filed under: Artificial Intelligence,Humor,Interface Research/Design — Patrick Durusau @ 7:54 am

Rock-Paper-Scissors

From the nice people at Flowing Data.

Not entirely silly to post to a topic maps forum.

Watch the dedication of other users to “beat” the machine.

Now imagine extracting that sort of dedication in an interface that had a more meaningful purpose.

Such as creating representatives for subjects and/or rules for mapping between the same.

Sure, it sounds boring and tedious when I say it; the question is how to make it dynamic and exciting.

Succeed at that, even partially, and you will have to fend off investors with a stick.

Hibernate

Filed under: Hibernate — Patrick Durusau @ 7:15 am

Hibernate

From the website:

Relational Persistence for Java and .NET

Historically, Hibernate facilitated the storage and retrieval of Java domain objects via Object/Relational Mapping. Today, Hibernate is a collection of related projects enabling developers to utilize POJO-style domain models in their applications in ways extending well beyond Object/Relational Mapping.

Interesting factoid from the documentation:

A RDBMS defines exactly one notion of ‘sameness’: the primary key. Java, however, defines both object identity (a==b) and object equality (a.equals(b)).

Unfortunately, neither an RDBMS nor Java defines tests for the identity of the subjects they represent. Nor does either appear to recognize that the structures in both are subjects in their own right.
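
The same gap is easy to show in Python, standing in for the Java example: identity and equality are language-level notions, and neither says anything about whether two records represent the same subject. A toy sketch (the Person class and its email-based subject test are mine, not Hibernate’s or Java’s):

class Person:
    """Two notions the language gives us (is / ==) plus one it does not:
    a test for whether two objects represent the same subject."""

    def __init__(self, row_id, name, email):
        self.row_id = row_id      # plays the role of the RDBMS primary key
        self.name = name
        self.email = email

    def __eq__(self, other):      # object equality, like Java's equals()
        return isinstance(other, Person) and self.row_id == other.row_id

    def same_subject_as(self, other):
        # A subject-identity test we have to supply ourselves, here by
        # treating the email address as a subject identifier.
        return isinstance(other, Person) and self.email == other.email

a = Person(1, "P. Durusau", "patrick@example.org")
b = Person(2, "Patrick Durusau", "patrick@example.org")   # different row, same person

print(a is b)                 # False: object identity (Java's a == b)
print(a == b)                 # False: equality keyed to the primary key
print(a.same_subject_as(b))   # True: same subject, despite both tests failing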

SearchBlox

Filed under: Lucene — Patrick Durusau @ 7:13 am

SearchBlox

From the website:

SearchBlox is an out-of-the-box Enterprise Search Solution built on top of Apache Lucene. It is fast to deploy, easy to manage and available for both on-premise and cloud deployment.

Best of all, it is free. No limitations. No restrictions.

Do note that downloading SearchBlox did not require registration or surrender of my phone number.

The usual questions apply: ease of use, ease of integration across deployments, etc.

I am gathering up a rather large data set for other purposes that I will be using to test this and other search software.

Lucid Imagination

Filed under: Lucene,Solr — Patrick Durusau @ 7:12 am

Lucid Imagination

Enterprise level search capabilities based entirely upon Apache Solr/Lucene software.

I had to register to download the installer. I will be installing it later this week or next; expect posts on how that goes.

Was annoying that I had to provide my phone number as part of the registration.

It isn’t like I will forget how to contact them should I encounter a need for their services.

Nor am I likely to be receptive to pesky calls/emails asking, “Have you tried our software yet?”

I actually have a life separate and apart from the various software packages that I use/evaluate so I tend to work at my schedule.

Two aspects of interest:

First, simply using this as an appliance for indexing/searching in the usual way.

Second, how difficult would it be to leverage that indexing/searching across Lucid installations?

That is, two separate and distinct enterprises have used Lucid to index/search mission-critical materials that now require merging.

Do we toss the time and experience that went into the separate indexes and build anew? Or can we leverage that investment?
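
On the second question, a blunt first cut at leveraging search across installations is to fan a query out to both Solr instances over the classic HTTP API and merge by score. A sketch (the host names are placeholders, and merging raw scores from different indexes is a crude approximation, not a real federation or identity-resolution strategy):

import requests

# Two hypothetical Lucid/Solr installations, each with its own index.
ENDPOINTS = [
    "http://search1.example.com:8983/solr/select",
    "http://search2.example.com:8983/solr/select",
]

def federated_search(query, rows=10):
    """Fan the query out to every installation and merge naively by score."""
    merged = []
    for url in ENDPOINTS:
        resp = requests.get(url, params={"q": query, "wt": "json",
                                         "fl": "*,score", "rows": rows})
        for doc in resp.json()["response"]["docs"]:
            merged.append((doc.get("score", 0.0), url, doc))
    # Scores from different indexes are not strictly comparable; this is
    # a first approximation only.
    merged.sort(key=lambda item: item[0], reverse=True)
    return merged[:rows]

for score, source, doc in federated_search("topic maps"):
    print(f"{score:.3f}  {source}  {doc.get('id')}")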

NHibernate Search Tutorial with Lucene.Net and NHibernate 3.0

Filed under: .Net,Hibernate,Lucene,NHibernate — Patrick Durusau @ 7:11 am

NHibernate Search Tutorial with Lucene.Net and NHibernate 3.0

From the website:

Here’s another quickstart tutorial on NHibernate Search for NHibernate 3.0 using Lucene.Net. We’re going to be using Fluent NHibernate for NHibernate but attributes for NHibernate Search.

Uses NHibernate:

NHibernate is a mature, open source object-relational mapper for the .NET framework. It’s actively developed, fully featured and used in thousands of successful projects.

For those of you who are more comfortable in a .Net environment.

Another Python Graph Library (APGL)

Filed under: Graphs,Software — Patrick Durusau @ 7:10 am

Another Python Graph Library (APGL)

From the website:

Another Python Graph Library is a simple, fast and easy to use graph library with some machine learning features. The main characteristics are as follows:

  • Directed, undirected and multigraphs designed under a hierarchical class structure, using numpy and scipy matrices for fast linear algebra computations. The PySparseGraph and SparseGraph classes can scale up to 1,000,000s of vertices and edges on a standard PC.
  • Set operations including finding subgraphs, complements, unions, intersections of graphs.
  • Graph properties such as diameter, geodesic distance, degree distributions, eigenvector betweenness, and eigenvalues.
  • Other algorithms: search, Floyd-Warshall, Dijkstra’s algorithm
  • Erdos-Renyi, Small-World and Albert-Barabasi and Kronecker graph generation
  • Write to Pajek, and simple CSV files
  • Machine learning features – data preprocessing, kernels, PCA, KCCA, wrappers for LibSVM, and some mlpy learning algorithms
  • Unit tested using the Python unittest framework

As you can probably tell from my posts, I have a great deal of interest in graph approaches to topic maps.

Pointers to graph work relevant to topic maps appreciated!
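
For a feel of the operations in that feature list, here is what a few of them look like in networkx rather than APGL itself (swapped in only because its API is more widely known; APGL’s classes differ): generate an Erdos-Renyi graph, then ask for diameter, degrees, shortest paths and a subgraph.

import networkx as nx

# Generate a random graph (Erdos-Renyi), one of the generators APGL also lists.
g = nx.erdos_renyi_graph(n=50, p=0.1, seed=42)

# Graph properties of the kind mentioned above.
if nx.is_connected(g):
    print("diameter:", nx.diameter(g))
    # Shortest paths, another item from the list.
    print("shortest path 0 -> 10:", nx.shortest_path(g, source=0, target=10))
print("degree of node 0:", g.degree[0])

# A subgraph, one of the set operations mentioned.
sub = g.subgraph(range(10))
print("subgraph:", sub.number_of_nodes(), "nodes,", sub.number_of_edges(), "edges")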

Microsoft Academic Search

Filed under: Dataset,Search Engines — Patrick Durusau @ 7:09 am

Microsoft Academic Search

I ran across a reference to this search engine in a thread bitching about ranking of publications, etc.

I suppose, but my first reaction was that of a kid in a candy store.

Hard to know which of:

  • Algorithms & Theory
  • Artificial Intelligence
  • Bioinformatics & Computational Biology
  • Computer Education
  • Computer Vision
  • Databases
  • Data Mining
  • Distributed & Parallel Computing
  • Graphics
  • Hardware & Architecture
  • Human-Computer Interaction
  • Information Retrieval
  • Machine Learning & Pattern Recognition
  • Multimedia
  • Natural Language & Speech
  • Networks & Communications
  • Operating Systems
  • Programming Languages
  • Real-Time & Embedded Systems
  • Scientific Computing
  • Security & Privacy
  • Simulation
  • Software Engineering
  • World Wide Web
  • Computer Science Overall
  • Other Domains Overall

…to choose first!

As far as the critics of this site go, I have to agree it isn’t everything it could be.

But that is a good thing because it leaves Microsoft and everyone else something to strive for.

I don’t have any illusions about corporate entities, including Microsoft.

But, all of them have people working for them who do good work, that benefits the public interest, and who are doing so while working for a corporate entity.

I know that because I know people who work for a number of the larger software corporate entities.

I am sure you know some of them too.

NPTEL – Computer Science

Filed under: CS Lectures,Dataset — Patrick Durusau @ 7:08 am

NPTEL – Computer Science

An extensive set of computer science lectures courtesy of a joint venture of the Indian Institutes of Technology and the Indian Institute of Science.

I listed this both as a CS lecture and a dataset as it occurs to me that it would be really useful to have a topic map of online CS courses.

If a student doesn’t “get” a concept when it is explained in one lecture, another approach, by another lecturer, could do the trick.

Not something I am going to get to soon, but it is the type of framework I need to create to capture that sort of information as I encounter it.

Or, even better, a framework to which others could contribute as they find such resources.

Seed it with courses from NPTEL, MIT, Stanford and maybe a couple of other places. Enough to make it worthwhile on its own.
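
To make the framework concrete, a toy sketch of the kind of record such a map might keep per concept (all course names and URLs below are invented placeholders):

# A toy record structure for "same concept, multiple lectures".
concept_map = {
    "hash tables": [
        {"institution": "NPTEL", "course": "Data Structures",
         "lecture": "http://nptel.example/ds/lecture-12"},
        {"institution": "MIT OCW", "course": "Introduction to Algorithms",
         "lecture": "http://ocw.example/algorithms/lecture-8"},
    ],
}

def alternatives(concept):
    """All lectures covering a concept; a student who did not 'get' one
    explanation can be pointed at another."""
    return concept_map.get(concept.lower(), [])

for entry in alternatives("Hash Tables"):
    print(entry["institution"], "-", entry["course"], "-", entry["lecture"])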

Something to think about.

Shout out if you are interested or want to take the lead.

Scala Language Tour

Filed under: Scala,Software — Patrick Durusau @ 7:07 am

Scala Language Tour

A more recent (2010) tour of the Scala language.

Take the time, it will be time well spent.
