Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

November 12, 2014

Online Master of Science In Computer Science [Georgia Tech]

Filed under: Computer Science,Education — Patrick Durusau @ 11:24 am

Online Master of Science In Computer Science

From the homepage:

The Georgia Institute of Technology, Udacity and AT&T have teamed up to offer the first accredited Master of Science in Computer Science that students can earn exclusively through the Massive Open Online Course (MOOC) delivery format and for a fraction of the cost of traditional, on-campus programs.

This collaboration—informally dubbed “OMS CS” to account for the new delivery method—brings together leaders in education, MOOCs and industry to apply the disruptive power of massively open online teaching to widen the pipeline of high-quality, educated talent needed in computer science fields.

Whether you are a current or prospective computing student, a working professional or simply someone who wants to learn more about the revolutionary program, we encourage you to explore the Georgia Tech OMS CS: the best computing education in the world, now available to the world.

A little more than a year old, the Georgia Tech OMS CS program continues to grow. In One Down, Many to Go, Carl Straumsheim reports high marks for the program from students and from administrators who are feeling their way along in this experiment in education delivery.

At an estimated cost of less than $7,000 for a Master of Science in Computer Science, this program has the potential to change the complexion of higher education in computer science at least.

How many years (decades?) it will take for this delivery model to trickle down to the humanities is uncertain. J.J. O’Donnell made waves in 2004 by teaching Augustine: the Seminar to a global audience, but there has been no rush of humanities scholars to follow his example.

August 15, 2014

XPERT (Xerte Public E-learning ReposiTory)

Filed under: Education,Open Source — Patrick Durusau @ 12:43 pm

XPERT (Xerte Public E-learning ReposiTory)

From the about page:

XPERT (Xerte Public E-learning ReposiTory) project is a JISC funded rapid innovation project (summer 2009) to explore the potential of delivering and supporting a distributed repository of e-learning resources created and seamlessly published through the open source e-learning development tool called Xerte Online Toolkits. The aim of XPERT is to progress the vision of a distributed architecture of e-learning resources for sharing and re-use.

Learners and educators can use XPERT to search a growing database of open learning resources suitable for students at all levels of study in a wide range of different subjects.

Creators of learning resources can also contribute to XPERT via RSS feeds created seamlessly through local installations of Xerte Online Toolkits. Xpert has been fully integrated into Xerte Online Toolkits, an open source content authoring tool from The University of Nottingham.

Other useful links:

Xerte Project Toolkits

Xerte Community.

You may want to start with the browse option because the main interface is rather stark.

The Google interface is “stark” in the same sense, but Google has indexed a substantial portion of all online content, so I am not very likely to draw a blank. With Xpert’s base of 364,979 resources, the odds of my drawing a blank are far higher.

The keywords appear in three distinct alphabetical segments: a run starts with a digit or “a” and proceeds to the end of the alphabet, then another such run begins, one after the other. Hebrew and what appears to be Chinese appear at the end of the keyword list, in no particular order. I don’t know if that is an artifact of the software or of its use.

The same repeated alphabetical segments occur under Author. Under Type there are some true types, such as “color print,” but the majority of the listing is file sizes in bytes. I am not sure why a file size would be a “type.” Institution has similar issues.

If you are looking for a volunteer opportunity, helping XPERT with alphabetization would enhance the browsing experience for the resources it has collected.
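The fix is mostly mechanical: collect the separately sorted segments and re-sort them as one list. A minimal Python sketch of the idea, using made-up keywords rather than actual XPERT data:

```python
# Hypothetical keyword segments, each already alphabetized on its own,
# standing in for the repeated a-to-z runs seen in the XPERT browse list.
segments = [
    ["algebra", "biology", "zoology"],
    ["astronomy", "chemistry"],
    ["1984", "anatomy"],
]

# Flatten the segments and re-sort once, case-insensitively.
# Digits sort before letters, as in a typical browse listing.
keywords = sorted(
    (kw for seg in segments for kw in seg),
    key=str.casefold,
)
# keywords is now a single alphabetical run:
# ['1984', 'algebra', 'anatomy', 'astronomy', 'biology', 'chemistry', 'zoology']
```

A production fix would sort with locale- or Unicode-aware collation so the Hebrew and Chinese keywords land in a defined order too, rather than at the end.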

I first saw this in a tweet by Graham Steel.

August 11, 2014

Getting Good Tip

Filed under: Education,Learning — Patrick Durusau @ 3:49 pm

I first saw:

“if you want to get good at R (or anything really) the trick is to find a reason to use it every day”

in a tweet by Neil Saunders, quoting Tony Ojeda in How to Transition from Excel to R.

That sounds more doable than saying: “I will practice R for an hour every day this week.” Some days you will and some days you won’t. But if you find a reason to use R (or anything else) once a day, I suspect it will creep into your regular routine.

Enjoy!

June 1, 2014

Pen vs. Keyboard: Choose Wisely

Filed under: Education,Interface Research/Design — Patrick Durusau @ 6:12 pm

Students retain information better with pens than laptops by Laura Sanders.

From the post:

When it comes to taking notes, the old-fashioned way might be best. Students who jotted down notes by hand remembered lecture material better than their laptop-wielding peers did, researchers report April 23 in Psychological Science.

People taking notes on laptops have a shallower grasp of a subject than people writing with their hands, and not just because laptops distract users with other activities such as web surfing, the new study suggests.
….

The study in question: P.A. Mueller and D.M. Oppenheimer. The pen is mightier than the keyboard: advantages of longhand over laptop note taking. Psychological Science. Published online April 23, 2014. doi: 10.1177/0956797614524581.

Laura lists some resources for further reading.

What do you think this study means for the design of UIs?

I ask because some topic map UIs will be for information retrieval, where conceptual understanding isn’t at issue and others will be for imparting conceptual understandings.

What would you do differently in UI terms for those cases and just as importantly, why?

I first saw this in a tweet by Carl Anderson.

May 15, 2014

Speak and learn with Spell Up, our latest Chrome Experiment

Filed under: Education,Language — Patrick Durusau @ 7:15 pm

Speak and learn with Spell Up, our latest Chrome Experiment by Xavier Barrade.

From the post:

As a student growing up in France, I was always looking for ways to improve my English, often with a heavy French-to-English dictionary in tow. Since then, technology has opened up a world of new educational opportunities, from simple searches to Google Translate (and our backpacks have gotten a lot lighter). But it can be hard to find time and the means to practice a new language. So when the Web Speech API made it possible to speak to our phones, tablets and computers, I got curious about whether this technology could help people learn a language more easily.

That’s the idea behind Spell Up, a new word game and Chrome Experiment that helps you improve your English using your voice—and a modern browser, of course. It’s like a virtual spelling bee, with a twist.

This rocks!

If Google is going to open source another project and support it, Spell Up should be it.

The machine pronunciation could use some work, or at least it seems that way to me. (My hearing may be a factor there.)

It makes me wonder about the impact Spell Up could have for less commonly taught languages.

April 25, 2014

SlideRule [Online Course Collection]

Filed under: Education — Patrick Durusau @ 7:18 pm

SlideRule

From the about page:

Education is changing, with great educators from around the world increasingly putting their amazing courses online. We believe we are in the early days of a revolution that will not only increase access to great education, but also transform the way people learn.

SlideRule is our way of contributing to the movement. We help you discover the world’s best online courses in every subject – courses that your friends and thousands of other learners have loved.

I counted thirty-three (33) content providers. Some of the content is free, some not.

It looks extensive enough to be worth mentioning.

February 22, 2014

Project Laboratory in Mathematics

Filed under: Education,Mathematics — Patrick Durusau @ 3:17 pm

Project Laboratory in Mathematics by Prof. Haynes Miller, Dr. Nat Stapleton, Saul Glasman, and Susan Ruff.

From the description:

Project Laboratory in Mathematics is a course designed to give students a sense of what it’s like to do mathematical research. In teams, students explore puzzling and complex mathematical situations, search for regularities, and attempt to explain them mathematically. Students share their results through professional-style papers and presentations.

This course site was created specifically for educators interested in offering students a taste of mathematical research. This site features extensive description and commentary from the instructors about why the course was created and how it operates.

Aside from the introductory lecture by Prof. Miller, the next best parts are the two problem sets, the editing process, and the resulting final paper.

Something like this, adjusted for grade level, looks far more valuable than rote coding exercises.

January 23, 2014

The Mole

Filed under: Cheminformatics,Education — Patrick Durusau @ 3:29 pm

The Mole

From the homepage:

The Mole is the Royal Society of Chemistry’s magazine for students, and anyone inspired to dig deeper into chemistry.

In the latest issue (as of today):

Find out how chemistry plays a central role in revealing how our ancestors once lived • Discover how lucrative markets are found in leftover lobster • Make your own battery from the contents of your fruit bowl • What did Egyptian mummies have for dinner? • How to control the weather so it rains where we need it to • Excavate the facts about a chemist working as an archaeologist • Discover how chemistry can reveal secrets hidden in art

Of course there is a wordsearch puzzle and a chemical acrostic on the final page.

Always interesting to learn new information and to experience “other” views of the world. May lessen your chances of answering a client before they finish outlining their problem.

I first learned of the Mole in a tweet by ChemistryWorld.

…Textbooks for $0 [Digital Illiterates?]

Filed under: Books,Education,Publishing — Patrick Durusau @ 3:15 pm

OpenStax College Textbooks for $0

From the about page:

OpenStax College is a nonprofit organization committed to improving student access to quality learning materials. Our free textbooks are developed and peer-reviewed by educators to ensure they are readable, accurate, and meet the scope and sequence requirements of your course. Through our partnerships with companies and foundations committed to reducing costs for students, OpenStax College is working to improve access to higher education for all.

OpenStax College is an initiative of Rice University and is made possible through the generous support of several philanthropic foundations. …

Available now:

  • Anatomy and Physiology
  • Biology
  • College Physics
  • Concepts of Biology
  • Introduction to Sociology
  • Introductory Statistics

Coming soon:

  • Chemistry
  • Precalculus
  • Principles of Economics
  • Principles of Macroeconomics
  • Principles of Microeconomics
  • Psychology
  • U.S. History

Check to see if I missed any present or forthcoming texts on data science. No, I didn’t see any either.

I looked at the Introduction to Sociology, which has a chapter on research methods, but no opportunity for students to experience data methods. Such as Statwing’s coverage of the General Social Survey (GSS), which I covered in Social Science Dataset Prize!

Data science should no more be an aside or an extra course than language literacy is; both are requirements for an education.

Consider writing or suggesting edits to subject textbooks to incorporate data science. Books devoted solely to data science will be necessary as well, just as there are advanced courses in English literature.

Let’s not graduate digital illiterates. For their sake and ours.

I first saw this in a tweet by Michael Peter Edson.

July 13, 2013

Strategies for Effective Teaching…

Filed under: Education,Teaching — Patrick Durusau @ 12:39 pm

Strategies for Effective Teaching: A Handbook for Teaching Assistants – University of Wisconsin – Madison College of Engineering.

From the foreword:

We help our students understand engineering concepts and go beyond the knowledge level to higher levels of thinking. We help them to apply, analyze, and synthesize, to create new knowledge, and solve new problems. So, too, as teachers, we need to recognize our challenge to go beyond knowledge about effective teaching. We need to apply these strategies, analyze what works, and take action to modify or synthesize our learnings to help our students learn in a way that works for us as individuals and teams of teachers.

The learning community consists of both students and teachers. Students benefit from effective teaching and learning strategies inside and outside the classroom. This Handbook focuses on teaching strategies you can use in the classroom to foster effective learning.

Helping students learn is our challenge as teachers. Identifying effective teaching strategies, therefore, is our challenge as we both assess the effectiveness of our current teaching style and consider innovative ways to improve our teaching to match our students’ learning styles.

I mention this as a resource for anyone who is trying to educate others, students, clients or a more general audience about topic maps.

July 8, 2013

Online College Courses (Academic Earth)

Filed under: Data,Education — Patrick Durusau @ 4:04 pm

Online College Courses (Academic Earth)

No new material, but a useful aggregation of online course materials at forty-nine (49) institutions (as of today).

Not that hard to imagine topic map-based value add services that link up materials and discussions with course materials.

Most courses are offered on a regular cycle and knowing what has helped before may be useful to you.

June 6, 2013

edX Code

Filed under: Education,Python — Patrick Durusau @ 2:28 pm

edX Code

From the homepage:

Welcome to edX Code, where developers around the globe are working to create a next-generation online learning platform that will bring quality education to students around the world.

EdX is a not-for-profit enterprise composed of 27 leading global institutions, the xConsortium. Since our founding in May 2012, edX has been committed to an open source vision. We believe in pursuing non-profit, open-source opportunities for expanding online education around the world. We believe it’s important to support these efforts in visible and substantive ways, and that’s why we are opening up our platform and inviting the world to help us make it better.

If you think topic maps are relevant to education, then they should be relevant to online education.

Yes?

A standalone topic map application is not needed in this context, but then I don’t recall any standalone-application requirement.

I first saw this at: edX learning platform now all open source.

May 14, 2013

HeadStart for Planet Earth [Titan]

Filed under: Education,Graphs,Networks,Titan — Patrick Durusau @ 8:45 am

Educating the Planet with Pearson by Marko A. Rodriguez.

From the post:

Pearson is striving to accomplish the ambitious goal of providing an education to anyone, anywhere on the planet. New data processing technologies and theories in education are moving much of the learning experience into the digital space — into massive open online courses (MOOCs). Two years ago Pearson contacted Aurelius about applying graph theory and network science to this burgeoning space. A prototype proved promising in that it added novel, automated intelligence to the online education experience. However, at the time, there did not exist scalable, open-source graph database technology in the market. It was then that Titan was forged in order to meet the requirement of representing all universities, students, their resources, courses, etc. within a single, unified graph. Moreover, beyond representation, the graph needed to be able to support sub-second, complex graph traversals (i.e. queries) while sustaining at least 1 billion transactions a day. Pearson asked Aurelius a simple question: “Can Titan be used to educate the planet?” This post is Aurelius’ answer.

Liking the graph approach in general and Titan in particular does not make me any more comfortable with some aspects of this posting.

You don’t need to spin up a very large Cassandra database on Amazon to see the problems.

Consider the number of concepts proposed for educating the world: some 9,000, if the chart is to be credited.

Suggested Upper Merged Ontology (SUMO) has “~25,000 terms and ~80,000 axioms when all domain ontologies are combined.”

The SUMO totals come before you get into the weeds of any particular subject, discipline, or course material.

Or the subset of concepts and facts represented in DBpedia:

The English version of the DBpedia knowledge base currently describes 3.77 million things, out of which 2.35 million are classified in a consistent Ontology, including 764,000 persons, 573,000 places (including 387,000 populated places), 333,000 creative works (including 112,000 music albums, 72,000 films and 18,000 video games), 192,000 organizations (including 45,000 companies and 42,000 educational institutions), 202,000 species and 5,500 diseases.

In addition, we provide localized versions of DBpedia in 111 languages. All these versions together describe 20.8 million things, out of which 10.5 million overlap (are interlinked) with concepts from the English DBpedia. The full DBpedia data set features labels and abstracts for 10.3 million unique things in up to 111 different languages; 8.0 million links to images and 24.4 million HTML links to external web pages; 27.2 million data links into external RDF data sets, 55.8 million links to Wikipedia categories, and 8.2 million YAGO categories. The dataset consists of 1.89 billion pieces of information (RDF triples) out of which 400 million were extracted from the English edition of Wikipedia, 1.46 billion were extracted from other language editions, and about 27 million are data links to external RDF data sets. The Datasets page provides more information about the overall structure of the dataset. Dataset Statistics provides detailed statistics about 22 of the 111 localized versions.

I don’t know if the 9,000 concepts cited in the post would be sufficient for a world wide HeadStart program in multiple languages.

Moreover, why would any sane person want a single unified graph to represent course delivery from Zaire to the United States?

How is a single unified graph going to deal with the diversity of educational institutions around the world? A diversity that I take as a good thing.

It sounds like Pearson is offering a unified view of education.

My suggestion is to consider the value of your own diversity before passing on that offer.

March 17, 2013

Open Law Lab

Filed under: Education,Law,Law - Sources,Legal Informatics — Patrick Durusau @ 12:36 pm

Open Law Lab

From the webpage:

Open Law Lab is an initiative to design law – to make it more accessible, more usable, and more engaging.

Projects:

Law Visualized

Law Education Tech

Usable Court Systems

Access to Justice by Design

Not to mention a number of interesting blog posts represented by images further down the homepage.

Access/interface issues are universal and law is a particularly tough nut to crack.

Progress in providing access to legal materials could well carry over to other domains.

I first saw this at: Hagan: Open Law Lab.

February 27, 2013

School of Data

Filed under: Data,Education,Marketing,Topic Maps — Patrick Durusau @ 2:55 pm

School of Data

From their “about:”

School of Data is an online community of people who are passionate about using data to improve our understanding of the world, in particular journalists, researchers and analysts.

Our mission

Our aim is to spread data literacy through the world by offering online and offline learning opportunities. With School of Data you’ll learn how to:

  • scout out the best data sources
  • speed up and hone your data handling and analysis
  • visualise and present data creatively

Readers of this blog are very unlikely to find something they don’t know at this site.

However, readers of this blog know a great deal that doesn’t appear on this site.

Such as information on topic maps? Yes?

Something to think about.

I can’t really imagine data literacy without some awareness of subject identity issues.

Once you get to subject identity issues, semantic diversity, topic maps are just an idle thought away!

I first saw this at Nat Torkington’s Four Short Links: 26 Feb 2013.

February 12, 2013

Some principles of intelligent tutoring

Filed under: Education,Knowledge,Knowledge Representation — Patrick Durusau @ 6:20 pm

Some principles of intelligent tutoring by Stellan Ohlsson. (Instructional Science May 1986, Volume 14, Issue 3-4, pp 293-326)

Abstract:

Research on intelligent tutoring systems is discussed from the point of view of providing moment-by-moment adaptation of both content and form of instruction to the changing cognitive needs of the individual learner. The implications of this goal for cognitive diagnosis, subject matter analysis, teaching tactics, and teaching strategies are analyzed. The results of the analyses are stated in the form of principles about intelligent tutoring. A major conclusion is that a computer tutor, in order to provide adaptive instruction, must have a strategy which translates its tutorial goals into teaching actions, and that, as a consequence, research on teaching strategies is central to the construction of intelligent tutoring systems.

Be sure to notice the date: 1986, when you could write:

The computer offers the potential for adapting instruction to the student at a finer grain-level than the one which concerned earlier generations of educational researchers. First, instead of adapting to global traits such as learning style, the computer tutor can, in principle, be programmed to adapt to the student dynamically, during on-going instruction, at each moment in time providing the kind of instruction that will be most beneficial to the student at that time. Said differently, the computer tutor takes a longitudinal, rather than cross-sectional, perspective, focussing on the fluctuating cognitive needs of a single learner over time, rather than on stable inter-individual differences. Second, and even more important, instead of adapting to content-free characteristics of the learner such as learning rate, the computer can, in principle, be programmed to adapt both the content and the form of instruction to the student’s understanding of the subject matter. The computer can be programmed, or so we hope, to generate exactly that question, explanation, example, counter-example, practice problem, illustration, activity, or demonstration which will be most helpful to the learner. It is the task of providing dynamic adaptation of content and form which is the challenge and the promise of computerized instruction*

That was written decades before we were habituated to users adapting to the interface, not the other way around.

More on point, the quote from Ohlsson, the Principle of Non-Equifinality of Learning, was preceded by:

But there are no canonical representations of knowledge. Any knowledge domain can be seen from several different points of view, each view showing a different structure, a different set of parts, differently related. This claim, however broad and blunt – almost impolite – it may appear when laid out in print, is I believe, incontrovertible. In fact, the evidence for it is so plentiful that we do not notice it, like the fish in the sea who never thinks about water. For instance, empirical studies of expertise regularly show that human experts differ in their problem solutions (e.g., Prietula and Marchak, 1985); at the other end of the scale, studies of young children tend to show that they invent a variety of strategies even for simple tasks, (e.g., Young, 1976; Svenson and Hedenborg, 1980). As a second instance, consider rational analyses of thoroughly codified knowledge domains such as the arithmetic of rational numbers. The traditional mathematical treatment by Thurstone (1956) is hard to relate to the didactic analysis by Steiner (1969), which, in turn, does not seem to have much in common with the informal, but probing, analyses by Kieren (1976, 1980) – and yet, they are all experts trying to express the meaning of, for instance, “two-thirds”. In short, the process of acquiring a particular subject matter does not converge on a particular representation of that subject matter. This fact has such important implications for instruction that it should be stated as a principle.

The first two sentences capture the essence of topic maps as well as any I have ever seen:

But there are no canonical representations of knowledge. Any knowledge domain can be seen from several different points of view, each view showing a different structure, a different set of parts, differently related.
(emphasis added)

Single knowledge representations, such as those in bank accounting systems, can be very useful. But when multiple banks with different accounting systems try to roll knowledge up to the Federal Reserve, different (not better) representations may be required.

Could even require representations that support robust mappings between different representations.
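As a rough sketch of such a mapping, suppose two banks report the same ledger concepts under different local labels; a table from local labels to shared subject identifiers lets the reports merge without forcing either bank to change its own representation. The field names and figures below are entirely hypothetical:

```python
# Two banks, two local vocabularies for the same underlying subjects.
bank_a = {"demand_deposits": 1200, "time_deposits": 800}
bank_b = {"checking": 300, "savings": 500}

# Map each bank's local label to a shared subject identifier,
# in the spirit of topic map subject identity.
subject_map = {
    "demand_deposits": "deposits/transaction",
    "checking": "deposits/transaction",
    "time_deposits": "deposits/savings",
    "savings": "deposits/savings",
}

# Roll the reports up by shared subject rather than by local label.
rollup: dict[str, int] = {}
for report in (bank_a, bank_b):
    for label, amount in report.items():
        subject = subject_map[label]
        rollup[subject] = rollup.get(subject, 0) + amount
# rollup: {'deposits/transaction': 1500, 'deposits/savings': 1300}
```

The point of the design is that each bank keeps its native representation; only the mapping table has to know that “checking” and “demand_deposits” identify the same subject.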

What do you think?

Principle of Non-Equifinality of Learning

Filed under: Education,Knowledge Representation,Topic Maps — Patrick Durusau @ 6:20 pm

In “Educational Concept Maps: a Knowledge Based Aid for Instructional Design.” by Giovanni Adorni, Mauro Coccoli, Giuliano Vivanet (DMS 2011: 234-237), you will find the following passage:

…one of the most relevant problems concerns the fact that there are no canonical representations of knowledge structures and that a knowledge domain can be structured in different ways, starting from various points of view. As Ohlsson [2] highlighted, this fact has such relevant implications for authoring systems, that it should be stated as the “Principle of Non-Equifinality of Learning”. According to this, “The state of knowing the subject matter does not correspond to a single well-defined cognitive state. The target knowledge can always be represented in different ways, from different perspectives; hence, the process of acquiring the subject matter have many different, equally valid, end states”. Therefore it is necessary to re-think learning models and environments in order to enable users to better build represent and share their knowledge. (emphasis in original)

Nominees for representing “target knowledge…in different ways, from different perspectives….?”

In the paper, the authors detail their use of topic maps, XTM topic maps in particular and the Vizigator for visualization of their topic maps.

Sorry, I was so excited about the quote I forgot to post the article abstract:

This paper discusses a knowledge-based model for the design and development of units of learning and teaching aids. The idea behind this model originates from both the analysis of the open issues in instructional authoring systems, and the lack of a well-defined process able to merge pedagogical strategies with systems for the knowledge organization of the domain. In particular, it is presented the Educational Concept Map (ECM): a, pedagogically founded (derived from instructional design theories), abstract annotation system that was developed with the aim of guaranteeing the reusability of both teaching materials and knowledge structures. By means of ECMs, it is possible to design lessons and/or learning paths from an ontological structure characterized by the integration of hierarchical and associative relationships among the educational objectives. The paper also discusses how the ECMs can be implemented by means of the ISO/IEC 13250 Topic Maps standard. Based on the same model, it is also considered the possibility of visualizing, through a graphical model, and navigate, through an ontological browser, the knowledge structure and the relevant resources associated to them.

BTW, you can find the paper in DMS 2011 Proceedings Warning: Complete Proceedings, 359 pages, 26.3 MB PDF file. Might not want to try it on your cellphone.

And yes, this is the paper that I found this morning that triggered a number of posts as I ran it to ground. 😉 At least I will have sign-posts for some of these places next time.

Journal of e-Learning and Knowledge Society

Filed under: Education,Interface Research/Design,Training — Patrick Durusau @ 10:36 am

Journal of e-Learning and Knowledge Society

From the focus and scope statement for the journal:

SIe-L , Italian e-Learning Association, is a non-profit organization who operates as a non-commercial entity to promote scientific research and testing best practices of e-Learning and Distance Education. SIe-L consider these subjects strategic for citizen and companies for their instruction and education.

I encountered this journal while chasing a paper about topic maps in education to ground.

I have only started to explore but definitely a resource for anyone interested in the exploding on-line education market.

December 18, 2012

The value of typing code

Filed under: Education,Programming,Teaching,Topic Maps — Patrick Durusau @ 3:53 pm

The value of typing code by John D. Cook.

John points to a blog post by Tommy Nicholas that reads in part:

When Hunter S. Thompson was working as a copy boy at Time Magazine in 1959, he spent his spare time typing out the entire Great Gatsby by F. Scott Fitzgerald and A Farewell to Arms by Ernest Hemingway in order to better understand what it feels like to write a great book. To be able to feel the author’s turns in logic and storytelling weren’t possible from reading the books alone, you had to feel what it feels like to actually create the thing. And so I have found it to be with coding.

Thompson’s first book, Hell’s Angels: A Strange and Terrible Saga, was almost a bible to me in middle school, but I don’t know that he ever captured writing “a great book,” there or in subsequent books. Including the scene where he describes himself as clawing at the legs of Edmund Muskie before Muskie breaks down in tears. Funny, amusing, etc., but too period-bound to be “great.”

On the other hand, as an instructional technique, what do you think about disabling cut-n-paste in a window so students have to re-type a topic map and perhaps add material to it at the same time?

Something beyond toy examples although with choices so students could pick one with enough interest for them to do the work.

December 17, 2012

MOOCs have exploded!

Filed under: Education,Teaching — Patrick Durusau @ 3:13 pm

MOOCs have exploded! by John Johnson.

From the post:

About a year and two months ago, Stanford University taught three classes online: Intro to Databases, Machine Learning, and Artificial Intelligence. I took two of those classes (I did not feel I had time to take Artificial Intelligence), and found them very valuable. The success of those programs led to the development of at least two companies in a new area of online education: Coursera and Udacity. In the meantime, other efforts have been started (I’m thinking mainly edX, but there are others as well), and now many universities are scrambling to take advantage of either the framework of these companies or other platforms.

Put simply, if you have not already, then you need to make the time to do some of these classes. Education is the most important investment you can make in yourself, and at this point there are hundreds of free online university-level classes in everything from the arts to statistics. If ever you wanted to expand your horizons, now’s the time.

John mentions that the courses require self-discipline. For enrollment of any size, that would be true of the person offering the course as well.

If you have taken one or more MOOCs, I am interested to hear your thoughts on teaching topic maps via a MOOC.

The syntaxes look amenable to the mini-test, automated-grading style of testing. A submitted topic map could be checked for parsing validity.

Would that be enough? As a mini-introduction to topic maps?

Saving in-depth discussion of semantics, identity and such for smaller settings?
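If grading really did come down to parsing validity, a minimal auto-grader might look like the sketch below. This is only an illustration of the idea: the function name, the checks, and the use of the XTM 2.0 namespace are my assumptions, not part of any existing MOOC platform.

```python
# Sketch of an automated grader that checks a submitted topic map
# for XML well-formedness plus a couple of basic XTM 2.0 structural
# features (root element, at least one topic). Element names assume
# XTM 2.0; an XTM 1.0 submission would need different checks.
import xml.etree.ElementTree as ET

XTM_NS = "http://www.topicmaps.org/xtm/"  # XTM 2.0 namespace

def grade_topic_map(path):
    """Return (passed, message) for a submitted XTM file."""
    try:
        tree = ET.parse(path)
    except ET.ParseError as err:
        return False, f"Not well-formed XML: {err}"
    root = tree.getroot()
    if root.tag != f"{{{XTM_NS}}}topicMap":
        return False, f"Root element is {root.tag!r}, expected topicMap"
    topics = root.findall(f"{{{XTM_NS}}}topic")
    if not topics:
        return False, "No <topic> elements found"
    return True, f"OK: {len(topics)} topic(s)"
```

A grader like this answers only the shallow question (does it parse, does it have the right shape?), which is exactly why the semantics would still need a smaller setting.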

December 14, 2012

FutureLearn [MOOCs from Open University, UK]

Filed under: CS Lectures,Education — Patrick Durusau @ 7:05 am

Futurelearn

From the webpage:

Futurelearn will bring together a range of free, open, online courses from leading UK universities, in the same place and under the same brand.

The Company will be able to draw on The Open University’s unparalleled expertise in delivering distance learning and in pioneering open education resources. These will enable Futurelearn to present a single, coherent entry point for students to the best of the UK’s online education content.

Futurelearn will increase the accessibility of higher education, opening up a wide range of new online courses and learning materials to students across the UK and the rest of the world.

More details in 2013.

If you want to know more, now, try:

Open University launches British Mooc platform to rival US providers

or,

OU Launches FutureLearn Ltd

Have you noticed that the more players in a space the greater the semantic diversity?

Makes me suspect that semantic diversity is a characteristic of humanity.

Are there any counter examples?

PS: MOOCs should be fertile grounds for mapping across different vocabularies for the same content.

PPS: In case you are wondering why the Open University has the .com domain, consider that futurelearn.org was taken. Oh! There are those damned re-use of name issues! 😉

November 30, 2012

Linking Web Data for Education Project [Persisting Heterogeneity]

Filed under: Education,Linked Data,WWW — Patrick Durusau @ 3:48 pm

Linking Web Data for Education Project

From the about page:

LinkedUp aims to push forward the exploitation of the vast amounts of public, open data available on the Web, in particular by educational institutions and organizations.

This will be achieved by identifying and supporting highly innovative large-scale Web information management applications through an open competition (the LinkedUp Challenge) and dedicated evaluation framework. The vision of the LinkedUp Challenge is to realise personalised university degree-level education of global impact based on open Web data and information. Drawing on the diversity of Web information relevant to education, ranging from Open Educational Resources metadata to the vast body of knowledge offered by the Linked Data approach, this aim requires overcoming substantial challenges related to Web-scale data and information management involving Big Data, such as performance and scalability, interoperability, multilinguality and heterogeneity problems, to offer personalised and accessible education services. Therefore, the LinkedUp Challenge provides a focused scenario to derive challenging requirements, evaluation criteria, benchmarks and thresholds which are reflected in the LinkedUp evaluation framework. Information management solutions have to apply data and learning analytics methods to provide highly personalised and context-aware views on heterogeneous Web data.

Before linked data, we had: “…interoperability, multilinguality and heterogeneity problems….”

After linked data, we have: “…interoperability, multilinguality and heterogeneity problems….” + linked data (with heterogeneity problems).

Not unexpected, but we still need a means of resolution. Topic maps, anyone?

September 27, 2012

EdSense:… [Sepulcher or bricks for next silo?]

Filed under: Couchbase,Education,ElasticSearch — Patrick Durusau @ 2:55 pm

EdSense: Building a self-adapting, interactive learning portal with Couchbase by Christopher Tse.

From the description:

Talk from Christopher Tse (@christse), Director of McGraw-Hill Education Labs (MHE Labs), on how to architect a scalable adaptive learning system using a combination of Couchbase 2.0 and ElasticSearch as back-ends. These slides are the presented at CouchConf San Francisco on September 21, 2012.

Code for the proof-of-concept project, called “Learning Portal” has been open sourced and is available via Github at http://github.com/couchbaselabs/learningportal

When you hear about semantic diversity, do you ever think about EdSense, Moodle, EdX, Coursera, etc., as examples of semantic diversity?

And semantic silos?

All content delivery systems are semantic silos.

They have made choices about storage, access and delivery that carry their own semantics, in addition to the semantics of your content.

The question is whether your silo will become a sepulcher for your content or bricks for the next silo in turn.

August 29, 2012

The Curse Of Knowledge

Filed under: Communication,Education,Information Sharing — Patrick Durusau @ 6:03 pm

The Curse Of Knowledge by Mark Needham.

From the post:

My colleague Anand Vishwanath recently recommended the book ‘Made To Stick‘ and one thing that has really stood out for me while reading it is the idea of the ‘The Curse Of Knowledge’ which is described like so:

Once we know something, we find it hard to imagine what it was like not to know it. Our knowledge has “cursed” us. And it becomes difficult for us to share our knowledge with others, because we can’t readily re-create our listeners’ state of mind.

This is certainly something I imagine that most people have experienced, perhaps for the first time at school when we realised that the best teacher of a subject isn’t necessarily the person who is best at the subject.

I’m currently working on an infrastructure team and each week every team does a mini showcase where they show the other teams some of the things they’ve been working on.

It’s a very mixed audience – some very technical people and some not as technical people – so we’ve found it quite difficult to work out how exactly we can explain what we’re doing in a way that people will be able to understand.

A lot of what we’re doing is quite abstract/not very visible and the first time we presented we assumed that some things were ‘obvious’ and didn’t need an explanation.
….

Sounds like a problem that teachers/educators have been wrestling with for a long time.

Read the rest of Mark’s post, then find a copy of Made to Stick.

And/or, find a really good teacher and simply observe them teaching.

July 17, 2012

Big Data in Education (Part 2 of 2)

Filed under: BigData,Education — Patrick Durusau @ 5:08 pm

Big Data in Education (Part 2 of 2) by James Locus.

From the post:

Big data analytics are coming to public education. In 2012, the US Department of Education (DOE) was part of a host of agencies to share a $200 million initiative to begin applying big data analytics to their respective functions. The DOE targeted its $25 million share of the budget toward efforts to understand how students learn at an individualized level. This segment reviews the efforts enumerated in the draft paper released by the DOE on their big data analytics.

The ultimate goal of incorporating big data analytics in education is to improve student outcomes – as determined by common metrics like end-of-grade testing, attendance, and dropout rates. Currently, the education sector’s application of big data analytics is to create “learning analytic systems” – here defined as a connected framework of data mining, modeling, and use-case applications.

The hope for these systems is to offer educators better, more accurate information to answer the “how” question in student learning. Is a student performing poorly because she is distracted by her environment? Does a failing mark on the end-of-year test mean that the student did not fully grasp the year’s material, or was she having an off day? Learning analytics can help provide information to help educators answer some of these tough, real-world questions.

Not complete but a good start on the type of issues that data mining for education and educational measurement are going to have to answer.

As James points out, this has the potential to be a mega-market for big data analytics.

Traditional testing service heavyweights have been in the area for decades.

But one could argue they have documented the decline of education without having the ability to offer any solutions. (Ouch!)

Could be a telling argument as the only response thus far has been to require more annual testing and to punish schools for truthful results.

Contrast that solution with weekly tests in various subjects that are lightweight and provide rapid feedback to the teacher, so the teacher can address any issues, call in additional resources, the parents, etc. That would be “big data,” but also “useful big data.”

Assuming, of course, that schools and teachers are provided with the resources to teach “our most precious assets” rather than being punished for our failure to support schools and teachers properly.

July 8, 2012

statistics.com The Institute for Statistics Education

Filed under: Education,R,Statistics — Patrick Durusau @ 10:22 am

statistics.com The Institute for Statistics Education

The spread of R made me curious about certification in R.

The first “hit” on the subject was statistics.com The Institute for Statistics Education.

From their homepage:

Certificate Programs

Programs in Analytics and Statistical Studies (PASS)

From in-depth clinical trial design and analysis to data mining skills that help you make smarter business decisions, our unique programs focus on practical applications and help you master the software skills you need to stay a step ahead in your field.


Biostatistics – Epidemiology

Biostatistics – Controlled Trials

Business Analytics

Data Mining

Social Science

Environmental Science

Engineering Statistics

Using R

Not with the same group or even the same subject (that was NetWare, several versions ago), but I have had good experiences with this type of program.

Self study is always possible and sometimes the only option.

But, a good instructor can keep your interest in a specific body of material long enough to earn a certification.

Suggestions of other certification programs that would be of interest to data miners, machine learning, big data, etc., worker bees?

PS: If the courses sound pricey, slide on over to the University of Washington’s three-course certificate in computational finance, at a little over $10K for nine months.

July 7, 2012

Measurement = Meaningful?

Filed under: Data Science,Education,Measurement — Patrick Durusau @ 4:37 am

A two-part series of posts on data and education has started up at Hortonworks: Data in Education (Part I) by James Locus.

From the post:

The education industry is transforming into a 21st century data-driven enterprise. Metrics based assessment has been a powerful force that has swept the national education community in response to widespread policy reform. Passed in 2001, the No-Child-Left-Behind Act pushed the idea of standards-based education whereby schoolteachers and administrators are held accountable for the performance of their students. The law elevated standardized tests and dropout rates as the primary way officials measure student outcomes and achievement. Underperforming schools can be placed on probation, and if no improvement is seen after 3-4 years, the entire staff of the school can be replaced.

The political ramifications of the law inspire much debate amongst policy analysts. However, from a data perspective, it is more informative to understand how advances in technology can help educators both meet the policy’s guidelines and work to create better student outcomes.

How data measurement can drive poor management practices is captured in:

whereby schoolteachers and administrators are held accountable for the performance of their students.

Really? The only people who are responsible for the performance of students are schoolteachers and administrators?

Recall that schoolteachers don’t see a child until they are at least four or five years old, after most of their learning and behavior patterns have been well established: by their parents, by advertisers, by TV shows, by poor diets, by poor health care, etc.

And when they do see children, it is only for seven hours out of twenty-four.

Schoolteachers and administrators are in a testable situation, which isn’t the same thing as a situation where tests are meaningful.

As data “scientists” we can crunch the numbers given to us and serve the industry’s voracious appetite for more numbers.

Or we can point out that better measurement design could result in different policy choices.

Depends on your definition of “scientist.”

There were people who worked for Big Tobacco that still call themselves “scientists.”

What do you think?

June 9, 2012

The Power of Open Education Data [Semantic Content ~ 0]

Filed under: Education,Open Data — Patrick Durusau @ 7:19 pm

The Power of Open Education Data by Todd Park and Jim Shelton.

The title implies a description or example of the “power” of Open Education Data.

Here are ten examples of how this post disappoints:

  • …who pledged to provide…
  • …voting with their feet…
  • …can help with…
  • …as fuel to spur…
  • …seeks to (1) work with…
  • …and (2) collaborate with…
  • …will also include efforts…
  • …will enable them to create…
  • …will include work to develop…
  • …which can help fuel…

None of these have happened, just speculation on what might happen, maybe.

Let me call your attention to Consumers and Credit Disclosures: Credit Cards and Credit Insurance (2002) by Thomas A. Durkin, a Federal Reserve study of the impact of the Truth in Lending Act, one of the “major” consumer victories of its day (1968).

From the conclusion:

Conclusively evaluating the direct effects of disclosure legislation like Truth in Lending on either consumer behavior or the functioning of the credit marketplace is never a simple matter because there are always competing explanations for observed phenomena. From consumer surveys over time, however, it seems likely that disclosures required by Truth in Lending have had a favorable effect on the ready availability of information on credit transactions.

Let me save some future Federal Reserve researcher time and effort and observe that with Open Education Data, there will be more information about the cost of higher education available.

What impact it will have on behavior is unknown.

The Power of Open Education Data is a disservice to the data mining, open data, education and other communities. It is specious speculation, beneficial only to those seeking public office and the cronies they appoint.

May 30, 2012

Printable, Math and Physics Flash Cards

Filed under: Education,Flash Cards,Teaching — Patrick Durusau @ 3:07 pm

Printable, Math and Physics Flash Cards by Jason Underdown.

From the introduction:

Click on the links below to download PDF files containing double-sided flash cards suitable for printing on common business card printer paper. If you don’t have or don’t want to buy special business card paper, I have also included versions which include a grid. You can use scissors or a paper cutter to create your cards.

The definitions and theorems of mathematics constitute the body of the discipline. To become conversant in mathematics, you simply must become so familiar with certain concepts and facts that you can recall them without thought. Making these flash cards has been a great help in getting me closer to that point. I hope they help you too. If you find any errors please contact me at the email address below.

Some of the decks are works in progress and thus incomplete, but if you know how to use LaTeX, the source files are also provided, so you can add your own flash cards. If you do create new flash cards, please share them back with me. You can contact me at the address below. Special thanks to Andrew Budge who created the “flashcards” LaTeX class which handles the formatting details.

Quite delightful!

What areas do you think are missing for IR, statistics, search?

As a markup hand, XML, XSLT, XPath 2.0 spring to mind.

I suspect you would learn as much about an area by authoring cards as you would from using them.

If you make a set, please post it and send a note.
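For anyone tempted to author their own cards, a new card in the source files might look roughly like the sketch below. The class options and macro names here are taken from the flashcards class documentation, not from Jason’s files, so check them against the actual sources before compiling.

```latex
% A sketch of a single new card using the flashcards LaTeX class.
% avery5371 selects a common business-card paper layout; the grid
% option draws cutting guides for plain paper.
\documentclass[avery5371,grid]{flashcards}
\cardfrontstyle{headings}
\cardfrontfoot{Calculus}

\begin{document}
\begin{flashcard}{Definition of the derivative}
  \[ f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \]
\end{flashcard}
\end{document}
```

Writing the front of a card forces you to name the concept precisely, which is most of the learning.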

First seen in Christophe Lalanne’s Bag of Tweets for May 2012.

May 3, 2012

Harvard as Tipping Point

Filed under: Education,Harvard,MIT — Patrick Durusau @ 6:22 pm

Harvard University made IT news twice this week:

$60 Million Venture To Bring Harvard, MIT Online For The Masses


The new nonprofit venture, dubbed edx, pours a combined $60 million of foundation and endowment capital into the open-source learning platform first developed and announced by MIT earlier this year as MITx.

Edx’s offerings are very different from the long-form lecture videos currently available as “open courseware” from MIT and other universities. Eventually, edx will offer a full slate of courses in all disciplines, created with faculty at MIT and Harvard, using a simple format of short videos and exercises graded largely by computer; students interact on a wiki and message board, as well as on Facebook groups, with peers substituting for TAs. The research arm of the project will continue to develop new tools using machine learning, robotics, and crowdsourcing that allow grading and evaluation of essays, circuit designs, and other types of exercises without endless hours by professors or TAs. Although edx is nonprofit and the courses are free, Agarwal envisions bringing the project to sustainability by one day charging students for official certificates of completion.

Harvard Library to faculty: we’re going broke unless you go open access

Henry sez, “Harvard Library’s Faculty Advisory Council is telling faculty that it’s financially ‘untenable’ for the university to keep on paying extortionate access fees for academic journals. It’s suggesting that faculty make their research publicly available, switch to publishing in open access journals and consider resigning from the boards of journals that don’t allow open access.”

The avalanche of flagship education and open content has begun.

Arguments about online content/delivery not being “good enough” will no longer carry any weight, or not much.

The opponents of online content/delivery, who made those arguments, will fight to preserve systems that benefited themselves and a few others. They will be routed soon enough and their fate is not my concern.

Information systems to meet the needs of the coming generation of world wide scholars, on the other hand, should be the concern of us all.
