Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

December 17, 2011

Mr. Pearson, meet Mr. Mandelbrot:…

Filed under: Associations,Mathematics — Patrick Durusau @ 7:51 pm

Mr. Pearson, meet Mr. Mandelbrot: Detecting Novel Associations in Large Data Sets

Something you may enjoy along with the paper itself, Detecting Novel Associations in Large Data Sets.

From the post:

Jeremy Fox asks what I think about this paper by David N. Reshef, Yakir Reshef, Hilary Finucane, Sharon Grossman, Gilean McVean, Peter Turnbaugh, Eric Lander, Michael Mitzenmacher, and Pardis Sabeti which proposes a new nonlinear R-squared-like measure.

My quick answer is that it looks really cool!

From my quick reading of the paper, it appears that the method reduces on average to the usual R-squared when fit to data of the form y = a + bx + error, and that it also has a similar interpretation when “a + bx” is replaced by other continuous functions.
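As a toy illustration of the problem the paper addresses (this sketch is mine, not the paper's MIC implementation, and it assumes NumPy and SciPy are installed): Pearson's r-squared rewards the linear case but can score a perfectly deterministic nonlinear relationship near zero, which is why a nonlinear R-squared-like measure is attractive.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(42)
    x = rng.uniform(-1, 1, 1000)

    linear = 2.0 + 3.0 * x + rng.normal(0, 0.1, x.size)  # y = a + bx + error
    parabola = x ** 2                                    # deterministic, but nonlinear

    r_lin, _ = pearsonr(x, linear)
    r_par, _ = pearsonr(x, parabola)

    print(f"linear:   r^2 = {r_lin ** 2:.3f}")  # close to 1
    print(f"parabola: r^2 = {r_par ** 2:.3f}")  # close to 0, despite y = f(x) exactly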

December 6, 2011

Lecture Fox

Filed under: CS Lectures,Mathematics — Patrick Durusau @ 8:07 pm

Lecture Fox

A nice collection of links to university lectures.

It has separate pages for computer science and math, as well as physics and chemistry. The homepage is a varied collection of those subjects and others.

Good to see someone collecting links for lectures beyond the usual ones.

Trivia from one of the CS lectures: What language was started by the U.S. DoD in the mid to late 1970’s to consolidate more than 500 existing languages and dialects?

Try to answer before peeking! The trivia comes from Computer Science 164, Spring 2011, Berkeley. BTW, the materials for Computer Science 164 are online as well.

December 4, 2011

Math Documentaries

Filed under: Mathematics — Patrick Durusau @ 8:18 pm

Math Documentaries

Thirty-six documentaries about mathematics.

Question: If compelling and interesting documentaries can be made about mathematics, why don’t we have a collection of documentaries about semantics, subject identity and similar topics?

Or are such documentaries out there and I have simply overlooked them? (Entirely possible since I don’t as a rule watch much TV.)

Suggestions/comments?

Oh, I didn’t list this simply to complain about the lack of semantic documentaries. I think these are good to recommend, particularly to young people. Understanding when math is being used to lie is as important a skill as knowing mathematics, if not more so.

Translating math into code with examples in Java, Racket, Haskell and Python

Filed under: Haskell,Java,Mathematics,Python — Patrick Durusau @ 8:17 pm

Translating math into code with examples in Java, Racket, Haskell and Python by Matthew Might.

Any page that cites Okasaki’s Purely Functional Data Structures as an “essential reference” has to be interesting.

And… it turns out to be very interesting!

If I have a complaint, it is that it ended too soon! See what you think.
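To give a flavor of the exercise (these examples are mine, not drawn from Might’s post), mathematical notation often maps almost mechanically onto Python:

    # Sum: sum_{i=1}^{n} i^2 becomes a generator expression over range(1, n + 1).
    def sum_of_squares(n):
        return sum(i ** 2 for i in range(1, n + 1))

    # Set-builder: { x in S : x mod 2 = 0 } becomes a set comprehension.
    def evens(S):
        return {x for x in S if x % 2 == 0}

    # Function composition: (f . g)(x) = f(g(x)).
    def compose(f, g):
        return lambda x: f(g(x))

    print(sum_of_squares(10))                 # 385
    print(evens({1, 2, 3, 4, 5, 6}))          # {2, 4, 6}
    print(compose(abs, lambda x: x - 10)(3))  # abs(3 - 10) = 7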

November 28, 2011

3 surprising facts about the computation of scalar products

Filed under: Mathematics,Matrix — Patrick Durusau @ 7:06 pm

3 surprising facts about the computation of scalar products by Daniel Lemire.

From the post:

The speed of many algorithms depends on how quickly you can multiply matrices or compute distances. In turn, these computations depend on the scalar product. Given two arrays such as (1,2) and (5,3), the scalar product is the sum of products 1 × 5 + 2 × 3. We have strong incentives to compute the scalar product as quickly as possible.

Sorry, can’t tell you the three things because that would ruin the surprise. 😉 See Daniel’s blog for the details.
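For reference, the naive computation from the quoted example is a one-liner (the surprises in Daniel’s post concern making it fast and accurate, not this definition):

    # Scalar product by definition: the sum of pairwise products.
    def scalar_product(a, b):
        assert len(a) == len(b)
        return sum(x * y for x, y in zip(a, b))

    print(scalar_product((1, 2), (5, 3)))  # 1*5 + 2*3 = 11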

November 16, 2011

Bayesian variable selection [off again]

Filed under: Bayesian Models,Mathematics — Patrick Durusau @ 8:18 pm

Bayesian variable selection [off again]

From the post:

As indicated a few weeks ago, we have received very encouraging reviews from Bayesian Analysis about our [Gilles Celeux, Mohammed El Anbari, Jean-Michel Marin and myself] comparative study of Bayesian and non-Bayesian variable selection procedures (“Regularization in regression: comparing Bayesian and frequentist methods in a poorly informative situation“). We have just rearXived and resubmitted it with additional material and hope this is the last round. (I must acknowledge a limited involvement at this final stage of the paper. Had I had more time available, I would have liked to remove the numerous tables and turn them into graphs…)

If you are not conversant with Bayesian thinking and recent work, this paper is going to be … difficult. I have just gotten past the introduction and am looking up references to help with part 2, but I think it will be a good intellectual exercise and important for your use of Bayesian models in the future. Two very good reasons to spend the time to understand this paper.

Or to put it another way, the world is non-probabilistic only when viewed with a certain degree of coarseness. How useful a coarse view is varies from circumstance to circumstance. If you don’t have the capability to use a probabilistic view, you will be limited to a coarse one. (Neither is better than the other, but having both seems advantageous to me.)

November 15, 2011

Models for MapReduce

Filed under: MapReduce,Mathematics — Patrick Durusau @ 7:58 pm

Models for MapReduce by Suresh Venkatasubramanian

From the post:

I’ve been listening to Jeff Phillips’ comparison of different models for MapReduce (he’s teaching a class on models for massive data). In what follows, I’ll add the disclaimer IANACT (I am not a complexity theorist).

There’s something that bothers me about the various models floating around that attempt to capture the MapReduce framework (specifically the MUD framework by Feldman et al, the MRC framework by Karloff, Suri and (co-blogger) Vassilvitskii, and the newer Goodrich-Sitchinava-Zhang framework).

I won’t spoil the rest of the post for you, read it and the comments.

There is a lot of work to be done towards modeling and understanding MapReduce.

Personally I suspect there will be some general models that give way to more specialized ones for some domains.
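As a reminder of what these frameworks are modeling, here is a single MapReduce round in miniature (my sketch, not tied to any of the cited models): apply a mapper to every record, group by key, reduce each group. The models differ mainly in how they charge for rounds, per-machine memory and communication.

    from collections import defaultdict

    def mapreduce_round(records, mapper, reducer):
        groups = defaultdict(list)
        for record in records:
            for key, value in mapper(record):   # map phase
                groups[key].append(value)
        # reduce phase: one reducer call per key
        return {k: reducer(k, vs) for k, vs in groups.items()}

    # Word count, the canonical example.
    docs = ["to be or not to be", "to map or to reduce"]
    counts = mapreduce_round(
        docs,
        mapper=lambda doc: [(w, 1) for w in doc.split()],
        reducer=lambda word, ones: sum(ones),
    )
    print(counts["to"])  # 4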

November 10, 2011

Graph Theory in Sage

Filed under: Graphs,Mathematics,Sage — Patrick Durusau @ 6:40 pm

Graph Theory in Sage is a presentation by William Stein of some of the graph capabilities of Sage.

I mention it because there has been discussion on the Neo4j mailing list about learning graph theory and this may be helpful in that regard.

There is a Sage worksheet that has all the formulas and values used in the presentation.

You can also download the video.

You will have to experience it for yourself but I thought the help feature on graphs was most impressive.

Sage will help you get your feet on the ground with formal graph theory.
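If you want a quick taste before watching, graph theory in Sage looks like ordinary Python (the methods below exist in Sage; the particular calls are my choice, typed into a Sage session):

    G = graphs.PetersenGraph()     # one of Sage's built-in named graphs
    print(G.order(), G.size())     # 10 vertices, 15 edges
    print(G.diameter())            # 2
    print(G.is_planar())           # False
    print(G.chromatic_number())    # 3
    G.plot().save('petersen.png')  # render the graph to a file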

Sage

Filed under: Mathematica,Mathematics,Sage — Patrick Durusau @ 6:38 pm

Sage

Kirk Lowery mentioned Sage to me, and with mathematics being fundamental to IR, it seemed like a good resource to mention, whether for research, for use with one of the course books, or for satisfying yourself that algorithms operate as advertised.

You don’t have to take someone’s word on algorithms. Use a small enough test case that you will recognize the effects of the algorithm. Or test it against another algorithm said to give similar results.
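For example (a sketch of the habit, assuming NumPy; any statistics package would do), recompute a library result straight from the definition on a case small enough to check by hand:

    import numpy as np

    data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

    # Library answer: sample variance with Bessel's correction (ddof=1).
    lib = np.var(data, ddof=1)

    # Hand computation straight from the definition.
    mean = sum(data) / len(data)
    hand = sum((x - mean) ** 2 for x in data) / (len(data) - 1)

    print(lib, hand)  # both 4.5714...
    assert abs(lib - hand) < 1e-12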

I saw a sad presentation years ago when a result was described as significant because the manual for the statistics package used said it was significant. Don’t let that be you, either in front of a client or in a presentation to peers.

From the website:

Sage is a free open-source mathematics software system licensed under the GPL. It combines the power of many existing open-source packages into a common Python-based interface.

Mission: Creating a viable free open source alternative to Magma, Maple, Mathematica and Matlab.

From the feature tour:

Sage is built out of nearly 100 open-source packages and features a unified interface. Sage can be used to study elementary and advanced, pure and applied mathematics. This includes a huge range of mathematics, including basic algebra, calculus, elementary to very advanced number theory, cryptography, numerical computation, commutative algebra, group theory, combinatorics, graph theory, exact linear algebra and much more. It combines various software packages and seamlessly integrates their functionality into a common experience. It is well-suited for education and research.

The user interface is a notebook in a web browser or the command line. Using the notebook, Sage connects either locally to your own Sage installation or to a Sage server on the network. Inside the Sage notebook you can create embedded graphics, beautifully typeset mathematical expressions, add and delete input, and share your work across the network.

The following showcase presents some of Sage’s capabilities, screenshots and gives you an overall impression of what Sage is. The examples show the lines of code in Sage on the left side, accompanied by an explanation on the right. They only show the very basic concepts of how Sage works. Please refer to the documentation material for more detailed explanations or visit the library to see Sage in action.

In all fairness to Mathematica, the hobbyist version is only $295 for Mathematica 8, with versions for Windows (XP/Vista/7), Mac OS X (Intel) and Linux. There is a reason why people want to be like… some other software. Mathematica has data mining capabilities and a host of other features. I am contemplating a copy of Mathematica as a Christmas present for myself.

Do note that all of the Fortune 50 companies use Mathematica. The hobbyist version allows you to add an important skill set that is relevant to a select clientele. Not to mention various government agencies, etc.

Should a job come along that requires it, I can simply upgrade to a professional license. Why? Well, I expect people to pay my invoices when I submit them. Why shouldn’t I pay for software I use on the jobs that result in those invoices?

Don’t cut corners on software. The same goes for the quality of your work. It will show. If you don’t know, don’t lie; say you don’t know but will find out. Clients will find simple honesty quite refreshing. (I can’t promise that result for you, but it has been the result for me over a variety of professions.)

October 12, 2011

Top 50 Statistics Blogs

Filed under: Mathematics,Statistics — Patrick Durusau @ 4:36 pm

Top 50 Statistics Blogs

From the post:

Statistics is a branch of mathematics that deals with the interpretation of data. Statisticians work in a wide variety of fields in both the private and the public sectors. They are teachers, consultants, watchdogs, journalists, designers, programmers, and by and large, ordinary people like you and me. And some of them blog.

In searching for the top statistics blogs on the web we only considered blogs that have been active in 2011. In deciding which ones to include in our (admittedly unscientific) list of the 50 best statistics blogs we considered a range of factors, including visual appeal/aesthetics, frequency of posts, and accessibility to non-specialists. Our goal is to highlight blogs that students and prospective students will find useful and interesting in their exploration of the field.

I’m not quite sure of the reason for the explanation of statistics at the head of a list of the top 50 statistics blogs, but it isn’t a serious defect.

(I first saw this at www.r-bloggers.org.)

October 1, 2011

Bayesian Statistical Reasoning

Filed under: Bayesian Models,CS Lectures,Mathematics — Patrick Durusau @ 8:29 pm

DM SIG “Bayesian Statistical Reasoning” 5/23/2011 by Prof. David Draper, PhD.

I think you will be surprised at how interesting and even compelling this presentation becomes at points. Particularly his comments early in the presentation about needing an analogy machine, to find things not expressed in the way you usually look for them. And he has concrete examples of where that has been needed.

Title: Bayesian Statistical Reasoning: an inferential, predictive and decision-making paradigm for the 21st century

Professor Draper gives examples of Bayesian inference, prediction and decision-making in the context of several case studies from medicine and health policy. There will be points of potential technical interest for applied mathematicians, statisticians, and computer scientists.

Broadly speaking, statistics is the study of uncertainty: how to measure it well, and how to make good choices in the face of it. Statistical activities are of four main types: description of a data set, inference about the underlying process generating the data, prediction of future data, and decision-making under uncertainty. The last three of these activities are probability based.

Two main probability paradigms are in current use: the frequentist (or relative-frequency) approach, in which you restrict attention to phenomena that are inherently repeatable under “identical” conditions and define P(A) to be the limiting relative frequency with which A would occur in hypothetical repetitions, as n goes to infinity; and the Bayesian approach, in which the arguments A and B of the probability operator P(A|B) are true-false propositions (with the truth status of A unknown to you and B assumed by you to be true), and P(A|B) represents the weight of evidence in favor of the truth of A, given the information in B.

The Bayesian approach includes the frequentist paradigm as a special case, so you might think it would be the only version of probability used in statistical work today, but (a) in quantifying your uncertainty about something unknown to you, the Bayesian paradigm requires you to bring all relevant information to bear on the calculation; this involves combining information both internal and external to the data you’ve gathered, and (somewhat strangely) the external-information part of this approach was controversial in the 20th century, and (b) Bayesian calculations require approximating high-dimensional integrals (whereas the frequentist approach mainly relies on maximization rather than integration), and this was a severe limitation to the Bayesian paradigm for a long time (from the 1750s to the 1980s).

The external-information problem has been solved by developing methods that separately handle the two main cases: (1) substantial external information, which is addressed by elicitation techniques, and (2) relatively little external information, which is covered by any of several methods for (in the jargon) specifying diffuse prior distributions. Good Bayesian work also involves sensitivity analysis: varying the manner in which you quantify the internal and external information across reasonable alternatives, and examining the stability of your conclusions.

Around 1990 two things happened roughly simultaneously that completely changed the Bayesian computational picture:

  • Bayesian statisticians belatedly discovered that applied mathematicians (led by Metropolis), working at the intersection between chemistry and physics in the 1940s, had used Markov chains to develop a clever algorithm for approximating integrals arising in thermodynamics that are similar to the kinds of integrals that come up in Bayesian statistics, and
  • desk-top computers finally became fast enough to implement the Metropolis algorithm in a feasibly short amount of time.

As a result of these developments, the Bayesian computational problem has been solved in a wide range of interesting application areas with small-to-moderate amounts of data; with large data sets, variational methods are available that offer a different approach to useful approximate solutions.

The Bayesian paradigm for uncertainty quantification does appear to have one remaining weakness, which coincides with a strength of the frequentist paradigm: nothing in the Bayesian approach to inference and prediction requires you to pay attention to how often you get the right answer (this is a form of calibration of your uncertainty assessments), which is an activity that’s (i) central to good science and decision-making and (ii) natural to emphasize from the frequentist point of view. However, it has recently been shown that calibration can readily be brought into the Bayesian story by means of decision theory, turning the Bayesian paradigm into an approach that is (in principle) both logically internally consistent and well-calibrated.

In this talk I’ll (a) offer some historical notes about how we have arrived at the present situation and (b) give examples of Bayesian inference, prediction and decision-making in the context of several case studies from medicine and health policy. There will be points of potential technical interest for applied mathematicians, statisticians and computer scientists.
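Since the abstract leans on the Metropolis algorithm, here is a minimal random-walk Metropolis sampler (my sketch, not Professor Draper’s code) for a toy unnormalized posterior:

    import math
    import random

    def log_post(theta):
        # Unnormalized log-posterior; a standard normal, for illustration.
        return -0.5 * theta * theta

    def metropolis(n_samples, step=1.0, theta=0.0):
        samples = []
        for _ in range(n_samples):
            proposal = theta + random.gauss(0.0, step)
            # Accept with probability min(1, p(proposal) / p(current)).
            if math.log(random.random()) < log_post(proposal) - log_post(theta):
                theta = proposal
            samples.append(theta)  # on rejection, the current state repeats
        return samples

    draws = metropolis(50_000)
    print(sum(draws) / len(draws))  # posterior mean, near 0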

September 22, 2011

Khan Academy

Filed under: Mathematics — Patrick Durusau @ 6:20 pm

Khan Academy

From the “about” page:

The Khan Academy is an organization on a mission. We’re a not-for-profit with the goal of changing education for the better by providing a free world-class education to anyone anywhere.

All of the site’s resources are available to anyone. It doesn’t matter if you are a student, teacher, home-schooler, principal, adult returning to the classroom after 20 years, or a friendly alien just trying to get a leg up in earthly biology. The Khan Academy’s materials and resources are available to you completely free of charge.

If you need to brush up on probability or linear algebra, you will find some helpful video lectures here.

September 18, 2011

Approaching optimality for solving SDD systems

Filed under: Algorithms,Mathematics,Matrix — Patrick Durusau @ 7:29 pm

In October 2010, the authors presented this paper:

Approaching optimality for solving SDD systems by Ioannis Koutis, Gary L. Miller, and Richard Peng.

Public reports on that paper can be found in the September 2011 issue of CACM, A Breakthrough in Algorithm Design, and in PC Pro, Algorithm sees massive jump in complex number crunching.

The claim is that the new approach will be a billion times faster than traditional techniques.

In February 2011, the authors posted a new and improved version of their algorithm in:

A nearly-mlogn time solver for SDD linear systems.

Koutis has written a MATLAB implementation at: CMG: Combinatorial Multigrid

For further background, see: Combinatorial Preconditioning, sparsification, local clustering, low-stretch trees, etc. by Spielman, one of the principal researchers in this area.
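For context, here is classical conjugate gradients (this is not the near-linear-time solver from the paper; the paper’s contribution is a preconditioner that makes iterations of this kind converge fast on SDD systems). The sketch assumes NumPy.

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x          # residual
        p = r.copy()           # search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # A tiny symmetric diagonally dominant example.
    A = np.array([[ 3.0, -1.0, -1.0],
                  [-1.0,  3.0, -1.0],
                  [-1.0, -1.0,  3.0]])
    b = np.array([1.0, 2.0, 3.0])
    print(np.allclose(A @ conjugate_gradient(A, b), b))  # True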

The most obvious application in topic maps would be recommender systems that bring possible merges to a topic map author’s attention or even perform merging on specified conditions. (If the application doesn’t seem obvious, read the post I refer to in Text Feature Extraction (tf-idf) – Part 1 again. It will also give you some ideas about scalable merging tests.)

Years ago Lars Marius told me that topic maps needed to scale on laptops to be successful. It looks like algorithms are catching up to meet his requirement.

September 6, 2011

Electronic Statistics Textbook

Filed under: Mathematics,Statistics — Patrick Durusau @ 7:02 pm

Electronic Statistics Textbook

From the website:

The only Internet Resource about Statistics Recommended by Encyclopedia Britannica

StatSoft has freely provided the Electronic Statistics Textbook as a public service for more than 12 years now.

This Textbook offers training in the understanding and application of statistics. The material was developed at the StatSoft R&D department based on many years of teaching undergraduate and graduate statistics courses and covers a wide variety of applications, including laboratory research (biomedical, agricultural, etc.), business statistics, credit scoring, forecasting, social science statistics and survey research, data mining, engineering and quality control applications, and many others.

The Electronic Textbook begins with an overview of the relevant elementary (pivotal) concepts and continues with a more in depth exploration of specific areas of statistics, organized by “modules” and accessible by buttons, representing classes of analytic techniques. A glossary of statistical terms and a list of references for further study are included.

Proper citation:

(Electronic Version): StatSoft, Inc. (2011). Electronic Statistics Textbook. Tulsa, OK: StatSoft. WEB: http://www.statsoft.com/textbook/.

(Printed Version): Hill, T. & Lewicki, P. (2007). STATISTICS: Methods and Applications. StatSoft, Tulsa, OK.

This is going to get a bookmark for sure!

July 28, 2011

MATLAB GPU / CUDA experiences

Filed under: CUDA,GPU,Mathematics,Parallel Programming — Patrick Durusau @ 6:57 pm

MATLAB GPU / CUDA experiences and tutorials on my laptop – Introduction

From the post:

These days it seems that you can’t talk about scientific computing for more than 5 minutes without someone bringing up the topic of Graphics Processing Units (GPUs). Originally designed to make computer games look pretty, GPUs are massively parallel processors that promise to revolutionise the way we compute.

A brief glance at the specification of a typical laptop suggests why GPUs are the new hotness in numerical computing. Take my new one for instance, a Dell XPS L702X, which comes with a Quad-Core Intel i7 Sandybridge processor running at up to 2.9GHz and an NVidia GT 555M with a whopping 144 CUDA cores. If you went back in time a few years and told a younger version of me that I’d soon own a 148 core laptop then young Mike would be stunned. He’d also be wondering ‘What’s the catch?’

Parallel computing has been around for years but in the form of GPUs it has reached the hands of hackers and innovators. Will your next topic map application take advantage of parallel processing?

July 22, 2011

Random Graphs Anyone?

Filed under: Graphs,Mathematics,Wandora — Patrick Durusau @ 6:06 pm

I saw a tweet from @CompSciFact (John Cook) pointing out that Luc Devroye’s (McGill University) Non-Uniform Random Variate Generation (Springer-Verlag, New York, 1986) is available for free download.

Amazon lists used copies starting at $180.91 and one new copy for $618.47, so you are better off with the scanned PDF, unless you are simply trying to burn up grant funding before the end of a year.

Chapter XIII. RANDOM COMBINATORIAL OBJECTS includes random graphs and notes:

Graphs are the most general combinatorial objects dealt with in this chapter. They have applications in nearly all fields of science and engineering. It is quite impossible to give a thorough overview of the different subclasses of graphs, and how objects in these subclasses can be generated uniformly and at random. Instead, we will just give a superficial treatment, and refer the reader to general principles or specific articles in the literature whenever necessary.

For one use of random graphs in topic maps work, see the Random Graph Generator in Wandora.
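The simplest case Devroye’s chapter builds on is easy to write down (my sketch of the G(n, p) model, not code from the book): include each possible edge independently with probability p.

    import random
    from itertools import combinations

    def gnp(n, p, seed=None):
        # Erdos-Renyi G(n, p): keep each of the n*(n-1)/2 edges with prob. p.
        rng = random.Random(seed)
        return [(u, v) for u, v in combinations(range(n), 2) if rng.random() < p]

    edges = gnp(10, 0.3, seed=1)
    print(len(edges), "edges out of", 10 * 9 // 2, "possible")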

July 1, 2011

…filling space — without cubes

Filed under: Algorithms,Data Structures,Mathematics — Patrick Durusau @ 2:56 pm

Princeton researchers solve problem filling space — without cubes

From the post:

Whether packing oranges into a crate, fitting molecules into a human cell or getting data onto a compact disc, wasted space is usually not a good thing.

Now, in findings published June 20 in the Proceedings of the National Academy of Sciences, Princeton University chemist Salvatore Torquato and colleagues have solved a conundrum that has baffled mathematical minds since ancient times — how to fill three-dimensional space with multi-sided objects other than cubes without having any gaps.

The discovery could lead to scientists finding new materials and could lead to advances in communications systems and computer security.

“You know you can fill space with cubes,” Torquato said, “We were looking for another way.” In the article “New Family of Tilings of Three-Dimensional Euclidean Space by Tetrahedra and Octahedra,” he and his team show they have solved the problem.

Not immediately useful for topic maps, but it will be interesting to see if new data structures emerge from this work.

See the article: New Family of Tilings of Three-Dimensional Euclidean Space by Tetrahedra and Octahedra (pay-per-view site)

June 24, 2011

Online Math Videos

Filed under: Mathematics — Patrick Durusau @ 10:44 am

Online Math Videos

Collection by Dave Richeson, which is described as:

The purpose of this page is to consolidate online mathematics lectures. Right now I am focusing on video and not audio, but I may expand in that direction eventually. This page is aimed at mathematics faculty and graduate students, but others may find useful links here too. I hope this can be a resource for people like me who want to keep abreast of current mathematics, but who are not at a research university that has regular seminars and colloquia.

May 26, 2011

24th OpenMath Workshop

Filed under: Conferences,Mathematics — Patrick Durusau @ 3:40 pm

24th OpenMath Workshop
Bertinoro, Italy
July 20, 2011
co-located with CICM 2011
Continuous submission until July 10

From the post with the announcement (the link at the CICM site is broken, as of 24 May 2011):

OBJECTIVES

With the release of the MathML 3 W3C recommendation, OpenMath enters a new phase of its development. Topics we expect to see at the workshop include

  • Feature Requests (Standard Enhancement Proposals) and Discussions for OpenMath3
  • Convergence of OpenMath and MathML 3
  • Reasoning with OpenMath
  • Software using or processing OpenMath
  • New OpenMath Content Dictionaries

though others related to OpenMath are certainly welcomed. For examples of contributions see the 22nd OpenMath Workshop of 2009 (http://staff.bath.ac.uk/masjhd/OM2009.html#contributions).

Contributions can be either full research papers, Standard Enhancement Proposals, or a description of new Content Dictionaries, particularly ones that are suggested for formal adoption by the OpenMath Society.

IMPORTANT DATES (all times are GMT)

OpenMath 2011 does not have a submission deadline. Submissions will be accepted until July 10 and reviewed and notified continuously.

SUBMISSIONS

Submission is by e-mail to omws2011@googlegroups.com. Papers must conform to the Springer LNCS style, preferably using LaTeX2e and the Springer llncs class files.

Submission categories:

  • Full paper: 4-12 LNCS pages
  • Short paper: 1-8 LNCS pages
  • CD description: 1-8 LNCS pages; a .zip or .tgz file of the CDs should be attached.
  • Standard Enhancement Proposal: 1-12 LNCS pages (as appropriate w.r.t. the background knowledge required); a .zip or .tgz file of any related implementation (e.g. a Relax NG schema) should be attached.

PROCEEDINGS

Electronic proceedings will be published on the OpenMath web site in time for the conference.

WORKSHOP COMMITTEE

  • James Davenport (The University of Bath)
  • Michael Kohlhase (Jacobs University Bremen, Germany)
  • Christoph Lange (Jacobs University Bremen, Germany)

Comments/questions/inquiries: to be sent to omws2011@googlegroups.com

May 23, 2011

Workshop on Mathematical Wikis
(MathWikis-2011)

Filed under: Mathematics,Mathematics Indexing,Semantics — Patrick Durusau @ 7:46 pm

Workshop on Mathematical Wikis (MathWikis-2011)

Important Dates:

  • Submission of abstracts: May 30th, 2011, 8:00 UTC+1
  • Notification: June 23rd, 2011
  • Camera ready versions due: July 11th, 2011
  • Workshop: August 27th, 2011

From the website:

Mathematics is increasingly becoming a collaborative discipline. The Internet has simplified the distributed development, review, and improvement of large proofs, theories, libraries, and knowledge repositories, also giving rise to all kinds of collaboratively developed mathematical learning resources. Examples include the PlanetMath free encyclopedia, the Polymath collaborative proof development efforts, and also large collaboratively developed formal libraries. Interactive computer assistance, semantic representation, and linking with other datasets on the Semantic Web are becoming very interesting aspects of collaborative mathematical developments.

The ITP 2011 MathWikis workshop aims to bring together developers and major users of mathematical wikis and collaborative and social tools for mathematics.

Topics include but are not limited to:

  • wikis and blogs for informal, semantic, semiformal, and formal mathematical knowledge;
  • general techniques and tools for online collaborative mathematics;
  • tools for collaboratively producing, presenting, publishing, and interacting with online mathematics;
  • automation and computer-human interaction aspects of mathematical wikis;
  • practical experiences, usability aspects, feasibility studies;
  • evaluation of existing tools and experiments;
  • requirements, user scenarios and goals.

April 26, 2011

Data Beats Math

Filed under: Data,Mathematics,Subject Identity — Patrick Durusau @ 2:17 pm

Data Beats Math

A more recent post by Jeff Jonas.

Topic maps can capture observations, judgments, conclusions from human analysts.

Do those beat math as well?

April 3, 2011

Octave

Filed under: Mathematics,Visualization — Patrick Durusau @ 6:38 pm

Octave

From the website:

GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable.

A new version, 3.4.0, was released February 8, 2011.

February 28, 2011

Book of Proof

Filed under: Algorithms,Mathematics — Patrick Durusau @ 8:21 am

Book of Proof by Richard Hammack.

Important for topic maps research, it also captures a distinction that is sometimes overlooked in topic maps.

From the Introduction:

This is a book about how to prove theorems.

Until this point in your education, you may have regarded mathematics as being a primarily computational discipline. You have learned to solve equations, compute derivatives and integrals, multiply matrices and find determinants; and you have seen how these things can answer practical questions about the real world. In this setting, your primary goal in using mathematics has been to compute answers.

But there is another approach to mathematics that is more theoretical than computational. In this approach, the primary goal is to understand mathematical structures, to prove mathematical statements, and even to discover new mathematical theorems and theories. The mathematical techniques and procedures that you have learned and used up until now have their origins in this theoretical side of mathematics. For example, in computing the area under a curve, you use the Fundamental Theorem of Calculus. It is because this theorem is true that your answer is correct. However, in your calculus class you were probably far more concerned with how that theorem could be applied than in understanding why it is true. But how do we know it is true? How can we convince ourselves or others of its validity? Questions of this nature belong to the theoretical realm of mathematics. This book is an introduction to that realm.

This book will initiate you into an esoteric world. You will learn to understand and apply the methods of thought that mathematicians use to verify theorems, explore mathematical truth and create new mathematical theories. This will prepare you for advanced mathematics courses, for you will be better able to understand proofs, write your own proofs and think critically and inquisitively about mathematics.

Quite legitimately there are topic map activities that are concerned with the efficient application and processing of particular ways to identify subjects and to determine when subject sameness has occurred.

It is equally legitimate to investigate how subject identity is viewed in different domains and the nature of data structures that can best represent those views.

Either one without the other is incomplete.

For those walking on the theoretical side of the street, I think this volume will prove to be quite valuable.

