Another Word For It – Patrick Durusau on Topic Maps and Semantic Diversity

October 28, 2011

Strata Conference: Making Data Work

Filed under: Conferences,Data,Data Mining,Data Science — Patrick Durusau @ 3:15 pm

Strata Conference: Making Data Work – Proceedings from the New York Strata Conference, Sept. 22-23, 2011.

OK, so you missed the live video feeds. Don’t despair, videos are available for some and slides appear to be available for all. Not like being there or seeing the videos but better than missing it altogether!

A number of quite remarkable presentations.

Semantic APIs

Filed under: Semantics,Topic Maps — Patrick Durusau @ 3:15 pm

Semantic APIs

My contribution to where I think JTC 1/SC 34/WG 3 should be going.

This is strictly a discussion document; it does not represent the views of the US or any other national body, or of WG 3.

Comments welcome!

Factual Resolve

Factual Resolve

Factual has a new API – Resolve:

From the post:

The Internet is awash with data. Where ten years ago developers had difficulty finding data to power applications, today’s difficulty lies in making sense of its abundance, identifying signal amidst the noise, and understanding its contextual relevance. To address these problems Factual is today launching Resolve — an entity resolution API that makes partial records complete, matches one entity against another, and assists in de-duping and normalizing datasets.

The idea behind Resolve is very straightforward: you tell us what you know about an entity, and we, in turn, tell you everything we know about it. Because data is so commonly fractured and heterogeneous, we accept fragments of an entity and return the matching entity in its entirety. Resolve allows you to do a number of things that will make your data engineering tasks easier:

  • enrich records by populating missing attributes, including category, lat/long, and address
  • de-dupe your own place database
  • convert multiple daily deal and coupon feeds into a single normalized, georeferenced feed
  • identify entities unequivocally by their attributes

For example: you may be integrating data from an app that provides only the name of a place and an imprecise location. Pass what you know to Factual Resolve via a GET request, with the attributes included as JSON-encoded key/value pairs:
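The post's own request example is not reproduced above, so here is a rough sketch of what such a call might look like in Python; the endpoint, the "values" parameter name, and the field names are assumptions for illustration, not taken from Factual's documentation:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical Resolve query: the endpoint and parameter name below are
# assumptions for illustration, not Factual's documented API, and a real
# call would also need an API key.
fragment = {"name": "Joe's Diner", "locality": "New York"}
query = urllib.parse.urlencode({"values": json.dumps(fragment)})
url = "https://api.v3.factual.com/places/resolve?" + query  # hypothetical endpoint

with urllib.request.urlopen(url) as response:
    candidates = json.load(response)

# A successful match would return the entity in full: category, lat/long, address, etc.
print(candidates)
```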

I particularly like the line:

identify entities unequivocally by their attributes

I don’t know about the “unequivocally” part but the rest of it rings true. At least in my experience.

Dedupe, Merge, and Purge: the Art of Normalization

Filed under: Deduplication,Merging,Purge — Patrick Durusau @ 3:14 pm

Dedupe, Merge, and Purge: the Art of Normalization by Tyler Bell and Leo Polovets.

From the description:

Big Noise always accompanies Big Data, especially when extracting entities from the tangle of duplicate, partial, fragmented and heterogeneous information we call the Internet. The ~17m physical businesses in the US, for example, are found on over 1 billion webpages and endpoints across 5 million domains and applications. Organizing such a disparate collection of pages into a canonical set of things requires a combination of distributed data processing and human-based domain knowledge. This presentation stresses the importance of entity resolution within a business context and provides real-world examples and pragmatic insight into the process of canonicalization.

I like the Big Noise line. That may have some traction. It will certainly be the case that when users start having contact with unfiltered big data, they are likely to be as annoyed as they are with web searching.

The answer to their questions is likely “out there” but it lies just beyond their grasp. When they fail, they won’t blame the Big Noise or their own lack of skill, but the inoffensive (and ineffective) tools at hand. Guess it is a good thing search engines are free, save for advertising.

PS: The slides have a number of blanks. I have written to the authors, well, to their company, asking for corrected slides to be posted.
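As a toy illustration of the canonicalization step the talk describes (and nothing more than that; the keying scheme below is my own simplification), consider collapsing near-duplicate place records on a normalized name plus rounded coordinates:

```python
import re
from collections import defaultdict

def canonical_key(record):
    """Crude blocking key: lowercased name stripped of punctuation plus coordinates
    rounded to roughly 100 m. Real pipelines weigh many more signals (address,
    phone, category) and tolerate much messier input."""
    name = re.sub(r"[^a-z0-9 ]", "", record["name"].lower()).strip()
    return (name, round(record["lat"], 3), round(record["lon"], 3))

records = [
    {"name": "Joe's Diner", "lat": 40.7411, "lon": -73.9897},
    {"name": "Joes Diner",  "lat": 40.7412, "lon": -73.9896},
    {"name": "Blue Bottle Coffee", "lat": 37.7763, "lon": -122.4233},
]

buckets = defaultdict(list)
for r in records:
    buckets[canonical_key(r)].append(r)

deduped = [group[0] for group in buckets.values()]
print(len(records), "records ->", len(deduped), "canonical entities")
```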

Network Modeling and Analysis in Health Informatics and Bioinformatics (NetMAHIB)

Filed under: Bioinformatics,Biomedical,Health care — Patrick Durusau @ 3:14 pm

Network Modeling and Analysis in Health Informatics and Bioinformatics (NetMAHIB) – Editor-in-Chief: Reda Alhajj, University of Calgary.

From Springer, a new journal of health informatics and bioinformatics.

From the announcement:

NetMAHIB publishes original research articles and reviews reporting how graph theory, statistics, linear algebra and machine learning techniques can be effectively used for modelling and knowledge discovery in health informatics and bioinformatics. It aims at creating a synergy between these disciplines by providing a forum for disseminating the latest developments and research findings; hence results can be shared with readers across institutions, governments, researchers, students, and the industry. The journal emphasizes fundamental contributions on new methodologies, discoveries and techniques that have general applicability and which form the basis for network based modelling and knowledge discovery in health informatics and bioinformatics.

The NetMAHIB journal is proud to have an outstanding group of editors who widely and rigorously cover the multidisciplinary scope of the journal. They are known to be research leaders in the field of Health Informatics and Bioinformatics. Further, the NetMAHIB journal is characterized by providing thorough constructive reviews by experts in the field and by a reduced turn-around time, which allows research results to be disseminated and shared on a timely basis. The target of the editors is to complete the first round of the refereeing process within about 8 to 10 weeks of submission. Accepted papers go to the online first list and are immediately made available for access by the research community.

Context and Semantics for Knowledge Management – … Personal Productivity [and Job Security]

Filed under: Context,Knowledge Management,Ontology,Semantic Web,Semantics — Patrick Durusau @ 3:13 pm

Context and Semantics for Knowledge Management – Technologies for Personal Productivity, edited by Paul Warren, John Davies, and Elena Simperl. 1st edition, 2011, X, 392 p., 120 illus., 4 in color. Hardcover, ISBN 978-3-642-19509-9.

I quite agree with the statement: “the fact that much corporate knowledge only resides in employees’ heads seriously hampers reuse.” True, but it is also a source of job security, in organizations both large and small, in the U.S. and in other countries as well.

I don’t think any serious person believes the Pentagon (US) needs more than 6,000 HR systems. But job security presents different requirements from, say, productivity or accomplishment of mission (aside from the mission of remaining employed), in this case national defense.

How one overcomes job security will vary from system to system. Be aware that it is a non-technical issue and technology is not the answer to it. It is a management issue that management would like to treat as a technology problem. Treating personnel issues as problems that can be solved with technology nearly universally fails.

From the announcement:

Knowledge and information are among the biggest assets of enterprises and organizations. However, efficiently managing, maintaining, accessing, and reusing this intangible treasure is difficult. Information overload makes it difficult to focus on the information that really matters; the fact that much corporate knowledge only resides in employees’ heads seriously hampers reuse.

The work described in this book is motivated by the need to increase the productivity of knowledge work. Based on results from the EU-funded ACTIVE project and complemented by recent related results from other researchers, the application of three approaches is presented: the synergy of Web 2.0 and semantic technology; context-based information delivery; and the use of technology to support informal user processes. The contributions are organized in five parts. Part I comprises a general introduction and a description of the opportunities and challenges faced by organizations in exploiting Web 2.0 capabilities. Part II looks at the technologies, and also some methodologies, developed in ACTIVE. Part III describes how these technologies have been evaluated in three case studies within the project. Part IV starts with a chapter describing the principal market trends for knowledge management solutions, and then includes a number of chapters describing work complementary to ACTIVE. Finally, Part V draws conclusions and indicates further areas for research.

Overall, this book mainly aims at researchers in academia and industry looking for a state-of-the-art overview of the use of semantic and Web 2.0 technologies for knowledge management and personal productivity. Practitioners in industry will also benefit, in particular from the case studies which highlight cutting-edge applications in these fields.

Teradata Provides the Simplest Way to Bring the Science of Data to the Art of Business

Filed under: Hadoop,MapReduce,Marketing — Patrick Durusau @ 3:13 pm

Teradata Provides the Simplest Way to Bring the Science of Data to the Art of Business

From the post:

SAN CARLOS, California – Teradata (NYSE: TDC), the analytic data solutions company, today announced the new Teradata Aster MapReduce Platform that will speed adoption of big data analytics. Big data analytics can be a valuable tool for increasing corporate profitability by unlocking information that can be used for everything from optimizing digital marketing or detecting fraud to measuring and reporting machine operations in remote locations. However, until now, the cost of mining large volumes of multi-structured data and a widespread scarcity of staff with the required specialized analytical skills have largely prevented adoption of big data analytics.

The new Teradata Aster MapReduce Platform marries MapReduce, the language of big data analytics, with Structured Query Language (SQL), the language of business analytics. It includes Aster Database 5.0, a new Aster MapReduce Appliance—which extends the Aster software deployment options beyond software-only and Cloud—and the Teradata-Aster Adaptor for high-speed data transfer between Teradata and Aster Data systems.

I leave the evaluation of these products to one side for now to draw your attention to:

Teradata Aster makes it easy for any business person to see, explore, and understand multi-structured data. No longer is big data analysis just in the hands of the few data scientists or MapReduce specialists in an organization. (emphasis added)

I am not arguing that this is true or even a useful idea, but consider the impact it is going to have on the average business executive. A good marketing move, if not very good for the customers who buy into it. Perhaps there is a kernel of truth we can tap into for marketing topic maps.

tm – Text Mining Package

Filed under: Data Mining,R,Text Extraction — Patrick Durusau @ 3:12 pm

tm – Text Mining Package

From the webpage:

tm (shorthand for Text Mining Infrastructure in R) provides a framework for text mining applications within R.

The tm package offers functionality for managing text documents, abstracts the process of document manipulation, and eases the use of heterogeneous text formats in R. The package has integrated database backend support to minimize memory demands. Advanced metadata management is implemented for collections of text documents to ease working with large, metadata-enriched document sets.

The package ships with native support for handling the Reuters-21578 data set, Gmane RSS feeds, e-mails, and several classic file formats (e.g. plain text, CSV text, or PDFs).

Admittedly, the name “tm” caught my attention, but a quick review confirmed that the package could be useful to topic map authors.
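For readers new to the package, here is a minimal sketch of the core workflow (corpus construction, cleaning, and a document-term matrix); the toy documents are mine, not from the package documentation:

```r
library(tm)

# Toy corpus; in practice a Source would point at files, a database or a feed.
docs <- c("Topic maps merge subjects across data sets.",
          "Text mining in R starts with a corpus.")
corpus <- Corpus(VectorSource(docs))

# Basic cleaning transformations shipped with tm.
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))

# Document-term matrix for downstream analysis (clustering, classification, ...).
dtm <- DocumentTermMatrix(corpus)
inspect(dtm)
```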

Radical Cartography

Filed under: Cartography,Geographic Data,Maps,Visualization — Patrick Durusau @ 3:12 pm

Radical Cartography

You have to choose categories from the left-hand menu to see any content.

A wide variety of content, some of which may be familiar, some of which may not be.

I was particularly amused by the “Center of the World” map. Look for New York and you will find it.

To me it explains why 9/11 retains currency while the poisoning of a large area in Japan with radiation has slipped from view, at least in the United States. (To pick only one event that merits more informed coverage and attention than it has gotten in the United States.)

Dealing with Data (Science 11 Feb. 2011)

Filed under: Data,Data Mining,Data Science — Patrick Durusau @ 3:11 pm

Dealing with Data (Science 11 Feb. 2011)

From the website:

In the 11 February 2011 issue, Science joins with colleagues from Science Signaling, Science Translational Medicine, and Science Careers to provide a broad look at the issues surrounding the increasingly huge influx of research data. This collection of articles highlights both the challenges posed by the data deluge and the opportunities that can be realized if we can better organize and access the data.

The Science cover (left) features a word cloud generated from all of the content from the magazine’s special section.

Science is making access to this entire collection FREE (simple registration is required for non-subscribers).

Better late than never!

This is a very good overview of the big data issue, from a science perspective.

October 27, 2011

HCIR 2011

Filed under: Conferences,Information Retrieval — Patrick Durusau @ 4:46 pm

HCIR 2011 Papers

From the homepage:

The Fifth Workshop on Human-Computer Interaction and Information Retrieval took place all day on Thursday, October 20th, 2011, at Google’s main campus in Mountain View, California. There was a reception on Wednesday evening before the workshop, which attracted about a hundred participants.

By my count fourteen (14) papers and twenty-eight (28) posters.

Quite a gold mine of material and I look forward to a long weekend with them!

Enjoy!

PS: Interesting that papers from prior workshops are only available starting in 2010.

Data.gov

Filed under: Data,Data Mining,Government Data — Patrick Durusau @ 4:46 pm

Data.gov

A truly remarkable range of resources from the U.S. Federal Government, made all the more interesting by Data.gov Next Generation:

Data.gov starts an exciting new chapter in its evolution to make government data more accessible and usable than ever before. The data catalog website that broke new ground just two years ago is once again redefining the Open Data experience. Learn more about Data.gov’s transformation into a cloud-based Open Data platform for citizens, developers and government agencies in this 4-minute introductory video.

Developers should take a look at: http://dev.socrata.com/.

OpenStack

Filed under: Cloud Computing — Patrick Durusau @ 4:46 pm

OpenStack

From the OpenStack wiki:

The OpenStack Open Source Cloud Mission: to produce the ubiquitous Open Source Cloud Computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable.

There are three (3) core projects:

OPENSTACK COMPUTE: open source software and standards for large-scale deployments of automatically provisioned virtual compute instances.

OPENSTACK OBJECT STORAGE: open source software and standards for large-scale, redundant storage of static objects.

OPENSTACK IMAGE SERVICE: provides discovery, registration, and delivery services for virtual disk images.

Two (2) new projects that will be promoted to core on the next release:

OpenStack Identity: Code-named Keystone, the OpenStack Identity Service provides unified authentication across all OpenStack projects and integrates with existing authentication systems.

OpenStack Dashboard: Dashboard enables administrators and users to access and provision cloud-based resources through a self-service portal.

And a host of unofficial projects, related to one or more OpenStack components. (OpenStack Projects)

So far as I could tell, there are no projects that deal with mapping between data sets in any re-usable way.

Do you think cloud computing will make semantic impedance more or less obvious?

More obvious because of the clash of the unknown semantics of data sets.

Less obvious because the larger the data sets, the greater the tendency to assume the answer(s), however curious, must be correct.

Which do you think it will be?

Timetric

Filed under: Data Mining,Data Structures — Patrick Durusau @ 4:46 pm

Timetric: Everything you need to publish data and research online

Billed as having more than three (3) million public statistics.

Looks like an interesting data source.

Anyone have experience with this site in particular?

AnalyticBridge

Filed under: Analytics,Bibliography,Data Analysis — Patrick Durusau @ 4:45 pm

AnalyticBridge: A Social Network for Analytic Professionals

Some interesting resources, possibly useful groups.

Anyone with experience with this site?

Department of Numbers

Filed under: Data Source,Marketing — Patrick Durusau @ 4:45 pm

Department of Numbers

From the webpage:

The Department of Numbers contextualizes public data so that individuals can form independent opinions on everyday social and economic matters.

A possible source for both data and analysis of public interest. I am thinking it will be easier to attract attention to topic maps that address current issues.

HTML5 web dev reading list

Filed under: HTML,Interface Research/Design — Patrick Durusau @ 4:45 pm

HTML5 web dev reading list

I am sure there are more of these than can be easily counted.

Suggestions on others that will be particularly useful for people developing topic map interfaces? (Should not be different from effective interfaces in general.)

Thanks!

Chart Suggestions – A Thought-Starter

Filed under: Visualization — Patrick Durusau @ 4:45 pm

Chart Suggestions – A Thought-Starter

A visual presentation of various chart types.

Worth printing out and keeping on hand.

October 26, 2011

Information Visualization Framework

Filed under: Graphics,Visualization — Patrick Durusau @ 6:59 pm

Information Visualization Framework (described as a chapter that did not make it into Visual Complexity: Mapping Patterns of Information) by Manuel Lima.

From the post:

Provide easy evaluation methodologies for existing tools and approaches. Information visualization requires a common rule system that can accordingly distinguish the good from the bad, the appropriate from the inappropriate, the usable from the unusable, the effective from the ineffective. Case studies and success stories are a great first step in this direction. If information visualization is a vehicle for evidence and clarity, it should embrace the same ideology in the definition of its own practice, by creating a systematic body of analysis able to properly evaluate the success of any project. Quantitative and qualitative evaluation methods should be welcomed, including observational studies, participatory assessment, usability testing, contextual interviews, and user feedback. This effort should, most importantly, go hand in hand with the development of an adequate language of criticism.

The forsaken chapter is quite good, but I am curious what you make of this final paragraph. Do we really need “…a common rule system…” for visualization? I know the author thinks so, but what do you think? What reasons would you give for your answer?

Collective Intelligence 2012: Deadline November 4, 2011

Filed under: Conferences,Crowd Sourcing — Patrick Durusau @ 6:59 pm

Collective Intelligence 2012: Deadline November 4, 2011 by Panos Ipeirotis.

From the post:

For all those of you interested in crowdsourcing, I would like to bring your attention to a new conference, named Collective Intelligence 2012, being organized at MIT this spring (April 18-20, 2012) by Tom Malone and Luis von Ahn. The conference is expected to have a set of 15-20 invited speakers (disclaimer: I am one of them), and also accepts papers submitted for publication. The deadline is November 4th, 2011, so if you have something that you would be willing to share with a wide audience interested in collective intelligence, this may be a place to consider.

If you do attend, please share your thoughts on the papers as relevant to crowdsourcing and topic map authoring. Thanks!

Computing Document Similarity using Lucene Term Vectors

Filed under: Lucene,Similarity,Vectors — Patrick Durusau @ 6:58 pm

Computing Document Similarity using Lucene Term Vectors

From the post:

Someone asked me a question recently about implementing document similarity, and since he was using Lucene, I pointed him to the Lucene Term Vector API. I hadn’t used the API myself, but I knew in general how it worked, so when he came back and told me that he could not make it work, I wanted to try it out for myself, to give myself a basis for asking further questions.

I already had a Lucene index (built by SOLR) of about 3000 medical articles for whose content field I had enabled term vectors as part of something I was trying out for highlighting, so I decided to use that. If you want to follow along and have to build your index from scratch, you can either use a field definition in your SOLR schema.xml file similar to this:

Nice walk through on document vectors.

Plus a reminder that “document” similarity can only take you so far. Once you find a relevant document, you still have to search for the subject of interest. Not to mention that you view that subject absent its relationship to other subjects, etc.
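For readers who want the underlying intuition without the Lucene API, here is a minimal sketch of cosine similarity over raw term-frequency vectors in Python; it illustrates the idea the post implements with Lucene term vectors, not the Lucene code itself:

```python
import math
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity over raw term-frequency vectors (no stemming, no IDF)."""
    tf_a, tf_b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(tf_a[t] * tf_b[t] for t in set(tf_a) & set(tf_b))
    norm_a = math.sqrt(sum(v * v for v in tf_a.values()))
    norm_b = math.sqrt(sum(v * v for v in tf_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine_similarity("acute myocardial infarction treatment",
                        "treatment of acute myocardial infarction"))
```

Lucene's term vectors give you the same per-document frequency information already tokenized and indexed, which is what makes the approach practical at the scale of a few thousand medical articles.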

Oracle Releases NoSQL Database

Filed under: Java,NoSQL,Oracle — Patrick Durusau @ 6:58 pm

Oracle Releases NoSQL Database by Leila Meyer.

From the post:

Oracle has released Oracle NoSQL Database 11g, the company’s new entry into the NoSQL database market. Oracle NoSQL Database is a distributed, highly scalable, key-value database that uses the Oracle Berkeley Database Java Edition as its underlying storage system. Developed as a key component of the Oracle Big Data Appliance that was unveiled Oct. 3, Oracle NoSQL Database is available now as a standalone product.

(see the post for the list of features and other details)

Oracle NoSQL Database will be available in a Community Edition through an open source license and an Enterprise Edition through an Oracle Technology Network (OTN) license. The Community Edition is still awaiting final licensing approval, but the Enterprise Edition is available now for download from the Oracle Technology Network.

Don’t know that I will have the time but it would be amusing to compare the actual release with pre-release speculation about its features and capabilities.

More to follow as information becomes available.

Overview of the Oracle NoSQL Database

Filed under: NoSQL,Oracle — Patrick Durusau @ 6:58 pm

Overview of the Oracle NoSQL Database – a nice review by Daniel Abadi.

Where Daniel is inferring information he makes that clear, but as one of the leading researchers in the area, I suspect we will eventually find that he wasn’t far off the mark.

Interesting reading.

Google removes plus (+) operator

Filed under: Search Engines,Search Interface,Searching — Patrick Durusau @ 6:58 pm

Google removes plus (+) operator

From the post:

The “+” operator used to mean “required” to Google, I think. But it also meant “and exactly that word is required, not an alternate form.” I think? Maybe it always was just a synonym for double quotes, and never meant ‘required’? Or maybe double quotes mean ‘required’ too?

At any rate, the plus operator is gone now.

I’m not entirely sure that the quotes will actually insist on the quoted word being present in the page? Can anyone find a counter-example?

I had actually noticed a while ago that the google advanced search page had stopped providing any fields that resulted in “+”, and was suggesting double quotes for “exactly this form of word” (not variants), rather than “phrase”. Exactly what given operators (and bare searches) do has continually evolved over time, and isn’t always documented or reflected in the “search tips” page or “advanced search” screen.

The post is a good example of using the Internet Archive to research the prior state of the web.

BTW, the comments and discussion on this were quite amusing. “Kelly Fee,” a Google employee, had these responses to questions about removal of the “+” operator:

We’ve made the ways you can tell Google exactly what you want more consistent by expanding the functionality of the quotation marks operator. In addition to using this operator to search for an exact phrase, you can now add quotation marks around a single word to tell Google to match that word precisely. So, if in the past you would have searched for [magazine +latina], you should now search for [magazine “latina”].

We’re constantly making changes to Google Search – adding new features, tweaking the look and feel, running experiments – all to get you the information you need as quickly and as easily as possible. This recent change is another step toward simplifying the search experience to get you to the info you want.

If you read the comments, having a simple search experience wasn’t the goal of most users. Finding relevant information was.

Kelly reassures users they are being heard, but ignored:

Thanks for sharing your thoughts. I especially appreciate everyone’s passion for search operators (if only every Google Search user were aware of these tools like you are…).

One thing I’d like to add to my original post is that, as with any change we make to our search engine, we put a lot of thought into this modification, but we’re always interested in user feedback.

I hope that you’ll continue to give us feedback in the future so that we can make your experience on Google more enjoyable.

After a number of posts on the loss of function from the elimination of the “+” operator, Kelly creatively mis-hears the questions and comes up with an example that works.

I just tested out the quotes operator to make sure that it still works for phrases and it does. I searched for [from her eyes] and then [“from her eyes”] and got different results. I also tried [from her “eye”] and [from her eye] and got different results for each query, which is how it is intended to work.

Many people understand that putting quotes around a phrase tells a search engine to search for that exact phrase. This change applies that same idea to a specific word.

Would it help to know that Kelly Fee was a gymnast?

Can an algorithm be wrong? Twitter Trends,…

Filed under: Algorithms — Patrick Durusau @ 6:58 pm

Can an algorithm be wrong? Twitter Trends, the specter of censorship, and our faith in the algorithms around us

From the post:

The interesting question is not whether Twitter is censoring its Trends list. The interesting question is, what do we think the Trends list is, what it represents and how it works, that we can presume to hold it accountable when we think it is “wrong?” What are these algorithms, and what do we want them to be?

It’s not the first time it has been asked. Gilad Lotan at SocialFlow (and erstwhile Microsoft UX designer), spurred by questions raised by participants and supporters of the Occupy Wall Street protests, asks the question: is Twitter censoring its Trends list to exclude #occupywallstreet and #occupyboston? While the protest movement gains traction and media coverage, and participants, observers and critics turn to Twitter to discuss it, why are these widely-known hashtags not Trending? Why are they not Trending in the very cities where protests have occurred, including New York?

Careful analysis of the data indicates that Twitter is not censoring its Trends list. But I have heard people that I consider to be quite responsible argue to the contrary.

I raise this here as a caution against criticism of topic maps that don’t reflect what you think are undisputed “facts.” That you think something is true, or that it must be true (in the case of political discussions), isn’t a sufficient basis for others to feel the same way. Nor does it mean their topic maps are “censoring” your view. Others have no obligation to advance your perspective.

SQL -> Pig Translation

Filed under: Pig,SQL — Patrick Durusau @ 6:57 pm

hadoop pig documentation

From the post:

It is sometimes difficult for SQL users to learn Pig because their mind is used to working in SQL. In this tutorial, examples of various SQL statements are shown, and then translated into Pig statements. For more detailed documentation, please see the official Pig manual.

This could be an effective technique for teaching Pig to SQL programmers. What do you think?
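To make the idea concrete, here is the kind of side-by-side translation the tutorial walks through; this is a hedged sketch of my own (the input file name and schema are assumptions), not an example copied from the post:

```pig
-- SQL: SELECT customer, SUM(amount) FROM orders GROUP BY customer;
-- Pig Latin equivalent (hypothetical input file and schema):
orders  = LOAD 'orders.csv' USING PigStorage(',') AS (customer:chararray, amount:double);
grouped = GROUP orders BY customer;
totals  = FOREACH grouped GENERATE group AS customer, SUM(orders.amount) AS total;
DUMP totals;
```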

NoSQL notes

Filed under: NoSQL — Patrick Durusau @ 6:57 pm

NoSQL notes

From the post:

Last week I visited with James Phillips of Couchbase, Max Schireson and Eliot Horowitz of 10gen, and Todd Lipcon, Eric Sammer, and Omer Trajman of Cloudera. I guess it’s time for a round-up NoSQL post. 🙂

Views of the NoSQL market horse race are reasonably consistent, with perhaps some elements of “Where you stand depends upon where you sit.”

Quite a bit of “where you sit,” although amusing nonetheless.

LingPipe and Text Processing Books

Filed under: Java,LingPipe,Natural Language Processing — Patrick Durusau @ 6:57 pm

LingPipe and Text Processing Books

From the website:

We’ve decided to split what used to be the monolithic LingPipe book in two. As they’re written, we’ll be putting up drafts here.

NLP with LingPipe

You can download the PDF of the LingPipe book here:

Carpenter, Bob and Breck Baldwin. 2011. Natural Language Processing with LingPipe 4. Draft 0.5. June 2011. [Download: lingpipe-book-0.5.pdf]

Text Processing with Java

The PDF of the book on text in Java is here:

Carpenter, Bob, Mitzi Morris, and Breck Baldwin. 2011. Text Processing with Java 6. Draft 0.5. June 2011. [Download: java-text-book-0.5.pdf]

The pages are 7 inches by 10 inches, so if you print, you have the choice of large margins (no scaling) or large print (print fit to page).

Source code is also available.

Royal Society Journal Archive

Filed under: Data Source — Patrick Durusau @ 3:22 pm

Royal Society Journal Archive – Free Permanent Access

From the announcement:

The Royal Society has today announced that its world-famous historical journal archive – which includes the first ever peer-reviewed scientific journal – has been made permanently free to access online.

So, if you search for information using modern terminology, are you going to pick up materials from 10, 50, 100, 300 years ago?

Where do you think the break point will be on terminology?

Here’s my suggestion:

  1. We will pair off in two-person teams. The A teams will research a subject and record their queries, along with an estimate of when the language changed.
  2. The B teams will take the A team results and try to show that the estimate of when the language changed is incorrect (too early, too late, never, etc.).
  3. Prizes will be awarded for the best results as well as the most interesting queries, subjects and odd facts learned along the way.