Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

January 20, 2015

Modelling Plot: On the “conversional novel”

Filed under: Language,Literature,Text Analytics,Text Mining — Patrick Durusau @ 11:11 am

Modelling Plot: On the “conversional novel” by Andrew Piper.

From the post:

I am pleased to announce the acceptance of a new piece that will be appearing soon in New Literary History. In it, I explore techniques for identifying narratives of conversion in the modern novel in German, French and English. A great deal of new work has been circulating recently that addresses the question of plot structures within different genres and how we might or might not be able to model these computationally. My hope is that this piece offers a compelling new way of computationally studying different plot types and understanding their meaning within different genres.

Looking over recent work, in addition to Ben Schmidt’s original post examining plot “arcs” in TV shows using PCA, there have been posts by Ted Underwood and Matthew Jockers looking at novels, as well as a new piece in LLC that tries to identify plot units in fairy tales using the tools of natural language processing (frame nets and identity extraction). In this vein, my work offers an attempt to think about a single plot “type” (narrative conversion) and its role in the development of the novel over the long nineteenth century. How might we develop models that register the novel’s relationship to the narration of profound change, and how might such narratives be indicative of readerly investment? Is there something intrinsic, I have been asking myself, to the way novels ask us to commit to them? If so, does this have something to do with larger linguistic currents within them – not just a single line, passage, or character, or even something like “style” – but the way a greater shift of language over the course of the novel can be generative of affective states such as allegiance, belief or conviction? Can linguistic change, in other words, serve as an efficacious vehicle of readerly devotion?

While the full paper is available here, I wanted to post a distilled version of what I see as its primary findings. It’s a long essay that not only tries to experiment with the project of modelling plot, but also reflects on the process of model building itself and its place within critical reading practices. In many ways, it’s a polemic against the unfortunate binariness that surrounds debates in our field right now (distant/close, surface/depth etc.). Instead, I want us to see how computational modelling is in many ways conversional in nature, if by that we understand it as a circular process of gradually approaching some imaginary, yet never attainable centre, one that oscillates between both quantitative and qualitative stances (distant and close practices of reading).

Andrew writes of “…critical reading practices….” I’m not sure that technology will increase the use of “…critical reading practices…” but it certainly offers the opportunity to “read” texts in different ways.

I have done this with IT standards but never a novel: try reading it from the back forwards, a sentence at a time. At least when proofing your own writing, it provides a radically different perspective than the more usual front-to-back reading. The first thing you notice is that it interrupts your reading/skimming speed, so you will catch more errors as well as nuances in the text.

Before you think that literary analysis is a bit far afield from “practical” application, remember that narratives (think literature) are what drive social policy and decision making.

Take the current “war on terrorism” narrative that is so popular and unquestioned in the United States. Ask anyone inside the beltway in D.C. and they will blather on and on about the need to defend against terrorism. But there is an absolute paucity of terrorists, at least by deed, in the United States. Why does the narrative persist in the absence of any evidence to support it?

The various Red Scares in U.S. history were similar narratives that have never completely faded. They too had a radical disconnect between the narrative and the “facts on the ground.”

Piper doesn’t offer answers to those sorts of questions, but a deeper understanding of narrative, such as is found in novels, may lead to hints with profound policy implications.

December 24, 2014

How Language Shapes Thought:…

Filed under: Language,Psychology — Patrick Durusau @ 3:18 pm

How Language Shapes Thought: The languages we speak affect our perceptions of the world by Lera Boroditsky.

From the article:

I am standing next to a five-year-old girl in Pormpuraaw, a small Aboriginal community on the western edge of Cape York in northern Australia. When I ask her to point north, she points precisely and without hesitation. My compass says she is right. Later, back in a lecture hall at Stanford University, I make the same request of an audience of distinguished scholars—winners of science medals and genius prizes. Some of them have come to this very room to hear lectures for more than 40 years. I ask them to close their eyes (so they don’t cheat) and point north. Many refuse; they do not know the answer. Those who do point take a while to think about it and then aim in all possible directions. I have repeated this exercise at Harvard and Princeton and in Moscow, London and Beijing, always with the same results.

A five-year-old in one culture can do something with ease that eminent scientists in other cultures struggle with. This is a big difference in cognitive ability. What could explain it? The surprising answer, it turns out, may be language.

Michael Nielsen mentioned this article in a tweet about a new book due out from Lera in the Fall of 2015.

Looking further I found: 7,000 Universes: How the Language We Speak Shapes the Way We Think [Kindle Edition] by Lera Boroditsky. (September, 2015, available for pre-order now)

As Michael says, looking forward to seeing this book! Sounds like a good title to forward to Steve Newcomb. Steve would argue, correctly I might add, that any natural language may contain an infinite number of possible universes of discourse.

I assume some of this issue will be caught by testing your topic map UIs with actual users in whatever subject domain and language you are offering information in. That is, rather than considering the influence of language in the abstract, you will be silently taking it into account through user feedback. You are testing your topic map deliverables with live users before delivery. Yes?

There are other papers by Lera available for your leisure reading.

December 23, 2014

The Sense of Style [25 December 2014 – 10 AM – C-SPAN2]

Filed under: Language,Writing — Patrick Durusau @ 2:03 pm

Steven Pinker discussing his book The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century.

From the description:

Steven Pinker talked about his book, The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century, in which he questions why so much of our writing today is bad. Professor Pinker said that while texting and the internet are blamed for developing bad writing habits, especially among young people, good writing has always been a difficult task.

The transcript, made for closed captioning, will convince you of the power of paragraphing if you attempt to read it. I may copy it, watch the lecture Christmas morning, insert paragraphing and ask C-SPAN if they would like a corrected copy. 😉

One suggestion for learning to write (like learning to program), that I have heard but never followed, is to type out text written by known good writers. As you probably suspect, my excuse is a lack of time. Perhaps that will be a New Year’s resolution for the coming year.

Becoming a better writer automatically means you will be communicating better with your audience. For some of us that may be a plus or a negative. You have been forewarned.

Enjoy!


In case you miss the broadcast, I found the video archive of the presentation. Nothing that will startle you but Pinker is an entertaining speaker.

I am watching the video early and Pinker points out an “inherent problem in the design of language.” [paraphrasing] We hold knowledge in a semantic network in our brains but when we use language to communicate some piece of that knowledge, the order of words in a sentence has to do two things at once:

* Serve as a code for meaning (who did what to whom)

* Present some bits of information to the reader before others (affects how the information is absorbed)

Pinker points out that the passive voice can make for better prose because it keeps the focus on the subject. (The passive is prevalent in bad prose, but Pinker argues that is due to the curse of knowledge, not the passive itself.)

Question: Do we need a form of passive voice in computer languages? What would that look like?

December 16, 2014

LT-Accelerate

Filed under: Language,Sentiment Analysis — Patrick Durusau @ 11:22 am

LT-Accelerate: a conference designed to help businesses, researchers and public administrations discover business value via Language Technology.

From the about page:

LT-Accelerate is a joint production of LT-Innovate, the European Association of the Language Technology Industry, and Alta Plana Corporation, a Washington DC based strategy consultancy headed by analyst Seth Grimes.

The conference was held December 4-5, 2014 in Brussels; the website offers seven (7) interviews with key speakers and slides from thirty-eight speakers.

Slides are not as in-depth as papers nor as useful as videos of the presentations, but they are still capable of sparking new ideas as you review them.

For example, the slides from Multi-Dimensional Sentiment Analysis by Stephen Pulman made me wonder what sentiment detection design would be appropriate for the Michael Brown grand jury transcripts.

Sentiment detection has been successfully used with tweets (140 character limit) and I am reliably informed that most of the text strings in the Michael Brown grand jury transcript are far longer than one hundred and forty (140) characters. 😉
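If you want to experiment, one plausible design is to score a long transcript sentence by sentence and then aggregate, rather than treating the whole document as one tweet-sized unit. A minimal sketch, assuming NLTK with its VADER lexicon and punkt tokenizer downloaded (the mean-of-compound-scores aggregation is just one choice among many):

    # Sentence-level sentiment over a long transcript, aggregated per
    # document. Assumes: pip install nltk, then nltk.download("punkt")
    # and nltk.download("vader_lexicon").
    from nltk.sentiment.vader import SentimentIntensityAnalyzer
    from nltk.tokenize import sent_tokenize

    def transcript_sentiment(text):
        """Score each sentence, then aggregate; long texts are not tweets."""
        analyzer = SentimentIntensityAnalyzer()
        scores = [analyzer.polarity_scores(s)["compound"]
                  for s in sent_tokenize(text)]
        return {
            "sentences": len(scores),
            "mean": sum(scores) / len(scores) if scores else 0.0,
            "most_negative": min(scores, default=0.0),
            "most_positive": max(scores, default=0.0),
        }

    sample = ("The witness said the officer seemed calm. "
              "Later testimony described the scene as chaotic and frightening.")
    print(transcript_sentiment(sample))

Tracking the per-sentence extremes, not just the mean, matters for testimony: a flat average can hide short, highly charged passages.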

Any sentiment detectives in the audience?

December 14, 2014

Inheritance Patterns in Citation Networks Reveal Scientific Memes

Filed under: Citation Analysis,Language,Linguistics,Meme,Social Networks — Patrick Durusau @ 8:37 pm

Inheritance Patterns in Citation Networks Reveal Scientific Memes by Tobias Kuhn, Matjaž Perc, and Dirk Helbing. (Phys. Rev. X 4, 041036 – Published 21 November 2014.)

Abstract:

Memes are the cultural equivalent of genes that spread across human culture by means of imitation. What makes a meme and what distinguishes it from other forms of information, however, is still poorly understood. Our analysis of memes in the scientific literature reveals that they are governed by a surprisingly simple relationship between frequency of occurrence and the degree to which they propagate along the citation graph. We propose a simple formalization of this pattern and validate it with data from close to 50 million publication records from the Web of Science, PubMed Central, and the American Physical Society. Evaluations relying on human annotators, citation network randomizations, and comparisons with several alternative approaches confirm that our formula is accurate and effective, without a dependence on linguistic or ontological knowledge and without the application of arbitrary thresholds or filters.

Popular Summary:

It is widely known that certain cultural entities—known as “memes”—in a sense behave and evolve like genes, replicating by means of human imitation. A new scientific concept, for example, spreads and mutates when other scientists start using and refining the concept and cite it in their publications. Unlike genes, however, little is known about the characteristic properties of memes and their specific effects, despite their central importance in science and human culture in general. We show that memes in the form of words and phrases in scientific publications can be characterized and identified by a simple mathematical regularity.

We define a scientific meme as a short unit of text that is replicated in citing publications (“graphene” and “self-organized criticality” are two examples). We employ nearly 50 million digital publication records from the American Physical Society, PubMed Central, and the Web of Science in our analysis. To identify and characterize scientific memes, we define a meme score that consists of a propagation score—quantifying the degree to which a meme aligns with the citation graph—multiplied by the frequency of occurrence of the word or phrase. Our method does not require arbitrary thresholds or filters and does not depend on any linguistic or ontological knowledge. We show that the results of the meme score are consistent with expert opinion and align well with the scientific concepts described on Wikipedia. The top-ranking memes, furthermore, have interesting bursty time dynamics, illustrating that memes are continuously developing, propagating, and, in a sense, fighting for the attention of scientists.

Our results open up future research directions for studying memes in a comprehensive fashion, which could lead to new insights in fields as disparate as cultural evolution, innovation, information diffusion, and social media.
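The summary’s recipe is simple enough to caricature in code: a meme score is frequency of occurrence times a propagation ratio comparing how often a term “sticks” along citation links versus how often it “sparks” up out of nowhere. A toy sketch under those assumptions follows; the paper’s exact definitions, including its controlled-noise term, differ in detail, so treat this as the shape of the idea rather than the published formula.

    # Toy meme score: frequency times (sticking / sparking). "papers" maps
    # paper id -> set of terms; "cites" maps paper id -> set of cited ids.
    # The published formula, including its noise term, differs in detail.
    def meme_score(term, papers, cites, eps=1e-9):
        carrying = {p for p, terms in papers.items() if term in terms}
        cites_carrier = {p for p in papers if cites.get(p, set()) & carrying}
        non_citers = set(papers) - cites_carrier
        sticking = len(cites_carrier & carrying) / (len(cites_carrier) + eps)
        sparking = len(non_citers & carrying) / (len(non_citers) + eps)
        frequency = len(carrying) / len(papers)
        return frequency * sticking / (sparking + eps)

    papers = {1: {"graphene"}, 2: {"graphene"}, 3: set(), 4: {"graphene"}}
    cites = {2: {1}, 3: {1}, 4: {2}}
    print(meme_score("graphene", papers, cites))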

You definitely should grab the PDF version of this article for printing and a slow read.

From Section III Discussion:


We show that the meme score can be calculated exactly and exhaustively without the introduction of arbitrary thresholds or filters and without relying on any kind of linguistic or ontological knowledge. The method is fast and reliable, and it can be applied to massive databases.

Fair enough, but “black,” “inflation,” and “traffic flow” all appear in the top fifty memes in physics. I don’t know that I would consider any of them to be “memes.”

There is much left to be discovered about memes, such as who is good at propagating them. It would not hurt if your research paper were the origin of a very popular meme.

I first saw this in a tweet by Max Fisher.

December 11, 2014

When Do Natural Language Metaphors Influence Reasoning?…

Filed under: Language,Metaphors — Patrick Durusau @ 11:23 am

When Do Natural Language Metaphors Influence Reasoning? A Follow-Up Study to Thibodeau and Boroditsky (2013) by Gerard J. Steen, W. Gudrun Reijnierse, and Christian Burgers.

Abstract:

In this article, we offer a critical view of Thibodeau and Boroditsky who report an effect of metaphorical framing on readers’ preference for political measures after exposure to a short text on the increase of crime in a fictitious town: when crime was metaphorically presented as a beast, readers became more enforcement-oriented than when crime was metaphorically framed as a virus. We argue that the design of the study has left room for alternative explanations. We report four experiments comprising a follow-up study, remedying several shortcomings in the original design while collecting more encompassing sets of data. Our experiments include three additions to the original studies: (1) a non-metaphorical control condition, which is contrasted to the two metaphorical framing conditions used by Thibodeau and Boroditsky, (2) text versions that do not have the other, potentially supporting metaphors of the original stimulus texts, (3) a pre-exposure measure of political preference (Experiments 1–2). We do not find a metaphorical framing effect but instead show that there is another process at play across the board which presumably has to do with simple exposure to textual information. Reading about crime increases people’s preference for enforcement irrespective of metaphorical frame or metaphorical support of the frame. These findings suggest the existence of boundary conditions under which metaphors can have differential effects on reasoning. Thus, our four experiments provide converging evidence raising questions about when metaphors do and do not influence reasoning.

The influence of metaphors on reasoning raises an interesting question for those attempting to duplicate the human brain in silicon: Can a previously recorded metaphor influence the outcome of AI reasoning?

Or can hearing the same information multiple times from different sources influence an AI’s perception of the validity of that information? (In a non-AI context, a relevant question for the Michael Brown grand jury discussion.)

On its own merits, a very good read and recommended to anyone who enjoys language issues.

December 6, 2014

Cultural Fault Lines Determine How New Words Spread On Twitter, Say Computational Linguists

Filed under: Computational Linguistics,Language,Linguistics — Patrick Durusau @ 9:11 am

Cultural Fault Lines Determine How New Words Spread On Twitter, Say Computational Linguists

From the post:

A dialect is a particular form of language that is limited to a specific location or population group. Linguists are fascinated by these variations because they are determined both by geography and by demographics. So studying them can produce important insights into the nature of society and how different groups within it interact.

That’s why linguists are keen to understand how new words, abbreviations and usages spread on new forms of electronic communication, such as social media platforms. It is easy to imagine that the rapid spread of neologisms could one day lead to a single unified dialect of netspeak. An interesting question is whether there is any evidence that this is actually happening.

Today, we get a fascinating insight into this problem thanks to the work of Jacob Eisenstein at the Georgia Institute of Technology in Atlanta and a few pals. These guys have measured the spread of neologisms on Twitter and say they have clear evidence that online language is not converging at all. Indeed, they say that electronic dialects are just as common as ordinary ones and seem to reflect the same fault lines in society.

Disappointment for those who thought the Net would help people overcome the curse of Babel.

When we move into new languages or means of communication, we simply take our linguistic diversity with us, like well-traveled but familiar luggage.

If you think about it, the multiple semantics given to owl:sameAs are another instance of the same phenomenon: semantically distinct groups assigned different semantics to the same token, owl:sameAs. That should not have been a surprise. But it was, and it will be every time one community privileges itself to be the giver of meaning for any term.

If you want to see the background for the post in full:

Diffusion of Lexical Change in Social Media by Jacob Eisenstein, Brendan O’Connor, Noah A. Smith, Eric P. Xing.

Abstract:

Computer-mediated communication is driving fundamental changes in the nature of written language. We investigate these changes by statistical analysis of a dataset comprising 107 million Twitter messages (authored by 2.7 million unique user accounts). Using a latent vector autoregressive model to aggregate across thousands of words, we identify high-level patterns in diffusion of linguistic change over the United States. Our model is robust to unpredictable changes in Twitter’s sampling rate, and provides a probabilistic characterization of the relationship of macro-scale linguistic influence to a set of demographic and geographic predictors. The results of this analysis offer support for prior arguments that focus on geographical proximity and population size. However, demographic similarity — especially with regard to race — plays an even more central role, as cities with similar racial demographics are far more likely to share linguistic influence. Rather than moving towards a single unified “netspeak” dialect, language evolution in computer-mediated communication reproduces existing fault lines in spoken American English.

December 4, 2014

Hebrew Astrolabe:…

Filed under: Astroinformatics,History,Language — Patrick Durusau @ 9:16 pm

Hebrew Astrolabe: A History of the World in 100 Objects, Status Symbols (1200 – 1400 AD) by Neil MacGregor.

From the webpage:

Neil MacGregor’s world history as told through objects at the British Museum. This week he is exploring high status objects from across the world around 700 years ago. Today he has chosen an astronomical instrument that could perform multiple tasks in the medieval age, from working out the time to preparing horoscopes. It is called an astrolabe and originates from Spain at a time when Christianity, Islam and Judaism coexisted and collaborated with relative ease – indeed this instrument carries symbols recognisable to all three religions. Neil considers who it was made for and how it was used. The astrolabe’s curator, Silke Ackermann, describes the device and its markings, while the historian Sir John Elliott discusses the political and religious climate of 14th century Spain. Was it as tolerant as it seems?

The astrolabe that is the focus of this podcast is quite remarkable. The Hebrew, Arabic and Spanish words on this astrolabe are all written in Hebrew characters.

Would you say that is multilingual?

BTW, this series from the British Museum will not be available indefinitely so start listening to these podcasts soon!

December 2, 2014

Cliques are nasty but Cliques are nastier

Filed under: Humor,Language — Patrick Durusau @ 3:34 pm

Cliques are nasty but Cliques are nastier by Lance Fortnow.

A heteronym that fails to make the listing at: The Heteronym Homepage.

From the Heteronym Homepage:

Heteronyms are words that are spelled identically but have different meanings when pronounced differently.

Before you jump to Lance’s post (see the comments as well), care to guess the pronunciations and meanings of “clique?”
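For the curious, a pronunciation dictionary can at least generate candidates. A small sketch using NLTK’s CMU Pronouncing Dictionary follows; it only finds words with more than one listed pronunciation, and deciding whether the meanings differ (the part that makes a true heteronym) is still on you.

    # Heteronym *candidates*: words with more than one pronunciation in
    # the CMU Pronouncing Dictionary. A true heteronym also needs distinct
    # meanings, which a pronunciation dictionary cannot decide.
    # Assumes: pip install nltk, then nltk.download("cmudict").
    from nltk.corpus import cmudict

    def heteronym_candidates(words):
        prondict = cmudict.dict()  # word -> list of phone lists
        out = {}
        for w in words:
            distinct = {tuple(p) for p in prondict.get(w.lower(), [])}
            if len(distinct) > 1:
                out[w] = sorted(" ".join(p) for p in distinct)
        return out

    for word, prons in heteronym_candidates(
            ["lead", "bass", "wind", "clique", "table"]).items():
        print(word, "->", prons)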

Enjoy!

November 30, 2014

Old World Language Families

Filed under: Graphics,Language,Visualization — Patrick Durusau @ 2:00 pm

[Image: Old World Language Families, a tree visualization of language families and their living speaker populations]

By design (limitations of space), not all languages were included.

Despite that, the original post has gotten seven hundred and twenty-two (722) comments as of today, a large number of which mention wanting a poster of this visualization.

I could assemble the same information, sans the interesting graphic and get no comments and no requests for a poster version.

😉

What makes this presentation (map) compelling? Could you transfer it to another body of information with the same impact?

What do you make of: “The approximate sizes of our known living language populations, compared to year 0.”

Suggested reading on what makes some graphics compelling and others not?

Originally from: Stand Still Stay Silent Comic, although I first saw it at: Old World Language Families by Randy Krum.

PS: For extra credit, how many languages can you name that don’t appear on this map?

October 27, 2014

Building a language-independent keyword-based system with the Wikipedia Miner

Filed under: Keywords,Language,Translation,Wikipedia — Patrick Durusau @ 8:00 pm

Building a language-independent keyword-based system with the Wikipedia Miner by Gauthier Lemoine.

From the post:

Extracting keywords from texts and HTML pages is a common subject that opens doors to a lot of potential applications. These include classification (what is this page topic?), recommendation systems (identifying user likes to recommend the more accurate content), search engines (what is this page about?), document clustering (how can I pack different texts into a common group) and much more.

Most applications of these are usually based on only one language, usually English. However, it would be better to be able to process documents in any language. For example, a case in a recommender system would be a user that speaks French and English. In his history, he gave positive ratings to a few pages containing the keyword “Airplane”. So, for next recommendations, we would boost this keyword. With a language-independent approach, we would also be able to boost pages containing “Avion”, the French term for airplane. If the user gave positive ratings to pages in English containing “Airplane”, and in French containing “Avion”, we would also be able to merge them easily into the same keyword to build a language-independent user profile that will be used for accurate French and English recommendations.

This article shows one way to achieve good results using an easy strategy. It is obvious that we can achieve better results using more complex algorithms.
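A minimal sketch of the merging step, assuming you already have a lookup table from (language, keyword) to a shared concept identifier; in practice that table would come from Wikipedia’s cross-language links via the Wikipedia Miner, and the concept ids below are invented for illustration.

    # Language-independent profile: map surface keywords to shared concept
    # ids, then accumulate ratings per concept. CONCEPTS is a made-up
    # stand-in for a table derived from Wikipedia's cross-language links.
    from collections import defaultdict

    CONCEPTS = {
        ("en", "airplane"): "concept/Airplane",  # hypothetical ids
        ("fr", "avion"): "concept/Airplane",
        ("en", "train"): "concept/Train",
        ("fr", "train"): "concept/Train",
    }

    def build_profile(rated_keywords):
        """rated_keywords: iterable of (lang, keyword, rating) triples."""
        profile = defaultdict(float)
        for lang, keyword, rating in rated_keywords:
            concept = CONCEPTS.get((lang, keyword.lower()))
            if concept is not None:
                profile[concept] += rating  # English and French ratings merge
        return dict(profile)

    ratings = [("en", "Airplane", 1.0), ("fr", "Avion", 1.0),
               ("fr", "Train", -0.5)]
    print(build_profile(ratings))
    # {'concept/Airplane': 2.0, 'concept/Train': -0.5}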

The NSA can hire translators, so I would not bother sharing with them this technique for harnessing the thousands of expert hours embedded in Wikipedia.

Bear in mind that Wikipedia does not reach a large number of minority languages, dialects, and certainly not deliberate obscurity in any language. Your mileage will vary depending upon your particular use case.

October 22, 2014

Grammatical theory: From transformational grammar to constraint-based approaches

Filed under: Grammar,Language — Patrick Durusau @ 4:09 pm

Grammatical theory: From transformational grammar to constraint-based approaches by Stefan Müller.

From the webpage:

To appear 2015 in Lecture Notes in Language Sciences, No 1, Berlin: Language Science Press. The book is a translation and extension of the second edition of my grammar theory book that appeared in 2010 with Stauffenburg Verlag.

This book introduces formal grammar theories that play a role in current linguistics or contributed tools that are relevant for current linguistic theorizing (Phrase Structure Grammar, Transformational Grammar/Government & Binding, Generalized Phrase Structure Grammar, Lexical Functional Grammar, Categorial Grammar, Head-Driven Phrase Structure Grammar, Construction Grammar, Tree Adjoining Grammar). The key assumptions are explained and it is shown how the respective theory treats arguments and adjuncts, the active/passive alternation, local reorderings, verb placement, and fronting of constituents over long distances. The analyses are explained with German as the object language.

In a final chapter the approaches are compared with respect to their predictions regarding language acquisition and psycholinguistic plausibility. The Nativism hypothesis that assumes that humans possess genetically determined innate language-specific knowledge is examined critically and alternative models of language acquisition are discussed. In addition this chapter addresses issues that are discussed controversially in current theory building, as for instance the question whether flat or binary branching structures are more appropriate, the question whether constructions should be treated on the phrasal or the lexical level, and the question whether abstract, non-visible entities should play a role in syntactic analyses. It is shown that the analyses that are suggested in the respective frameworks are often translatable into each other. The book closes with a section that shows how properties that are common to all languages or to certain language classes can be captured.

The webpage offers a download link for the current draft, teaching materials and a BibTeX file of all publications that the author cites in his works.

Interesting because of the application of these models to a language other than English and the author’s attempt to help readers avoid semantic confusion:

Unfortunately, linguistics is a scientific field which is afflicted by an unbelievable degree of terminological chaos. This is partly due to the fact that terminology originally defined for certain languages (e. g. Latin, English) was later simply adopted for the description of other languages as well. However, this is not always appropriate since languages differ from one another greatly and are constantly changing. Due to the problems this caused, the terminology started to be used differently or new terms were invented. When new terms are introduced in this book, I will always mention related terminology or differing uses of each term so that readers can relate this to other literature.

Unfortunately, it does not appear that the author gathered the new terms into a table or list. Creating such a list from the book would be a very useful project.

September 20, 2014

Growing a Language

Filed under: Language,Language Design,Programming — Patrick Durusau @ 7:55 pm

Growing a Language by Guy L. Steele, Jr.

The first paper in a new series of posts from the Hacker School blog, “Paper of the Week.”

I haven’t found a good way to summarize Steele’s paper but can observe that a central theme is the growth of programming languages.

While enjoying the Steele paper, ask yourself how would you capture the changing nuances of a language, natural or artificial?

Enjoy!

September 8, 2014

Python-ZPar – Python Wrapper for ZPAR

Filed under: Chinese,Language,Natural Language Processing,Parsers — Patrick Durusau @ 7:05 pm

Python-ZPar – Python Wrapper for ZPAR by Nitin Madnani.

From the webpage:

python-zpar is a python wrapper around the ZPar parser. ZPar was written by Yue Zhang while he was at Oxford University. According to its home page: ZPar is a statistical natural language parser, which performs syntactic analysis tasks including word segmentation, part-of-speech tagging and parsing. ZPar supports multiple languages and multiple grammar formalisms. ZPar has been most heavily developed for Chinese and English, while it provides generic support for other languages. ZPar is fast, processing above 50 sentences per second using the standard Penn Treebank (Wall Street Journal) data.

I wrote python-zpar since I needed a fast and efficient parser for my NLP work which is primarily done in Python and not C++. I wanted to be able to use this parser directly from Python without having to create a bunch of files and running them through subprocesses. python-zpar not only provides a simple Python wrapper but also provides an XML-RPC ZPar server to make batch-processing of large files easier.

python-zpar uses ctypes, a very cool foreign function library bundled with Python that allows calling functions in C DLLs or shared libraries directly.
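If ctypes is new to you, here is the mechanism in miniature: calling a function in the standard C library directly, with no extension module. This illustrates only the foreign-function plumbing python-zpar relies on, not the ZPar API itself; library lookup is platform-specific.

    # Minimal ctypes illustration: call a C function in a shared library
    # directly. This is the plumbing python-zpar uses to drive the ZPar
    # C++ code; the ZPar API itself is documented in the python-zpar README.
    import ctypes
    import ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))  # locate libc portably
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    print(libc.strlen(b"ZPar"))  # -> 4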

Just in case you are looking for a language parser for Chinese or English.

It is only a matter of time before commercial opportunities force greater attention on non-English languages. Forewarned is forearmed.

September 1, 2014

How Could Language Have Evolved?

Filed under: Evoluntionary,Language — Patrick Durusau @ 7:40 pm

How Could Language Have Evolved? by Johan J. Bolhuis, Ian Tattersall, Noam Chomsky, Robert C. Berwick.

Abstract:

The evolution of the faculty of language largely remains an enigma. In this essay, we ask why. Language’s evolutionary analysis is complicated because it has no equivalent in any nonhuman species. There is also no consensus regarding the essential nature of the language “phenotype.” According to the “Strong Minimalist Thesis,” the key distinguishing feature of language (and what evolutionary theory must explain) is hierarchical syntactic structure. The faculty of language is likely to have emerged quite recently in evolutionary terms, some 70,000–100,000 years ago, and does not seem to have undergone modification since then, though individual languages do of course change over time, operating within this basic framework. The recent emergence of language and its stability are both consistent with the Strong Minimalist Thesis, which has at its core a single repeatable operation that takes exactly two syntactic elements a and b and assembles them to form the set {a, b}.

Interesting that Chomsky and his co-authors have seized upon “hierarchical syntactic structure” as “the key distinguishing feature of language.”

Remember text as an Ordered Hierarchy of Content Objects (OHCO), which has made the rounds in markup circles since 1993? Its staying power is quite surprising since examples are hard to find outside of markup text encodings. Your average text prior to markup can be mapped to OHCO only with difficulty in most cases.

Syntactic structures are attributed to languages, so be mindful that any “hierarchical syntactic structure” is entirely of human origin, separate and apart from language itself.

August 27, 2014

New York Times Annotated Corpus Add-On

New York Times corpus add-on annotations: MIDs and Entity Salience. (GitHub – Data)

From the webpage:

The data included in this release accompanies the paper, entitled “A New Entity Salience Task with Millions of Training Examples” by Jesse Dunietz and Dan Gillick (EACL 2014).

The training data includes 100,834 documents from 2003-2006, with 19,261,118 annotated entities. The evaluation data includes 9,706 documents from 2007, with 187,080 annotated entities.

An empty line separates each document annotation. The first line of a document’s annotation contains the NYT document id followed by the title. Each subsequent line refers to an entity, with the following tab-separated fields:

  • entity index
  • automatically inferred salience {0,1}
  • mention count (from our coreference system)
  • first mention’s text
  • byte offset start position for the first mention
  • byte offset end position for the first mention
  • MID (from our entity resolution system)
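A small sketch of a reader for the format as just described: blank lines separate documents, the first line carries the document id and title, and each entity line is tab-separated in the order listed above. The id/title separator is assumed to be a tab; check the release’s README for the authoritative layout.

    # Reader for the described annotation format. Assumes the id and title
    # on a document's first line are tab-separated; the entity fields follow
    # the order in the list above. Check the release's README to confirm.
    def read_salience(path):
        with open(path, encoding="utf-8") as f:
            block = []
            for line in f:
                line = line.rstrip("\n")
                if not line:
                    if block:
                        yield parse_doc(block)
                        block = []
                else:
                    block.append(line)
            if block:
                yield parse_doc(block)

    def parse_doc(lines):
        doc_id, _, title = lines[0].partition("\t")
        entities = []
        for row in lines[1:]:
            index, salience, mentions, mention, start, end, mid = row.split("\t")
            entities.append({
                "index": int(index),
                "salient": salience == "1",
                "mention_count": int(mentions),
                "first_mention": mention,
                "byte_start": int(start),
                "byte_end": int(end),
                "mid": mid,
            })
        return {"doc_id": doc_id, "title": title, "entities": entities}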

The background in Teaching machines to read between the lines (and a new corpus with entity salience annotations) by Dan Gillick and Dave Orr, will be useful.

From the post:

Language understanding systems are largely trained on freely available data, such as the Penn Treebank, perhaps the most widely used linguistic resource ever created. We have previously released lots of linguistic data ourselves, to contribute to the language understanding community as well as encourage further research into these areas.

Now, we’re releasing a new dataset, based on another great resource: the New York Times Annotated Corpus, a set of 1.8 million articles spanning 20 years. 600,000 articles in the NYTimes Corpus have hand-written summaries, and more than 1.5 million of them are tagged with people, places, and organizations mentioned in the article. The Times encourages use of the metadata for all kinds of things, and has set up a forum to discuss related research.

We recently used this corpus to study a topic called “entity salience”. To understand salience, consider: how do you know what a news article or a web page is about? Reading comes pretty easily to people — we can quickly identify the places or things or people most central to a piece of text. But how might we teach a machine to perform this same task? This problem is a key step towards being able to read and understand an article.

Term ratios are a start, but we can do better. Search indexing these days is much more involved, using for example the distances between pairs of words on a page to capture their relatedness. Now, with the Knowledge Graph, we are beginning to think in terms of entities and relations rather than keywords. “Basketball” is more than a string of characters; it is a reference to something in the real world which we already know quite a bit about. (emphasis added)

Truly an important data set but I’m rather partial to that last line. 😉

So the question is if we “recognize” an entity as salient, do we annotate the entity and:

  • Present the reader with a list of links, each to a separate mention with or without ads?
  • Present the reader with what is known about the entity, with or without ads?

I see enough posts split across pages and other information that forces readers to endure more ads that I consciously avoid buying anything for which I see a web ad. Suggest you do the same. (If possible.) I buy books, for example, because someone known to me recommends them, not because some marketeer pushes them at me across many domains.

August 26, 2014

Biscriptal juxtaposition in Chinese

Filed under: Chinese,Language,Machine Learning — Patrick Durusau @ 6:36 pm

Biscriptal juxtaposition in Chinese by Victor Mair.

From the post:

We have often seen how the Roman alphabet is creeping into Chinese writing, both for expressing English words and morphemes that have been borrowed into Chinese, but also increasingly for writing Mandarin and other varieties of Chinese in Pinyin (spelling). Here are just a few earlier Language Log posts dealing with this phenomenon:

“A New Morpheme in Mandarin” (4/26/11)

“Zhao C: a Man Who Lost His Name” (2/27/09)

“Creeping Romanization in Chinese” (8/30/12)

Now an even more intricate application of alphabetic usage is developing in internet writing, namely, the juxtaposition and intertwining of simultaneous phrases with contrasting meaning.

Highly entertaining post on the complexities of evolving language usage.

The sort of usage that hasn’t made it into a dictionary, yet, but still needs to be captured and shared.

Sam Hunting brought this to my attention.

July 29, 2014

Using Category Theory to design…

Filed under: Category Theory,Language,Language Design — Patrick Durusau @ 7:33 pm

Using Category Theory to design implicit conversions and generic operators by John C. Reynolds.

Abstract:

A generalization of many-sorted algebras, called category-sorted algebras, is defined and applied to the language-design problem of avoiding anomalies in the interaction of implicit conversions and generic operators. The definition of a simple imperative language (without any binding mechanisms) is used as an example.

Most people’s greatest exposure to implicit conversions is never noticing them, because they are handled properly.

This paper dates from 1980, so some of the category theory jargon will seem odd, but consider it a “practical” application of category theory.

That should hold your interest. 😉

I first saw this in a tweet by scottfleischman.

July 16, 2014

Introducing Source Han Sans:…

Filed under: Fonts,Language — Patrick Durusau @ 2:57 pm

Introducing Source Han Sans: An open source Pan-CJK typeface by Caleb Belohlavek.

From the post:

Adobe, in partnership with Google, is pleased to announce the release of Source Han Sans, a new open source Pan-CJK typeface family that is now available on Typekit for desktop use. If you don’t have a Typekit account, it’s easy to set one up and start using the font immediately with our free subscription. And for those who want to play with the original source files, you can get those from our download page on SourceForge.

It’s rather difficult to describe your semantics when you can’t write in your own language.

Kudos to Adobe and Google for sponsoring this project!

I first saw this in a tweet by James Clark.

July 14, 2014

An Empirical Investigation into Programming Language Syntax

Filed under: Language,Language Design,Programming,Query Language — Patrick Durusau @ 4:02 pm

An Empirical Investigation into Programming Language Syntax by Greg Wilson.

A great synopsis of Andreas Stefik and Susanna Siebert’s “An Empirical Investigation into Programming Language Syntax.” ACM Transactions on Computing Education, 13(4), Nov. 2013.

A sample to interest you in the post:

  1. Programming language designers needlessly make programming languages harder to learn by not doing basic usability testing. For example, “…the three most common words for looping in computer science, for, while, and foreach, were rated as the three most unintuitive choices by non-programmers.”
  2. C-style syntax, as used in Java and Perl, is just as hard for novices to learn as a randomly-designed syntax. Again, this pain is needless, because the syntax of other languages (such as Python and Ruby) is significantly easier.

Let me repeat part of that:

C-style syntax, as used in Java and Perl, is just as hard for novices to learn as a randomly-designed syntax.

Randomly-designed syntax?

Now, think about the latest semantic syntax or semantic query syntax you have read about.

Was it designed for users? Was there any user testing at all?

Is there a lesson here for designers of semantic syntaxes and query languages?

Yes?

I first saw this in Greg Wilson’s Software Carpentry: Lessons Learned video.

July 10, 2014

Ontology-Based Interpretation of Natural Language

Filed under: Language,Ontology,RDF,SPARQL — Patrick Durusau @ 9:46 am

Ontology-Based Interpretation of Natural Language by Philipp Cimiano, Christina Unger, John McCrae.

Authors’ description:

For humans, understanding a natural language sentence or discourse is so effortless that we hardly ever think about it. For machines, however, the task of interpreting natural language, especially grasping meaning beyond the literal content, has proven extremely difficult and requires a large amount of background knowledge.

The book Ontology-based interpretation of natural language presents an approach to the interpretation of natural language with respect to specific domain knowledge captured in ontologies. It puts ontologies at the center of the interpretation process, meaning that ontologies not only provide a formalization of domain knowledge necessary for interpretation but also support and guide the construction of meaning representations.

The links under Resources for Ontologies, Lexica and Grammars, as of today return “coming soon.”

The Implementations page fares a bit better, returning information on various aspects of lemon.

lemon is a proposed meta-model for describing ontology lexica with RDF. It is declarative, thus abstracts from specific syntactic and semantic theories, and clearly separates lexicon and ontology. It follows the principle of semantics by reference, which means that the meaning of lexical entries is specified by pointing to elements in the ontology.

lemon-core
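To make “semantics by reference” concrete, here is a minimal rdflib sketch of a lexical entry whose sense points at an ontology class. The lexicon and ontology IRIs are invented for illustration; real lemon lexica carry considerably more structure.

    # lemon's "semantics by reference" in miniature: the lexical entry holds
    # the form, and its sense points at an ontology element for the meaning.
    # The example.org IRIs are hypothetical. Assumes: pip install rdflib.
    from rdflib import Graph, Literal, Namespace, RDF

    LEMON = Namespace("http://lemon-model.net/lemon#")
    LEX = Namespace("http://example.org/lexicon/")    # hypothetical
    ONT = Namespace("http://example.org/ontology/")   # hypothetical

    g = Graph()
    g.bind("lemon", LEMON)

    g.add((LEX.airplane, RDF.type, LEMON.LexicalEntry))
    g.add((LEX.airplane, LEMON.canonicalForm, LEX.airplane_form))
    g.add((LEX.airplane_form, LEMON.writtenRep, Literal("airplane", lang="en")))
    g.add((LEX.airplane, LEMON.sense, LEX.airplane_sense))
    g.add((LEX.airplane_sense, LEMON.reference, ONT.Airplane))  # meaning in ontology

    print(g.serialize(format="turtle"))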

It may just be me but the Lemon model seems more complicated than asking users what identifies their subjects and distinguishes them from other subjects.

Lemon is said to be compatible with RDF, OWL, SPARQL, etc.

But, accurate (to a user) identification of subjects and their relationships to other subjects is more important to me than compatibility with RDF, SPARQL, etc.

You?

I first saw this in a tweet by Stefano Bertolo.

July 1, 2014

The Proceedings of the Old Bailey, 1674-1913

Filed under: History,Language — Patrick Durusau @ 4:00 pm

The Proceedings of the Old Bailey, 1674-1913

From the webpage:

A fully searchable edition of the largest body of texts detailing the lives of non-elite people ever published, containing 197,745 criminal trials held at London’s central criminal court. If you are new to this site, you may find the Getting Started and Guide to Searching videos and tutorials helpful.

While writing about using The WORD on the STREET for examples of language change, I remembered that the proceedings of the Old Bailey were online.

An extremely rich site with lots of help for the average reader, but there is one section in particular I want to point out:

Gender in the Proceedings

Men’s and women’s experiences of crime, justice and punishment

Virtually every aspect of English life between 1674 and 1913 was influenced by gender, and this includes behaviour documented in the Old Bailey Proceedings. Long-held views about the particular strengths, weaknesses, and appropriate responsibilities of each sex shaped everyday lives, patterns of crime, and responses to crime. This page provides an introduction to gender roles in this period; a discussion of how they affected crime, justice, and punishment; and advice on how to analyse the Proceedings for information about gender.

Gender relations are but one example of the semantic distance that exists between us and our ancestors. We cannot ever eliminate that distance, any more than we can talk about the moon without remembering we have walked upon it.

But, we can do our best to honor that semantic distance by being aware that their world is not ours. Closely attending to language is a first step in that direction.

Enjoy!

May 23, 2014

Early Canadiana Online

Filed under: Data,Language,Library — Patrick Durusau @ 6:50 pm

Early Canadiana Online

From the webpage:

These collections contain over 80,000 rare books, magazines and government publications from the 1600s to the 1940s.

This rare collection of documentary heritage will be of interest to scholars, genealogists, history buffs and anyone who enjoys reading about Canada’s early days.

The Early Canadiana Online collection of rare books, magazines and government publications has over 80,000 titles (3,500,000 pages) and is growing. The collection includes material published from the time of the first European settlers to the first four decades of the 20th Century.

You will find books written in 21 languages including French, English, 10 First Nations languages and several European languages, Latin and Greek.

Every online collection such as this one increases the volume of information that is accessible and also increases the difficulty of finding related information for any given subject. But the latter is such a nice problem to have!

I first saw this in a tweet from Lincoln Mullen.

May 15, 2014

Speak and learn with Spell Up, our latest Chrome Experiment

Filed under: Education,Language — Patrick Durusau @ 7:15 pm

Speak and learn with Spell Up, our latest Chrome Experiment by Xavier Barrade.

From the post:

As a student growing up in France, I was always looking for ways to improve my English, often with a heavy French-to-English dictionary in tow. Since then, technology has opened up a world of new educational opportunities, from simple searches to Google Translate (and our backpacks have gotten a lot lighter). But it can be hard to find time and the means to practice a new language. So when the Web Speech API made it possible to speak to our phones, tablets and computers, I got curious about whether this technology could help people learn a language more easily.

That’s the idea behind Spell Up, a new word game and Chrome Experiment that helps you improve your English using your voice—and a modern browser, of course. It’s like a virtual spelling bee, with a twist.

This rocks!

If Google is going to open source another project and support it, Spell Up should be it.

The machine pronunciation could use some work, or at least it seems that way to me. (My hearing may be a factor there.)

Think of the impact of Spell Up for less commonly taught languages.

May 13, 2014

Online Language Taggers

Filed under: Language,Linguistics,Tagging — Patrick Durusau @ 4:21 pm

UCREL Semantic Analysis System (USAS)

From the homepage:

The UCREL semantic analysis system is a framework for undertaking the automatic semantic analysis of text. The framework has been designed and used across a number of research projects and this page collects together various pointers to those projects and publications produced since 1990.

The semantic tagset used by USAS was originally loosely based on Tom McArthur’s Longman Lexicon of Contemporary English (McArthur, 1981). It has a multi-tier structure with 21 major discourse fields (shown here on the right), subdivided, and with the possibility of further fine-grained subdivision in certain cases. We have written an introduction to the USAS category system (PDF file) with examples of prototypical words and multi-word units in each semantic field.

There are four online taggers available:

English: 100,000 word limit

Italian: 2,000 word limit

Dutch: 2,000 word limit

Chinese: 3,000 character limit

Enjoy!

I first saw this in a tweet by Paul Rayson.

Non-English/Spanish Language by State

Filed under: Government,Language — Patrick Durusau @ 3:57 pm

I need your help. I saw this on a Twitter feed from Slate.

[Image: map of the most common language other than English or Spanish spoken in each U.S. state]

I don’t have confirmation that any member of Georgia (United States) government reads Slate, but putting this type of information where it might be seen by Georgia government staffers strikes me as irresponsible news reporting.

Publishing all of Snowden’s documents as an unedited dump would have less of an impact than members of the Georgia legislature finding out there is yet another race to worry about in Georgia.

The legislature hardly knows which way to turn now, knowing about African-Americans and Latinos. Adding another group to that list will only make matters worse.

Question: How to suppress information about the increasing diversity of the population of Georgia?

Not for long, just until it becomes diverse enough to replace all the sitting members of the Georgia legislature in one fell swoop. 😉

The more diverse Georgia becomes, the more vibrant its rebirth will be following its current period of stagnation trying to hold onto the “good old days.”

May 4, 2014

What I Said Is Not What You Heard

Filed under: Communication,Language — Patrick Durusau @ 1:22 pm

Another example of where semantic impedance can impair communication, not to mention public policy decisions:

[Image: table of scientific terms and the different meanings the public hears, from Somerville and Hassol]

Those are just a few terms that are used in public statements from scientists.

I can hardly imagine the disconnect between lawyers and the public. Or economists and the public.

To say nothing of computer science in general and the public.

I’m not sold on the suggested fix for “bias” (heard by the public as “distortion, political motive”) being “offset from an observation.”

Literally true but I’m not sure omitting the reason for the offset is all that helpful.

Something more along the lines of: “test A misses the true value B by C, so we (subtract/add) C to A to get a more correct value.”

A lot more words but clearer.

The image is from: Communicating the science of climate change by Richard C. J. Somerville and Susan Joy Hassol. A very good article on the perils of trying to communicate with the general public about climate change.

But it isn’t just the general public that has difficulty understanding scientists. Scientists have difficulty understanding other scientists, particularly if the scientists in question are from different domains or even different fields within a domain.

All of which has to make you wonder: If human beings, including scientists fail to understand each other on a regular basis, who is watching for misunderstandings between computers?

I first saw this in a tweet by Austin Frakt.

PS: Pointers to research on words that fail to communicate greatly appreciated.

April 30, 2014

Language is a Map

Filed under: Language,Maps,Topic Maps — Patrick Durusau @ 7:46 pm

Language is a Map by Tim O’Reilly.

From the post:

I’ve twice given an Ignite talk entitled Language is a Map, but I’ve never written up the fundamental concepts underlying that talk. Here I do that.

When I first moved to Sebastopol, before I raised horses, I’d look out at a meadow and I’d see grass. But over time, I learned to distinguish between oats, rye, orchard grass, and alfalfa. Having a language to make distinctions between different types of grass helped me to see what I was looking at.

I first learned this notion, that language is a map that reflects reality, and helps us to see it more deeply – or if wrong, blinds us to it – from George Simon, whom I first met in 1969. Later, George went on to teach workshops at the Esalen Institute, which was to the human potential movement of the 1970s as the Googleplex or Apple’s Infinite Loop is to the Silicon Valley of today. I taught at Esalen with George when I was barely out of high school, and his ideas have deeply influenced my thinking ever since.

If you accept Tim’s premise that “language is a map,” the next question that comes to mind is how faithfully can an information system represent your map?

Your map, not the map of an IT developer or a software vendor but your map?

Does your information system capture the shades and nuances of your map?

Enjoy!

April 16, 2014

…Generalized Language Models…

Filed under: Language,Linguistics,Modeling — Patrick Durusau @ 1:19 pm

How Generalized Language Models outperform Modified Kneser Ney Smoothing by a Perplexity drop of up to 25% by René Pickhardt.

René reports on the core of his dissertation work.

From the post:

When you want to assign a probability to a sequence of words you will run into the problem that longer sequences are very rare. People fight this problem by using smoothing techniques and interpolating longer order models (models with longer word sequences) with lower order language models. While this idea is strong and helpful it is usually applied in the same way. In order to use a shorter model the first word of the sequence is omitted. This will be iterated. The problem occurs if one of the last words of the sequence is the really rare word. In this way omitting words in the front will not help.

So the simple trick of Generalized Language models is to smooth a sequence of n words with n-1 shorter models which skip a word at position 1 to n-1 respectively.

Then we combine everything with Modified Kneser Ney Smoothing just like it was done with the previous smoothing methods.
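The core trick is easy to state in code. Here is a sketch that, for an n-gram, produces the n-1 generalized lower-order patterns by wildcarding each of the positions 1 to n-1 in turn (the final, predicted word is kept); interpolating these with Modified Kneser-Ney is the part done in René’s released code.

    # Generalized skip patterns for an n-gram: wildcard one position among
    # 1..n-1, keeping the final (predicted) word. Classical backoff would
    # instead drop words from the front only.
    def generalized_patterns(ngram, wildcard="*"):
        patterns = []
        for skip in range(len(ngram) - 1):
            pattern = list(ngram)
            pattern[skip] = wildcard
            patterns.append(tuple(pattern))
        return patterns

    for p in generalized_patterns(("the", "very", "rare", "word")):
        print(" ".join(p))
    # * very rare word
    # the * rare word
    # the very * word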

Unlike some white papers, webinars and demos, you don’t have to register, list your email and phone number, etc. to see both the test data and code that implements René’s ideas.

Data, Source.

Please send René useful feedback as a way to say thank you for sharing both data and code.

March 5, 2014

