Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

February 22, 2018

Comparing Comprehensive English Grammars?

Filed under: Grammar,Language — Patrick Durusau @ 8:17 pm

Neal Goldfarb in SCOTUS cites CGEL (Props to Justice Gorsuch and the Supreme Court library) highlights two comprehensive grammars for English.

Both are known by the initials CGEL:

Being the more recent work, Cambridge Grammar of the English Language lists today for $279.30 (1860 pages), whereas Quirk’s 1985 Comprehensive Grammar of the English Language can be had for $166.08 (1779 pages).

An interesting fact: the acronym CGEL had been in use for 17 years for the Comprehensive Grammar of the English Language before the Cambridge Grammar of the English Language was published under the same initials.

Curious how much new information the Cambridge grammar added? If you had machine-readable texts of both, excluded the examples, and then calculated the semantic distance between sections covering the same material, you could produce a measurement of the distance between the two texts.

Given the prices of academic texts, standardizing a method of comparison would be a boon to scholars and graduate students!
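Here is a minimal sketch of one way to run that comparison, using TF-IDF cosine distance as a crude stand-in for semantic distance. The file names are hypothetical placeholders for machine-readable, example-free versions of matching sections; a sentence-embedding model would give a better distance estimate, but the scaffolding would be the same.

```python
# A minimal sketch of section-by-section comparison of the two CGELs.
# Assumes you already have machine-readable, example-free text for
# matching sections; the file names below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def section_distance(quirk_text: str, cambridge_text: str) -> float:
    """Return 1 - cosine similarity of TF-IDF vectors for two sections."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(
        [quirk_text, cambridge_text]
    )
    return 1.0 - cosine_similarity(vectors[0], vectors[1])[0, 0]

# Hypothetical usage: compare the sections on relative clauses.
quirk = open("quirk_relative_clauses.txt").read()
cambridge = open("cambridge_relative_clauses.txt").read()
print(f"distance: {section_distance(quirk, cambridge):.3f}")
```

Averaging that score over all aligned sections would give the standardized comparison suggested above.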

(No comment on Cambridge’s overwriting of the acronym for Quirk’s work.)

June 4, 2015

Open Review: Grammatical theory:…

Filed under: Grammar,Linguistics,Open Access,Peer Review — Patrick Durusau @ 2:22 pm

Open Review: Grammatical theory: From transformational grammar to constraint-based approaches by Stefan Müller (Author).

From the webpage:

This book is currently at the Open Review stage. You can help the author by making comments on the preliminary version: Part 1, Part 2. Read our user guide to get acquainted with the software.

This book introduces formal grammar theories that play a role in current linguistics or contributed tools that are relevant for current linguistic theorizing (Phrase Structure Grammar, Transformational Grammar/Government & Binding, Minimalism, Generalized Phrase Structure Grammar, Lexical Functional Grammar, Categorial Grammar, Head-Driven Phrase Structure Grammar, Construction Grammar, Tree Adjoining Grammar, Dependency Grammar). The key assumptions are explained and it is shown how each theory treats arguments and adjuncts, the active/passive alternation, local reorderings, verb placement, and fronting of constituents over long distances. The analyses are explained with German as the object language.

In a final part of the book the approaches are compared with respect to their predictions regarding language acquisition and psycholinguistic plausibility. The nativism hypothesis that claims that humans possess genetically determined innate language-specific knowledge is examined critically and alternative models of language acquisition are discussed. In addition, this more general part addresses issues that are discussed controversially in current theory building such as the question whether flat or binary branching structures are more appropriate, the question whether constructions should be treated on the phrasal or the lexical level, and the question whether abstract, non-visible entities should play a role in syntactic analyses. It is shown that the analyses that are suggested in the various frameworks are often translatable into each other. The book closes with a section that shows how properties that are common to all languages or to certain language classes can be captured.

(emphasis in the original)

Part of walking the walk of open access means participating in open reviews as your time and expertise permits.

Even if grammar theory isn’t your professional field, it will be good mental exercise to see another view of the world of language.

I am intrigued by the claim that “the analyses that are suggested in the various frameworks are often translatable into each other.” Shades of the application of category theory to linguistics? Mappings of identifications?

December 11, 2014

Semantic Parsing with Combinatory Categorial Grammars (Videos)

Filed under: Grammar,Linguistics,Parsing,Semantics — Patrick Durusau @ 10:45 am

Semantic Parsing with Combinatory Categorial Grammars by Yoav Artzi, Nicholas FitzGerald and Luke Zettlemoyer. (Tutorial)

Abstract:

Semantic parsers map natural language sentences to formal representations of their underlying meaning. Building accurate semantic parsers without prohibitive engineering costs is a long-standing, open research problem.

The tutorial will describe general principles for building semantic parsers. The presentation will be divided into two main parts: modeling and learning. The modeling section will include best practices for grammar design and choice of semantic representation. The discussion will be guided by examples from several domains. To illustrate the choices to be made and show how they can be approached within a real-life representation language, we will use λ-calculus meaning representations. In the learning part, we will describe a unified approach for learning Combinatory Categorial Grammar (CCG) semantic parsers that induces both a CCG lexicon and the parameters of a parsing model. The approach learns from data with labeled meaning representations, as well as from more easily gathered weak supervision. It also enables grounded learning where the semantic parser is used in an interactive environment, for example to read and execute instructions.

The ideas we will discuss are widely applicable. The semantic modeling approach, while implemented in λ-calculus, could be applied to many other formal languages. Similarly, the algorithms for inducing CCGs focus on tasks that are formalism independent, learning the meaning of words and estimating parsing parameters. No prior knowledge of CCGs is required. The tutorial will be backed by implementation and experiments in the University of Washington Semantic Parsing Framework (UW SPF).

I previously linked to the complete slide set for this tutorial.

This page offers short videos (twelve at present) and links into the slide set. More videos are forthcoming.

The goal of the project is to “recover complete meaning representation,” where “complete meaning is sufficient to complete the task” (from video 1).

That definition of “complete meaning” dodges a lot of philosophical as well as practical issues with semantic parsing.
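To make the λ-calculus meaning representations concrete, here is a toy sketch of the core idea, using Python closures in place of a real λ-calculus engine. The lexicon and the fixed subject-verb-object pattern are my own illustration, not the UW SPF API; the tutorial’s approach learns the lexicon and parsing model from data rather than hand-coding them.

```python
# Toy illustration: words carry lambda-calculus style meanings, and a
# parse composes them by function application (hand-coded, not UW SPF).
lexicon = {
    "texas": "texas",                                  # NP: a constant
    "borders": lambda y: lambda x: ("borders", x, y),  # (S\NP)/NP
    "oklahoma": "oklahoma",                            # NP: a constant
}

def parse(sentence: str):
    """Derive a meaning for a fixed subject-verb-object pattern:
    forward application (verb + object), then backward (subject)."""
    subj, verb, obj = sentence.lower().split()
    vp = lexicon[verb](lexicon[obj])   # forward application: (S\NP)/NP + NP
    return vp(lexicon[subj])           # backward application: NP + S\NP

print(parse("Texas borders Oklahoma"))
# -> ('borders', 'texas', 'oklahoma')
```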

Take the time to watch the videos; Yoav is a good presenter.

Enjoy!

November 21, 2014

Beyond You’re vs. Your: A Grammar Cheat Sheet Even The Pros Can Use

Filed under: Grammar,Writing — Patrick Durusau @ 2:41 pm

Beyond You’re vs. Your: A Grammar Cheat Sheet Even The Pros Can Use by Hayley Mullen.

From the post:

Grammar is one of those funny things that sparks a wide range of reactions from different people. While one person couldn’t care less about colons vs. semicolons, another person will have a visceral reaction to a misplaced apostrophe or a “there” where a “their” is needed (if you fall into the latter category, hello and welcome).

I think we can still all agree on one thing: poor grammar and spelling takes away from your message and credibility. In the worst case, a blog post rife with errors will cause you to think twice about how knowledgeable the person who wrote it really is. In lesser cases, a “then” where a “than” should be is just distracting and reflects poorly on your editing skills. Which is a bummer.

Beyond the ills Hayley mentions, poor writing is hard to understand. Using standards or creating topic maps is hard enough without having to decipher poor writing as well.

If you already write well, a refresher never hurts. If you don’t write so well, take Hayley’s post to heart and learn from it.

There are errors in standards that tend to occur over and over again. Perhaps I should write a cheat sheet for common standards-writing errors. Possible entries: Avoiding Definite Article Abuse, Saying It Once Is Enough, etc.

October 22, 2014

Grammatical theory: From transformational grammar to constraint-based approaches

Filed under: Grammar,Language — Patrick Durusau @ 4:09 pm

Grammatical theory: From transformational grammar to constraint-based approaches by Stefan Müller.

From the webpage:

To appear 2015 in Lecture Notes in Language Sciences, No 1, Berlin: Language Science Press. The book is a translation and extension of the second edition of my grammar theory book that appeared in 2010 with Stauffenburg Verlag.

This book introduces formal grammar theories that play a role in current linguistics or contributed tools that are relevant for current linguistic theorizing (Phrase Structure Grammar, Transformational Grammar/Government & Binding, Generalized Phrase Structure Grammar, Lexical Functional Grammar, Categorial Grammar, Head-Driven Phrase Structure Grammar, Construction Grammar, Tree Adjoining Grammar). The key assumptions are explained and it is shown how the respective theory treats arguments and adjuncts, the active/passive alternation, local reorderings, verb placement, and fronting of constituents over long distances. The analyses are explained with German as the object language.

In a final chapter the approaches are compared with respect to their predictions regarding language acquisition and psycholinguistic plausibility. The Nativism hypothesis that assumes that humans possess genetically determined innate language-specific knowledge is examined critically and alternative models of language acquisition are discussed. In addition this chapter addresses issues that are discussed controversially in current theory building, as for instance the question whether flat or binary branching structures are more appropriate, the question whether constructions should be treated on the phrasal or the lexical level, and the question whether abstract, non-visible entities should play a role in syntactic analyses. It is shown that the analyses that are suggested in the respective frameworks are often translatable into each other. The book closes with a section that shows how properties that are common to all languages or to certain language classes can be captured.

The webpage offers a download link for the current draft, teaching materials and a BibTeX file of all publications that the author cites in his works.

Interesting because of the application of these models to a language other than English and the author’s attempt to help readers avoid semantic confusion:

Unfortunately, linguistics is a scientific field which is afflicted by an unbelievable degree of terminological chaos. This is partly due to the fact that terminology originally defined for certain languages (e.g. Latin, English) was later simply adopted for the description of other languages as well. However, this is not always appropriate since languages differ from one another greatly and are constantly changing. Due to the problems this caused, the terminology started to be used differently or new terms were invented. When new terms are introduced in this book, I will always mention related terminology or differing uses of each term so that readers can relate this to other literature.

Unfortunately, it does not appear that the author gathered the new terms into a table or list. Creating such a list from the book would be a very useful project.
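As a sketch of what such a list might look like as data, here is a minimal glossary structure. The entries are illustrative guesses of mine, not drawn from the book:

```python
# A minimal sketch of a cross-framework terminology list (hypothetical
# entries for illustration only; a real list would be built from the book).
glossary = {
    "adjunct": {
        "related_terms": ["modifier"],
        "note": "optional dependent, contrasted with arguments/complements",
    },
    "complement": {
        "related_terms": ["argument"],
        "note": "dependent selected by the head",
    },
}

def related(term: str) -> list[str]:
    entry = glossary.get(term.lower())
    return entry["related_terms"] if entry else []

print(related("Adjunct"))  # ['modifier']
```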

November 3, 2013

A multi-Teraflop Constituency Parser using GPUs

Filed under: GPU,Grammar,Language,Parsers,Parsing — Patrick Durusau @ 4:45 pm

A multi-Teraflop Constituency Parser using GPUs by John Canny, David Hall and Dan Klein.

Abstract:

Constituency parsing with rich grammars remains a computational challenge. Graphics Processing Units (GPUs) have previously been used to accelerate CKY chart evaluation, but gains over CPU parsers were modest. In this paper, we describe a collection of new techniques that enable chart evaluation at close to the GPU’s practical maximum speed (a Teraflop), or around a half-trillion rule evaluations per second. Net parser performance on a 4-GPU system is over 1 thousand length-30 sentences/second (1 trillion rules/sec), and 400 general sentences/second for the Berkeley Parser Grammar. The techniques we introduce include grammar compilation, recursive symbol blocking, and cache-sharing.

Just in case you are interested in parsing “unstructured” data, otherwise known as “texts.”
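If CKY chart evaluation is new to you, here is a minimal CPU recognizer sketch over a toy grammar in Chomsky normal form. It builds the same chart the paper evaluates on GPUs, minus the rule scores, grammar compilation, symbol blocking, and cache-sharing:

```python
# Minimal CKY recognizer over a toy CNF grammar (a sketch of the chart
# computation the paper accelerates, without scores or GPU tricks).
from collections import defaultdict

def cky_recognize(words, lexical, binary, start="S"):
    """lexical: word -> set of symbols; binary: (B, C) -> set of parents A."""
    n = len(words)
    chart = defaultdict(set)  # (i, j) -> symbols spanning words[i:j]
    for i, w in enumerate(words):
        chart[(i, i + 1)] = set(lexical.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point between the two children
                for B in chart[(i, k)]:
                    for C in chart[(k, j)]:
                        chart[(i, j)] |= binary.get((B, C), set())
    return start in chart[(0, n)]

# Toy grammar: S -> NP VP, VP -> V NP
lexical = {"she": {"NP"}, "saw": {"V"}, "stars": {"NP"}}
binary = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}}
print(cky_recognize(["she", "saw", "stars"], lexical, binary))  # True
```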

I first saw the link: BIDParse: GPU-accelerated natural language parser at hgup.org. Then I started looking for the paper. 😉

July 12, 2012

grammar why ! matters

Filed under: Grammar,Language — Patrick Durusau @ 6:41 pm

grammar why ! matters

Bob Carpenter has a good rant on grammar.

The test I would urge everyone to use before buying software or even software services is to ask to see their documentation.

Give it to one of your technical experts and ask them to turn to any page and start reading.

If at any point your expert asks what was meant, thank the vendor for their time and show them the door.

It will save you time and expense in the long run to use only software with good documentation. (It would be nice to have software that doesn’t crash often too, but I would not ask for the impossible.)
