## Archive for the ‘Rhetoric’ Category

### Why “Russian Troll” is NOT a Useful Category/Class

Thursday, November 30th, 2017

From the post:

Bottom line: when a stranger on the internet accuses you of being a Kremlin agent, of being a “useful idiot”, of “regurgitating Kremlin talking points”, this is simply their way of informing you that they have no argument for the actual thing that you are saying. If you’re using hard facts to point out the gaping plot holes in the Russiagate narrative, for example, and all they can do is call your argument Russian propaganda, this means that they have no counter-argument for the hard facts that you are presenting. They are deliberately shutting down the possibility of any dialogue with you because the cognitive dissonance you are causing them is making them uncomfortable.

Yes, paid shills for governments all over the world do indeed exist. But the odds are much greater that the stranger you are interacting with online is simply a normal person who isn’t convinced by the arguments that have been presented by the position you espouse. If your position is defensible you should be able to argue for it normally, regardless of whom you are speaking to.
… (emphasis in original)

Johnstone’s postulate, that a “Russian Troll” accusation signals no meaningful argument, is a compelling one.

However, as the examples in Johnstone’s post also demonstrate, there is no common set of attributes that trigger its use.

“Russian Troll” is a brimful container of arbitrary whims, caprices and prejudices, which vary from user to user.

Arbitrary usage makes it unsuitable for use as a category or class, since every use is one-off and unique.

I would not treat “Russian Troll” as a topic subject to merging but only as a string. Hopefully the 434K instances of it as a string (today, with quotes) will put users on notice of its lack of meaningful usage.
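As a hypothetical sketch of the string-versus-topic distinction (the mini “topic map” structure and names below are invented for illustration, not a real topic maps API), the point is that merging is driven by subject identity, never by an arbitrary epithet stored as a string:

```python
# Sketch: "Russian Troll" kept as an opaque string occurrence,
# not as a topic eligible for merging. Illustrative only.

class Topic:
    def __init__(self, subject_identifier):
        self.subject_identifier = subject_identifier
        self.occurrences = []   # (type, value) pairs; never merged on

def merge_key(topic):
    # Merging keys off subject identifiers only; occurrence strings
    # such as "Russian Troll" play no role in identity.
    return topic.subject_identifier

posts = [Topic("http://example.org/post/1"),
         Topic("http://example.org/post/2")]
for p in posts:
    p.occurrences.append(("epithet", "Russian Troll"))

# Both posts carry the identical string, but their merge keys differ,
# so no merging occurs:
assert posts[0].occurrences == posts[1].occurrences
assert merge_key(posts[0]) != merge_key(posts[1])
```

The design choice mirrors the post’s conclusion: a term with no stable set of triggering attributes can be searched for as a string, but it cannot safely drive merging.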

### How to Read a Book:…

Saturday, October 31st, 2015

I should have thought about this book when I posted How to Read a Paper. I haven’t seen a copy in years but that’s a flimsy excuse for forgetting about it. I was reminded of it today when I saw it in a tweet by Michael Nielsen.

Amazon has this description:

With half a million copies in print, How to Read a Book is the best and most successful guide to reading comprehension for the general reader, completely rewritten and updated with new material.

Originally published in 1940, this book is a rare phenomenon, a living classic that introduces and elucidates the various levels of reading and how to achieve them—from elementary reading, through systematic skimming and inspectional reading, to speed reading. Readers will learn when and how to “judge a book by its cover,” and also how to X-ray it, read critically, and extract the author’s message from the text.

Also included is instruction in the different techniques that work best for reading particular genres, such as practical books, imaginative literature, plays, poetry, history, science and mathematics, philosophy and social science works.

Finally, the authors offer a recommended reading list and supply reading tests you can use to measure your own progress in reading skills, comprehension, and speed.

Is How to Read a Book as relevant today as it was in 1940?

In chapter 1, Adler makes a critical distinction between facts and understanding and laments the packaging of opinions:

Perhaps we know more about the world than we used to, and insofar as knowledge is prerequisite to understanding, that is all to the good. But knowledge is not as much a prerequisite to understanding as is commonly supposed. We do not have to know everything about something in order to understand it; too many facts are often as much of an obstacle to understanding as too few. There is a sense in which we moderns are inundated with facts to the detriment of understanding.

One of the reasons for this situation is that the very media we have mentioned are so designed as to make thinking seem unnecessary (though this is only an appearance). The packaging of intellectual positions and views is one of the most active enterprises of some of the best minds of our day. The viewer of television, the listener to radio, the reader of magazines, is presented with a whole complex of elements—all the way from ingenious rhetoric to carefully selected data and statistics—to make it easy for him to “make up his own mind” with the minimum of difficulty and effort. But the packaging is often done so effectively that the viewer, listener, or reader does not make up his own mind at all. Instead, he inserts a packaged opinion into his mind, somewhat like inserting a cassette into a cassette player. He then pushes a button and “plays back” the opinion whenever it seems appropriate to do so. He has performed acceptably without having had to think.

I can only imagine Adler’s characterization of Fox News, CNN, Facebook and other forums that inundate us with nothing but pre-packaged opinions and repetition of the same.

Although not in modern gender-neutral language:

…he inserts a packaged opinion into his mind, somewhat like inserting a cassette into a cassette player. He then pushes a button and “plays back” the opinion whenever it seems appropriate to do so. He has performed acceptably without having had to think.

In a modern context, such viewers, listeners, or readers not only “play back” opinions but are quick to denounce anyone who questions their pre-recorded narrative as a “troll.” Fearing discussion of other narratives, alternative experiences or explanations is a sure sign of a pre-recorded opinion. Discussion interferes with the propagation of pre-recorded opinions.

How to Mark a Book has delightful advice from Adler on marking books. It captures the essence of Adler’s love of books and reading.

### The Debunking Handbook

Sunday, November 23rd, 2014

The Debunking Handbook by John Cook, Stephan Lewandowsky.

From the post:

The Debunking Handbook, a guide to debunking misinformation, is now freely available to download. Although there is a great deal of psychological research on misinformation, there’s no summary of the literature that offers practical guidelines on the most effective ways of reducing the influence of myths. The Debunking Handbook boils the research down into a short, simple summary, intended as a guide for communicators in all areas (not just climate) who encounter misinformation.

The Handbook explores the surprising fact that debunking myths can sometimes reinforce the myth in peoples’ minds. Communicators need to be aware of the various backfire effects and how to avoid them.

It also looks at a key element to successful debunking: providing an alternative explanation. The Handbook is designed to be useful to all communicators who have to deal with misinformation (eg – not just climate myths).

I think you will find this a delightful read! From the first section, titled: Debunking the first myth about debunking,

It’s self-evident that democratic societies should base their decisions on accurate information. On many issues, however, misinformation can become entrenched in parts of the community, particularly when vested interests are involved.1,2 Reducing the influence of misinformation is a difficult and complex challenge.

A common misconception about myths is the notion that removing its influence is as simple as packing more information into people’s heads. This approach assumes that public misperceptions are due to a lack of knowledge and that the solution is more information – in science communication, it’s known as the “information deficit model”. But that model is wrong: people don’t process information as simply as a hard drive downloading data.

Refuting misinformation involves dealing with complex cognitive processes. To successfully impart knowledge, communicators need to understand how people process information, how they modify their existing knowledge and how worldviews affect their ability to think rationally. It’s not just what people think that matters, but how they think.

I would have accepted the first sentence had it read: It’s self-evident that democratic societies don’t base their decisions on accurate information.

😉

I don’t know of any historical examples of democracies making decisions on accurate information.

For example, there are any number of “rational” and well-meaning people who have signed off on the “war on terrorism” as though the United States is in any danger.

Deaths from terrorism in the United States since 2001 – fourteen (14).

Deaths by entanglement in bed sheets between 2001-2009 – five thousand five hundred and sixty-one (5561).

Despite being a great read, Debunking has a problem: it presumes you are dealing with a “rational” person. Rational as defined by whom? Hard to say. The word is only mentioned once and I suspect “rational” means that you agree with debunking the climate “myth.” I do as well, but that’s happenstance and not because I am “rational” in some undefined way.

Realize that “rational” is a favorable label people apply to themselves and little more than that. It rather conveniently makes anyone who disagrees with you “irrational.”

I prefer to use “persuasion” on topics like global warming. You can use “facts” for people who are amenable to that approach, but also religion (stewards of the environment), greed (exploitation of the Third World for carbon credits), financial interest in government-funded programs, or whatever works to persuade enough people to support your climate change program. Be aware that other people with other agendas will be playing the same game. The question is whether you want to be “rational” or do you want to win?

Personally I am convinced of climate change and our role in causing it. I am also aware of the difficulty of sustaining action by people with an average attention span of fifteen (15) seconds over the period of the fifty (50) years it will take for the environment to stabilize if all human inputs stopped tomorrow. It’s going to take far more than “facts” to obtain a better result.

### #shirtgate, #shirtstorm, and the rhetoric of science

Tuesday, November 18th, 2014

Unless you have been in a coma or just arrived from off-world, you have probably heard about #shirtgate/#shirtstorm. If not, take a minute to search on those hash tags to come up to speed.

During the ensuing flood of posts, tweets, etc., I happened to stumble upon To the science guys who want to understand #shirtstorm by Janet D. Stemwedel.

It is impressive because despite the inability of men and women to fully appreciate the rhetoric of the other gender, Stemwedel finds a third rhetoric, that of science, in which to conduct her argument.

Not that the rhetoric of science is a perfect fit for either gender but it is a rhetoric in which both genders share some assumptions and methods of reasoning. Those partially shared assumptions and methods make Stemwedel’s argument effective.

Take her comments on data gathering (formatted on her blog as tweets):

So, first big point: women’s accounts of their own experiences are better data than your preexisting hunches about their experiences.

Another thing you science guys know: sometimes we observe unexpected outcomes. We don’t say, That SHOULDN’T happen! but, WHY did it happen?

Imagine, for sake of arg, that women’s rxn to @mggtTaylor’s porny shirt was a TOTAL surprise. Do you claim that rxn shouldn’t hv happened?

Or, do you think like a scientist & try to understand WHY it happened? Do you stay stuck in your hunches or get some relevant data?

Do you recognize that women’s experiences in & with science (plus larger society) may make effect of porny shirt on #Rosetta publicity…

…on those women different than effect of porny shirt was on y’all science guys? Or that women KNOW how they feel about it better than you?

Science guys telling women “You shouldn’t be mad about porny shirt on #Rosetta video because…” is modeling bad scientific method!

Finding a common rhetoric is at the core of creating sustainable mappings between differing semantics. Stemwedel illustrates the potential for such a rhetoric even in a highly charged situation.

PS: You need to read Stemwedel’s post in the original.

### Exploiting Discourse Analysis…

Wednesday, October 16th, 2013

Exploiting Discourse Analysis for Article-Wide Temporal Classification by Jun-Ping Ng, Min-Yen Kan, Ziheng Lin, Wei Feng, Bin Chen, Jian Su, Chew-Lim Tan.

Abstract:

In this paper we classify the temporal relations between pairs of events on an article-wide basis. This is in contrast to much of the existing literature which focuses on just event pairs which are found within the same or adjacent sentences. To achieve this, we leverage on discourse analysis as we believe that it provides more useful semantic information than typical lexico-syntactic features. We propose the use of several discourse analysis frameworks, including 1) Rhetorical Structure Theory (RST), 2) PDTB-styled discourse relations, and 3) topical text segmentation. We explain how features derived from these frameworks can be effectively used with support vector machines (SVM) paired with convolution kernels. Experiments show that our proposal is effective in improving on the state-of-the-art significantly by as much as 16% in terms of F1, even if we only adopt less-than-perfect automatic discourse analyzers and parsers. Making use of more accurate discourse analysis can further boost gains to 35%.

Cutting edge of discourse analysis, which should be interesting if you are automatically populating topic maps based upon textual analysis.
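A hedged sketch of the classification setup the abstract describes: an SVM over discourse-derived features of event pairs. The toy feature encoding below is invented for illustration; the paper itself uses convolution kernels over RST/PDTB structures, which scikit-learn does not provide out of the box.

```python
# Toy illustration: SVM classification of temporal relations between
# event pairs from discourse features. Features and labels are made up.

import numpy as np
from sklearn.svm import SVC

# Invented features per event pair:
# (same RST nucleus?, # intervening discourse relations, same topical segment?)
X = np.array([
    [1, 0, 1],
    [0, 2, 0],
    [1, 1, 1],
    [0, 3, 0],
])
y = np.array([0, 1, 0, 1])   # temporal relation labels, e.g. BEFORE vs AFTER

clf = SVC(kernel="linear").fit(X, y)
assert clf.score(X, y) == 1.0   # toy data is linearly separable
```

For the actual convolution kernels over parse or discourse trees, `SVC` also accepts a callable or precomputed kernel matrix, which is how a tree kernel would be plugged in.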

It won’t be perfect, but even human editors are not perfect. (Or so rumor has it.)

A robust topic map system should accept and track user-submitted corrections and changes and, if approved, apply them.
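The accept/track/approve/apply workflow might be sketched like this (a minimal illustration; all class and field names are invented, not any real topic map engine):

```python
# Minimal sketch of a correction workflow: submissions are logged,
# and only approved corrections are applied to the topic store.

from dataclasses import dataclass, field

@dataclass
class Correction:
    topic_id: str
    field_name: str
    new_value: str
    status: str = "pending"     # pending -> approved -> applied

@dataclass
class CorrectionQueue:
    topics: dict                     # topic_id -> {field: value}
    log: list = field(default_factory=list)

    def submit(self, c: Correction):
        self.log.append(c)           # track every submission

    def approve_and_apply(self, c: Correction):
        c.status = "approved"
        self.topics[c.topic_id][c.field_name] = c.new_value
        c.status = "applied"

topics = {"t1": {"label": "Discorse Analysis"}}   # user spots the typo
q = CorrectionQueue(topics)
fix = Correction("t1", "label", "Discourse Analysis")
q.submit(fix)
q.approve_and_apply(fix)
assert topics["t1"]["label"] == "Discourse Analysis"
assert fix.status == "applied"
```

Keeping rejected and pending corrections in the log, rather than discarding them, is what makes the system auditable.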

### Rhetological Fallacies

Tuesday, April 3rd, 2012

Rhetological Fallacies: Errors and manipulations of rhetorical and logical thinking.

Useful and deeply amusing chart of errors in rhetoric and logic by David McCandless.

Each error has a symbol along with a brief explanation.

These symbols really should appear in Unicode. 😉

Or at least have a TeX symbol set defined for them.

I have written to ask about a version with separate images/glyphs for each fallacy. That would make it easier to “tag” arguments.