Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

September 27, 2019

Weaponizing Your Information?

Filed under: Advertising,Fake News,Social Media,Social Networks — Patrick Durusau @ 8:29 pm

Study: Weaponized misinformation from political parties is now a global problem by Cara Curtis.

Social media, a tool created to guard freedom of speech and democracy, has increasingly been used in more sinister ways.

Memory check! Memory check! Is that how you remember the rise of social media? Have you ever thought of Usenet as guarding freedom of speech (maybe) or democracy (unlikely)?

The Global Information Disorder report, the basis for Curtis’ report, treats techniques and tactics at a high level, leaving you to fill in the details of an information campaign. I prefer the term “information,” as “disinformation” is in the eye of the reader.

I don’t have cites (apologies) to advertising literature on the shaping of information content for ads. Techniques known to work for advertisers, who have spent decades and $billions sharpening their techniques, should work for spreading information as well. Suggested literature?

September 11, 2018

EveryCRSReport.com [Better than Liberal and Conservative News Sources]

Filed under: Fake News,Journalism,News,Reporting — Patrick Durusau @ 8:42 pm

EveryCRSReport.com

From the homepage:

We’re publishing reports by Congress’s think tank, the Congressional Research Service, which provides valuable insight and non-partisan analysis of issues of public debate. These reports are already available to the well-connected — we’re making them available to everyone for free.

From the about page:

Congressional Research Service reports are the best way for anyone to quickly get up to speed on major political issues without having to worry about spin — from the same source Congress uses.

CRS is Congress’ think tank, and its reports are relied upon by academics, businesses, judges, policy advocates, students, librarians, journalists, and policymakers for accurate and timely analysis of important policy issues. The reports are not classified and do not contain individualized advice to any specific member of Congress. (More: What is a CRS report?)

Congressional Research Service reports have a point of view. Any report worth reading has a point of view. CRS reports name and evaluate their sources and give reasons for the views reported, empowering readers to evaluate the reports rather than swallowing them whole. (Contrast that with average media reporting.)

For example, Decision to Stop U.S. Funding of UNRWA (for Palestinian Refugees) gives a brief background on this controversial issue, followed by a factual recitation of events up to the date of the report, an evaluation of the possible impact of ending funding for UNRWA, followed by options for Congress.

If you are at all aware of the bitterness that surrounds any discussion of Palestine and/or the Palestinians, the CRS report is a tribute to the even-handedness of the Congressional Research Service.

New reports appear daily so check back often and support this project.

June 30, 2018

What’s Your Viral Spread Score?

Filed under: Fake News,News,Social Media,Social Networks — Patrick Durusau @ 4:13 pm

The Hoaxy homepage reports:

Visualize the spread of claims and fact checking.

Of course, when you get into the details, out of the box, Hoaxy isn’t set up to measure your ability to spread virally.

From the FAQ:


How does Hoaxy search work?

The Hoaxy corpus tracks the social sharing of links to stories published by two types of websites: (1) Low-credibility sources that often publish inaccurate, unverified, or satirical claims according to lists compiled and published by reputable news and fact-checking organizations. (2) Independent fact-checking organizations, such as snopes.com, politifact.com, and factcheck.org, that routinely fact check unverified claims.

What does the visualization show?

Hoaxy visualizes two aspects of the spread of claims and fact checking: temporal trends and diffusion networks. Temporal trends plot the cumulative number of Twitter shares over time. The user can zoom in on any time interval. Diffusion networks display how claims spread from person to person. Each node is a Twitter account and two nodes are connected if a link to a story is passed between those two accounts via retweets, replies, quotes, or mentions. The color of a connection indicates the type of information: claims and fact checks. Clicking on an edge reveals the tweet(s) and the link to the shared story; clicking on a node reveals claims shared by the corresponding user. The network may be pruned for performance.

(emphasis in original)
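Hoaxy’s network model is simple enough to sketch in a few lines of Python. The accounts and edges below are invented; the point is the mechanics: nodes are accounts, directed edges are shares, and each edge carries a type, claim or fact check, so the two can be traced separately.

```python
from collections import defaultdict, deque

# Hypothetical diffusion network: each edge is (source_account, target_account, info_type),
# where info_type is "claim" or "fact_check", mirroring Hoaxy's two edge colors.
edges = [
    ("alice", "bob", "claim"),
    ("bob", "carol", "claim"),
    ("bob", "dave", "claim"),
    ("erin", "bob", "fact_check"),
    ("carol", "frank", "claim"),
]

# Adjacency list keyed by (account, info_type), so claims and fact checks
# can be traced as separate diffusion networks over the same accounts.
graph = defaultdict(list)
for src, dst, kind in edges:
    graph[(src, kind)].append(dst)

def reach(account, kind):
    """Accounts reachable from `account` following only edges of one info type (BFS)."""
    seen, queue = {account}, deque([account])
    while queue:
        node = queue.popleft()
        for nxt in graph[(node, kind)]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {account}

print(sorted(reach("alice", "claim")))      # everyone the claim reached from alice
print(sorted(reach("erin", "fact_check")))  # everyone the fact check reached from erin
```

Here the claim starting at alice reaches four accounts while the fact check reaches one, which is exactly the asymmetry Hoaxy’s visualizations tend to show.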

Bottom line is you won’t be able to ask someone for their Hoaxy score. Sorry.

On the bright side, the Hoaxy frontend and backend source code is available, so you can create a customized version (not using the Hoaxy name) with different capabilities.

The other good news is that you can study the techniques of messages that do spread virally, so you can get better at creating messages that go viral.

May 4, 2018

Propaganda For Our Own Good

Filed under: Fake News,Government,Journalism,News — Patrick Durusau @ 10:43 pm

US and Western government propaganda has been plentiful for decades but Caitlin Johnstone uncovers why a prominent think tank is calling for more Western propaganda.

Atlantic Council Explains Why We Need To Be Propagandized For Our Own Good

From the post:

I sometimes try to get establishment loyalists to explain to me exactly why we’re all meant to be terrified of this “Russian propaganda” thing they keep carrying on about. What is the threat, specifically? That it makes the public less willing to go to war with Russia and its allies? That it makes us less trusting of lying, torturing, coup-staging intelligence agencies? Does accidentally catching a glimpse of that green RT logo turn you to stone like Medusa, or melt your face like in Raiders of the Lost Ark?

“Well, it makes us lose trust in our institutions,” is the most common reply.

Okay. So? Where’s the threat there? We know for a fact that we’ve been lied to by those institutions. Iraq isn’t just something we imagined. We should be skeptical of claims made by western governments, intelligence agencies and mass media. How specifically is that skepticism dangerous?

A great read as always but I depart from Johnstone when she concludes:


If our dear leaders are so worried about our losing faith in our institutions, they shouldn’t be concerning themselves with manipulating us into trusting them, they should be making those institutions more trustworthy.

Don’t manipulate better, be better. The fact that an influential think tank is now openly advocating the former over the latter should concern us all.

I tweeted to George Lakoff quite recently, asking for more explicit treatment of how to use persuasion techniques.

Being able to recognize persuasion used against you, in propaganda for example, is good. Being able to construct such techniques to use in propaganda against others is great! Sadly, no response from Lakoff. Perhaps he was busy.

The “other side,” your pick, isn’t going to stop using propaganda. Hoping, wishing, and praying they will are exercises in being ineffectual.

If you seek to counter decades of finely honed war-mongering, exploitive Western narrative, be prepared to use propaganda and to use it well.

March 11, 2018

Spreading “Fake News,” Science Says It Wasn’t Russian Bots

Filed under: Fake News,Politics,Twitter — Patrick Durusau @ 2:04 pm

The spread of true and false news online by Soroush Vosoughi, Deb Roy, and Sinan Aral. (Science 09 Mar 2018: Vol. 359, Issue 6380, pp. 1146-1151 DOI: 10.1126/science.aap9559)

Abstract:

We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.

Real data science. The team had access to all the Twitter data and not a cherry-picked selection, which of course can’t be shared due to Twitter rules, or so say ISIS propaganda scholars.
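The paper’s diffusion measures are worth understanding before reading further: a cascade’s depth is the longest chain of retweet hops from the original tweet, and its breadth is the largest number of retweets at any single depth. A toy sketch, with an invented cascade:

```python
from collections import Counter

# Toy retweet cascade: child tweet -> parent tweet it retweeted.
# "root" stands for the original story tweet.
parents = {
    "t1": "root", "t2": "root",
    "t3": "t1", "t4": "t1", "t5": "t2",
    "t6": "t3",
}

def depth_of(tweet):
    """Number of retweet hops from the original tweet."""
    d = 0
    while tweet != "root":
        tweet = parents[tweet]
        d += 1
    return d

depths = Counter(depth_of(t) for t in parents)
cascade_depth = max(depths)             # deepest retweet chain
cascade_breadth = max(depths.values())  # most retweets at any single depth
cascade_size = len(parents)             # total retweets

print(cascade_depth, cascade_breadth, cascade_size)  # → 3 3 6
```

Vosoughi, Roy, and Aral computed these measures over millions of real cascades; false stories scored higher on all of them.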

The paper merits a slow read but highlights for the impatient:

  1. Don’t invest in bots or high-profile Twitter users for the 2018 mid-term elections.
  2. Craft messages with a high novelty factor that disfavor your candidate’s opponents.
  3. Your messages should inspire fear, disgust and surprise.

Democrats working hard to lose the 2018 mid-terms will cry you a river about issues, true facts, engagement on the issues and a host of other ideas used to explain losses to losers.

There’s still time to elect a progressive Congress in 2018.

Are you game?

March 8, 2018

Contesting the Right to Deliver Disinformation

Filed under: Fake News,Journalism,News — Patrick Durusau @ 8:42 pm

Eric Singerman reports on a recent conference titled: Understanding and Addressing the Disinformation Ecosystem.

He summarizes the conference saying:

The problem of mis- and disinformation is far more complex than the current obsession with Russian troll factories. It’s the product of the platforms that distribute this information, the audiences that consume it, the journalist and fact-checkers that try to correct it – and even the researchers who study it.

In mid-December, First Draft, the Annenberg School of Communication at the University of Pennsylvania and the Knight Foundation brought academics, journalists, fact-checkers, technologists and funders together in a two-day workshop to discuss the challenges produced by the current disinformation ecosystem. The convening was intended to highlight relevant research, share best-practices, identify key questions of scholarly and practical concern and outline a potential research agenda designed to answer these questions.

In preparation for the workshop, a number of attendees prepared short papers that could act as starting points for discussion. These papers covered a broad range of topics – from the ways that we define false and harmful content, to the dystopian future of computer-generated visual disinformation.

Download the papers here.

Singerman points out the very first essay concedes that “fake news” isn’t anything new. Although I would read Schudson and Zelizer (authors of the first paper) with care. They contend:


Fake news lessened in centrality only in the late 1800s as printed news, particularly in Britain and the United States, came to center on what Jean Chalaby called “fact-centered discursive practices” and people realized that newspapers could compete with one another not simply on the basis of partisan affiliation or on the quality of philosophical and political essays but on the immediacy and accuracy of factual reports (Chalaby 1996).

I’m sorry, that’s just factually incorrect. The 1890s were the age of “yellow journalism,” a statement confirmed by the Digital Public Library of America‘s resource collection: Fake News in the 1890s: Yellow Journalism:

Alternative facts, fake news, and post-truth have become common terms in the contemporary news industry. Today, social media platforms allow sensational news to “go viral,” crowdsourced news from ordinary people to compete with professional reporting, and public figures in offices as high as the US presidency to bypass established media outlets when sharing news. However, dramatic reporting in daily news coverage predates the smartphone and tablet by over a century. In the late nineteenth century, the news media war between Joseph Pulitzer’s New York World and William Randolph Hearst’s New York Journal resulted in the rise of yellow journalism, as each newspaper used sensationalism and manipulated facts to increase sales and attract readers.

Many trace the origin of yellow journalism to coverage of the sinking of the USS Maine in Havana Harbor on February 15, 1898, and America’s entry in the Spanish-American War. Both papers’ reporting on this event featured sensational headlines, jaw-dropping images, bold fonts, and aggrandizement of facts, which influenced public opinion and helped incite America’s involvement in what Hearst termed the “Journal’s War.”

The practice, and nomenclature, of yellow journalism actually predates the war, however. It originated with a popular comic strip character known as The Yellow Kid in Hogan’s Alley. Created by Richard F. Outcault in 1895, Hogan’s Alley was published in color by Pulitzer’s New York World. When circulation increased at the New York World, William Randolph Hearst lured Outcault to his newspaper, the New York Journal. Pulitzer fought back by hiring another artist to continue the comic strip in his newspaper.

The period of peak yellow journalism by the two New York papers ended in the late 1890s, and each shifted priorities, but still included investigative exposés, partisan political coverage, and other articles designed to attract readers. Yellow journalism, past and present, conflicts with the principles of journalistic integrity. Today, media consumers will still encounter sensational journalism in print, on television, and online, as media outlets use eye-catching headlines to compete for audiences. To distinguish truth from “fake news,” readers must seek multiple viewpoints, verify sources, and investigate evidence provided by journalists to support their claims.

You can see the evidence relied upon by the DPLA for its claims about yellow journalism here: Fake News in the 1890s: Yellow Journalism.

Why Schudson and Zelizer thought Chalaby, J. “Journalism as an Anglo-American Invention,” European Journal of Communication 11 (3), 1996, 303-326, supported their case isn’t clear.

If you read the Chalaby article, you find it is primarily concerned with contrasting the French press with Anglo-American practices, a comparison in which the French come off a distant second best.

More to the point, neither the New York World, the New York Journal, nor yellow journalism appears anywhere in the Chalaby article. Check for yourself: Journalism as an Anglo-American Invention.

Chalaby does locate the origin of “fact-centered discursive practices” in the 1890s, but the absence of any mention of the journalism that led to the Spanish-American War casts doubt on how much we should credit Chalaby’s knowledge of US journalism.

I haven’t checked the other footnotes of Schudson and Zelizer, I leave that as an exercise for interested readers.

I do think Schudson and Zelizer capture the main driver of concern over “fake news” when they say:

First, there is a great anxiety today about the border between professional journalists and others who through digital media have easy access to promoting their ideas, perspectives, factual reports, pranks, inanities, conspiracy theories, fakes and lies….

Despite being framed as a contest between factual reporting and disinformation, the dispute over disinformation/fake news is over the right to profit from disinformation/fake news.

If you need a modern example of yellow journalism, consider the ongoing media frenzy over Russian “interference” in US elections.

How often do you hear reports of context that include instances of US-sponsored assassinations, funded and armed government overthrows, active military interference with both elections and governments, by the US?

What? Some Russians bought Facebook ads and used election hashtags on Twitter? That compares to overthrowing other governments? See The long history of the U.S. interfering with elections elsewhere (the tip of the iceberg).

The constant hyperbole in the “Russian interference” story is a clue that journalists and social media are re-enacting the roles played by the New York World and the New York Journal, which led to the Spanish-American War.

Truth be told, we should thank social media for the free distribution of disinformation, previously available only by subscription.

Discerning what is or is not accurate information, as always, falls on the shoulders of readers. It has ever been thus.

February 28, 2018

Liberals Amping Right Wing Conspiracies

Filed under: Fake News,News,Social Media,Social Networks — Patrick Durusau @ 9:19 pm

You read the headline correctly: Liberals Amping Right Wing Conspiracies.

It’s the only reasonable conclusion after reading Molly McKew‘s post: How Liberals Amped up a Paranoid Shooting Conspiracy Theory.

From the post:


This terminology camouflages the war for minds that is underway on social media platforms, the impact that this has on our cognitive capabilities over time, and the extent to which automation is being engaged to gain advantage. The assumption, for example, that other would-be participants in social media information wars who choose to use these same tactics will gain the same capabilities or advantage is not necessarily true. This is a playing field that is hard to level: Amplification networks have data-driven, machine learning components that work better with refinement over time. You can’t just turn one on and expect it to work perfectly.

The vast amounts of content being uploaded every minute cannot possibly be reviewed by human beings. Algorithms, and the poets who sculpt them, are thus given an increasingly outsized role in the shape of our information environment. Human minds are on a battlefield between warring AIs—caught in the crossfire between forces we can’t see, sometimes as collateral damage and sometimes as unwitting participants. In this blackbox algorithmic wonderland, we don’t know if we are picking up a gun or a shield.

McKew has a great description of the amplification in the Parkland shooting conspiracy case, but it’s after the fact and not a basis for predicting the next amplification event.

Any number of research projects suggest themselves:

  • Observing and testing social media algorithms against content
  • Discerning patterns in amplified content
  • Testing refinement of content
  • Building automated tools to apply lessons in amplification

No doubt all those are underway in various guises for any number of reasons. But are you going to share in those results to protect your causes?

February 22, 2018

If You Like “Fake News,” You Will Love “Fake Science”

Filed under: Fake News,Media,Science,Skepticism — Patrick Durusau @ 4:53 pm

Prestigious Science Journals Struggle to Reach Even Average Reliability by Björn Brembs.

Abstract:

In which journal a scientist publishes is considered one of the most crucial factors determining their career. The underlying common assumption is that only the best scientists manage to publish in a highly selective tier of the most prestigious journals. However, data from several lines of evidence suggest that the methodological quality of scientific experiments does not increase with increasing rank of the journal. On the contrary, an accumulating body of evidence suggests the inverse: methodological quality and, consequently, reliability of published research works in several fields may be decreasing with increasing journal rank. The data supporting these conclusions circumvent confounding factors such as increased readership and scrutiny for these journals, focusing instead on quantifiable indicators of methodological soundness in the published literature, relying on, in part, semi-automated data extraction from often thousands of publications at a time. With the accumulating evidence over the last decade grew the realization that the very existence of scholarly journals, due to their inherent hierarchy, constitutes one of the major threats to publicly funded science: hiring, promoting and funding scientists who publish unreliable science eventually erodes public trust in science.

Facts, even “scientific facts,” should be questioned, tested and never blindly accepted.

Knowing a report appears in Nature, or Science, or (zine of your choice), helps you find it. Beyond that, you have to read and evaluate the publication to credit it with more than a place of publication.

Reading beyond abstracts or click-bait headlines, checking footnotes or procedures, do those things very often and you will be in danger of becoming a critical reader. Careful!

February 7, 2018

Were You Pwned by the “Human Cat” Story?

Filed under: Facebook,Fake News — Patrick Durusau @ 5:55 pm

Overseas Fake News Publishers Use Facebook’s Instant Articles To Bring In More Cash by Jane Lytvynenko

From the post:

While some mainstream publishers are abandoning Facebook’s Instant Articles, fake news sites based overseas are taking advantage of the format — and in some cases Facebook itself is earning revenue from their false stories.

BuzzFeed News found 29 Facebook pages, and associated websites, that are using Instant Articles to help their completely false stories load faster on Facebook. At least 24 of these pages are also signed up with Facebook Audience Network, meaning Facebook itself earns a share of revenue from the fake news being read on its platform.

Launched in 2015, Instant Articles offer a way for publishers to have their articles load quickly and natively within the Facebook mobile app. Publishers can insert their own ads or use Facebook’s ad network, Audience Network, to automatically place advertisements into their articles. Facebook takes a cut of the revenue when sites monetize with Audience Network.

“We’re against false news and want no part of it on our platform; including in Instant Articles,” said an email statement from a Facebook spokesperson. “We’ve launched a comprehensive effort across all products to take on these scammers, and we’re currently hosting third-party fact checkers from around the world to understand how we can more effectively solve the problem.”

The spokesperson did not respond to questions about the use of Instant Articles by spammers and fake news publishers, or about the fact that Facebook’s ad network was also being used for monetization. The articles sent to Facebook by BuzzFeed News were later removed from the platform. The company also removes publishers from Instant Articles if they’ve been flagged by third-party fact-checkers.

Really? You could be pwned by a “human cat” story?

Why should I be morally outraged and/or willing to devote attention to stopping that type of fake news?

Or ask anyone else to devote their resources to it?

Would you seek out Flat Earthers to dispel their delusions? If not, leave the “fake news” to people who seem to enjoy it. It’s their dime.

January 11, 2018

Fact Forward: Fact Free Assault on Online Misinformation

Filed under: Fake News,Journalism,News,Reporting — Patrick Durusau @ 3:00 pm

Fact Forward: If you had $50,000, how would you change fact-checking?

From the post:

The International Fact-Checking Network wants to support your next big idea.

We recognize the importance of making innovation a key part of fact-checking in the age of online misinformation and we are also aware that innovation requires investment. For those reasons, we are opening Fact Forward. A call for fact-checking organizations and/or teams of journalists, designers, developers or data scientists to submit projects that can represent a paradigmatic innovation for fact-checkers in any of these areas: 1) formats, 2) business models 3) technology-assisted fact-checking.

With Fact Forward, the IFCN will grant 50,000 USD to the winning project.

For this fund, an innovative project is defined as one that provides a distinct, novel user experience that seamlessly integrates content, design, and business strategy. The innovation should serve both the audience and the organization.

The vague definition of “innovative project” leaves the impression the judges have no expertise with software development. A quick check of the judges’ credentials reveals that is indeed the case. Be forewarned: fluffy pro-fact-checking phrases are likely to outweigh any technical merit in your proposals.

If you doubt this is an ideological project, consider the implied premises of “…the age of online misinformation….” Conceding that online misinformation does exist, those include:

1. Online misinformation influences voters:

What evidence does exist is reported by Hunt Allcott and Matthew Gentzkow in Social Media and Fake News in the 2016 Election, whose abstract reads:

Following the 2016 U.S. presidential election, many have expressed concern about the effects of false stories (“fake news”), circulated largely through social media. We discuss the economics of fake news and present new data on its consumption prior to the election. Drawing on web browsing data, archives of fact-checking websites, and results from a new online survey, we find: (i) social media was an important but not dominant source of election news, with 14 percent of Americans calling social media their “most important” source; (ii) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times; (iii) the average American adult saw on the order of one or perhaps several fake news stories in the months around the election, with just over half of those who recalled seeing them believing them; and (iv) people are much more likely to believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks.

Or as summarized in Don’t blame the election on fake news. Blame it on the media by Duncan J. Watts and David M. Rothschild:


In addition, given what is known about the impact of online information on opinions, even the high-end estimates of fake news penetration would be unlikely to have had a meaningful impact on voter behavior. For example, a recent study by two economists, Hunt Allcott and Matthew Gentzkow, estimates that “the average US adult read and remembered on the order of one or perhaps several fake news articles during the election period, with higher exposure to pro-Trump articles than pro-Clinton articles.” In turn, they estimate that “if one fake news article were about as persuasive as one TV campaign ad, the fake news in our database would have changed vote shares by an amount on the order of hundredths of a percentage point.” As the authors acknowledge, fake news stories could have been more influential than this back-of-the-envelope calculation suggests for a number of reasons (e.g., they only considered a subset of all such stories; the fake stories may have been concentrated on specific segments of the population, who in turn could have had a disproportionate impact on the election outcome; fake news stories could have exerted more influence over readers’ opinions than campaign ads). Nevertheless, their influence would have had to be much larger—roughly 30 times as large—to account for Trump’s margin of victory in the key states on which the election outcome depended.

Just as one example, online advertising is routinely studied; see Understanding Interactive Online Advertising: Congruence and Product Involvement in Highly and Lowly Arousing, Skippable Video Ads by Daniel Belanche, Carlos Flavián, and Alfredo Pérez-Rueda. But the IFCN offers no similar studies for what it construes as “…online misinformation….”

Without some evidence for and measurement of the impact of “…online misinformation…,” what is the criteria for success for your project?

2. Correcting online misinformation influences voters:

The second, even more problematic assumption in this project is that correcting online misinformation influences voters.

Facts, even “correct” facts do a poor job of changing opinions. Even the lay literature is legion on this point: Facts Don’t Change People’s Minds. Here’s What Does; Why Facts Don’t Change Our Minds; The Backfire Effect: Why Facts Don’t Win Arguments; In the battle to change people’s minds, desires come before facts; The post-fact era.

Any studies to the contrary? Surely the IFCN has some evidence that correcting misinformation changes opinions or influences voter behavior?

(I reserve this space for any studies supplied by the IFCN or others to support that premise.)

I don’t disagree with fact checking per se. Readers should be able to rely upon representations of fact. But Glenn Greenwald’s The U.S. Media Suffered Its Most Humiliating Debacle in Ages and Now Refuses All Transparency Over What Happened makes it clear that misinformation isn’t limited to appearing online.

One practical suggestion: If $50,000 is enough for your participation in an ideological project, use sentiment analysis to identify pro-Trump materials. Anything “pro-Trump” is, for some funders, “misinformation.”
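For the curious, here is a toy version of that suggestion, with an invented lexicon and invented example texts. A funded project would use a trained classifier, but the mechanics of scoring text toward a target term are the same:

```python
# Toy lexicon-based stance scoring toward a target term. Real sentiment analysis
# would use a trained model; this just counts loaded words in texts mentioning the target.
POSITIVE = {"great", "win", "strong", "best"}
NEGATIVE = {"disaster", "corrupt", "weak", "failing"}

def stance(text, target="trump"):
    """Positive score = favorable toward target, negative = unfavorable, 0 = no mention."""
    words = text.lower().split()
    if target not in words:
        return 0
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(stance("Trump win was great for the country"))    # positive -> "pro"
print(stance("Trump policies are a corrupt disaster"))  # negative -> "anti"
```

Score above zero, flag it as “pro-Trump,” and per the premise above, as “misinformation.” That is about the level of rigor an ideological funder requires.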

PS: I didn’t vote for Trump and loathe his administration. However, pursuing fantasies to explain his victory in 2016 won’t prevent a repeat of same in 2020. Whether he is defeated with misinformation or correct information makes no difference to me. His defeat is the only priority.

Practical projects with a defeat of Trump in 2020 goal are always of interest. Ping me.

December 24, 2017

A/B Tests for Disinformation/Fake News?

Filed under: A/B Tests,Fake News,Journalism,News — Patrick Durusau @ 2:59 pm

Digital Shadows says it:

Digital Shadows monitors, manages, and remediates digital risk across the widest range of sources on the visible, deep, and dark web to protect your organization.

It recently published The Business of Disinformation: A Taxonomy – Fake news is more than a political battlecry.

It’s not long, fourteen (14) pages and it has the usual claims about disinformation and fake news you know from other sources.

However, for all its breathless prose and promotion of its solution, there is no mention of any A/B tests to show that disinformation or fake news is effective in general or against you in particular.
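Such a test isn’t exotic. Given impressions and shares for two variants of a message, a two-proportion z-test tells you whether one actually spreads better. The numbers below are invented:

```python
import math

# Hypothetical A/B test: impressions and shares for two variants of a message.
n_a, shares_a = 10000, 230   # variant A
n_b, shares_b = 10000, 310   # variant B

p_a, p_b = shares_a / n_a, shares_b / n_b
p_pool = (shares_a + shares_b) / (n_a + n_b)          # pooled share rate
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

print(round(z, 2))  # |z| > 1.96 means the difference is significant at the 5% level
```

If Digital Shadows, or anyone else, had run even this much analysis on disinformation campaigns, you would expect the report to say so.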

The value proposition offered by Digital Shadows is everyone says disinformation and fake news are important, therefore spend money with us to combat it.

Alien abduction would be important but I won’t be buying alien abduction insurance or protection services any time soon.

Proof of the effectiveness of disinformation and fake news is on a par with proof of alien abduction.

Anything is possible, but spending money or creating policies requires proof.

Where’s the proof for the effectiveness of disinformation or fake news? No proof, no spending. Yes?

November 3, 2017

Scoop Mainstream Media on “… 6 Russian Government Officials Involved In DNC Hack”

Filed under: Fake News,Journalism,News,Reporting — Patrick Durusau @ 1:11 pm

You have read US Identifies 6 Russian Government Officials Involved In DNC Hack or similar coverage on Russian “interference” with the 2016 presidential election.

Here’s your opportunity to scoop mainstream media on the identities of the “…6 Russian Government Officials Involved In DNC Hack.”

Resources to use:

Russian Political Directory 2017

The Russian Political Directory is the definitive guide to people in power throughout Russia. All the top decision-makers are included in this one-volume publication, which details hundreds of government ministries, departments, agencies, corporations and their connected bodies. The Directory is a trusted resource for studies and research in all matters of Russian government, politics and civil society activities. Government organization entries contain the names and titles of officials, postal and e-mail addresses, telephone, fax numbers plus an overview of their main activities.

Truly comprehensive in scope, and listing all federal and regional government ministries, departments, agencies, corporations and their connected bodies, this directory provides a uniquely comprehensive view of government activity.

For playing “…guess a possible defendant…,” $200 is a bit pricey, but opening to a random page is a more principled approach than you will see from the Justice Department in its search for defendants.

If timeliness isn’t an issue, consider the Directory of Soviet Officials: Republic Organizations:

From the preface:

The Directory of Soviet Officials identifies individuals who hold positions in selected party, government, and public organizations of the USSR. It may be used to find the incumbents of given positions within an organization or the positions of given individuals. For some organizations, it serves as a guide to the internal structure of the organization.

This directory dates from 1987, but since Justice only needs Russian-sounding names and not physical defendants, consider it a backup source for possible defendants.

For the absolute latest information, at least those listed, consider The Russian Government. The official site for the Russian government and about as dull as any website you are likely to encounter. Sorry, but that’s true.

Last but by no means least, check out Johnson’s Russia List, which is an enormous collection of resources on Russia. It has a 2001 listing of online databases for Russian personalities. It also has a wealth of Russian names for your defendant lottery list.

When Justice does randomly name some defendants, ask yourself and Justice:

  1. What witness statements or documents link this person to the alleged hacking?
  2. What witness statements or documents prove a direct order from Putin to a particular defendant?
  3. What witness statements or documents establish the DNC “hack?” (It may well have been a leak.)
  4. Can you independently verify the witness statements or documents?

Any evidence that cannot be disclosed because of national security considerations should be automatically excluded from your reporting. If you can’t verify it, then it’s not a fact. Right?

Justice won’t have any direct evidence on anyone they name or on Putin. It strains the imagination to think Russian security is that bad, assuming any hack took place at all.

No direct evidence means Justice is posturing for reasons best known to it. Don’t be a patsy of Justice: press for direct evidence, dates, documents, witnesses.

Or just randomly select six defendants and see if your random selection matches that of Justice.

October 30, 2017

Russians Influence 2017 World Series #Upsidasium (Fake News)

Filed under: Fake News,Humor,Journalism — Patrick Durusau @ 8:36 am

Unnamed sources close to moose and squirrel, who are familiar with the evidence, say Russians are likely responsible for contamination of 2017 World Series baseballs with Upsidaisium. The existence and properties of Upsidaisium was documented in the early 1960s. This is the first known use of Upsidaisium to interfere with the World Series.

Sports Illustrated has photographic evidence that World Series baseballs are “slicker” than a “normal” baseball, one sign of the use of Upsidaisium.

Unfortunately, Upsidaisium decays completely after the impact of being hit, into a substance indistinguishable from cowhide.

Should you obtain more unattributed statements from sources close to moose and squirrel, please add them in the comments below.

Thanks!

Journalists/Fake News hunters: Part truth, part fiction, just like reports of Russian “influence” (whatever the hell that means) in the 2016 presidential election and fears of Kaspersky Lab software.

Yes, Russia exists; yes, there was a 2016 presidential election; yes, Clinton is likely disliked by Putin, as she is by millions of others; yes, Wikileaks conducted a clever ad campaign with leaked emails, bolstered by major news outlets; but like Upsidaisium, there is no evidence tying Russians, much less Putin, to anything to do with the 2016 election.

A lot of supposes, maybes, and could-have-beens are reported, but no evidence. Yet US media outlets have kept repeating “Russia influenced the 2016 election” until even reasonable people assume it is true.

Don’t be complicit in that lie. Make #Upsidasium the marker for such fake news.

October 19, 2017

Gender Discrimination and Pew – The Obvious and Fake News

Filed under: Fake News,Feminism,Journalism,News — Patrick Durusau @ 9:07 pm

Women are more concerned than men about gender discrimination in tech industry by Kim Parker and Cary Funk.

From the post:

Women in the U.S. are substantially more likely than men to say gender discrimination is a major problem in the technology industry, according to a Pew Research Center survey conducted in July and August.

The survey comes amid public debate about underrepresentation and treatment of women – as well as racial and ethnic minorities – in the industry. Critics of Silicon Valley have cited high-profile cases as evidence that the industry has fostered a hostile workplace culture. For their part, tech companies point to their commitment to increasing workforce diversity, even as some employees claim the industry is increasingly hostile to white males.

Was Pew repeating old news?

Well, there’s Vogue: New Study Finds Gender Discrimination in the Tech Industry Is Still Sky-High (2016), Forbes: The Lack Of Diversity In Tech Is A Cultural Issue (2015), Gender Discrimination and Sexism in the Technology Industry (2014), and Women Matter (2013), to cite only a few of the literally thousands of studies and surveys onto which the repetitive Pew report can be stacked.

Why waste Pew funds to repeat what was commonly known and demonstrated by published research?

One not very generous explanation is that the survey provided an opportunity to repeat “fake news.” You know, news whose source you can’t remember but which has credibility because you hear it so often?

“Fake news,” is the correct category for:

…even as some employees claim the industry is increasingly hostile to white males.

Repeating that claim in a Pew publication legitimates the equivalent of cries coming from an asylum.

One quick quote from Forbes, hardly a bastion of forward social thinking, dispels the “hostile to white male” fantasy in The Lack Of Diversity In Tech Is A Cultural Issue:


It has been a commonly held belief that the gender gap in tech is primarily a pipeline issue; that there are simply not enough girls studying math and science. Recently updated information indicates an equal number of high school girls and boys participating in STEM electives, and at Stanford and Berkeley, 50% of the introductory computer science students are women. That may be the case, but the U.S. Census Bureau reported last year that twice as many men as women with the same qualifications were working in STEM fields.

A USA Today study discloses that top universities graduate black and Hispanic computer science and computer engineering students at twice the rate that leading technology companies hire them. Although these companies state they don’t have a qualified pool of applicants, the evidence does not support that claim.

When 2/3 of the workers in a field are male, it strains the imagination to credit claims of “hostility.”

I have no fact based explanation for the reports of “hostility” to white males.

Speculations abound: perhaps they are so obnoxious that even other males can’t stand them? Perhaps they are using “hostility” as a cover for incompetence? Who knows?

What is known is that money is needed to address sexism in the workplace (not to repeat the research of others), and fake news such as “hostile to white males” should not be repeated by reputable sources like Pew.

Powered by WordPress