Archive for November, 2016

Internet Censor(s) Spotted in Mirror

Wednesday, November 30th, 2016

How to solve Facebook’s fake news problem: experts pitch their ideas by Nicky Woolf.

From the post:

The impact of fake news, propaganda and misinformation has been widely scrutinized since the US election. Fake news actually outperformed real news on Facebook during the final weeks of the election campaign, according to an analysis by Buzzfeed, and even outgoing president Barack Obama has expressed his concerns.

But a growing cadre of technologists, academics and media experts are now beginning the quixotic process of trying to think up solutions to the problem, starting with a rambling 100+ page open Google document set up by Upworthy founder Eli Pariser.

Woolf captures the essential wrongness of the now 120 pages of suggestions, quoting Claire Wardle:


“The biggest challenge is who wants to be the arbiter of truth and what truth is,” said Claire Wardle, research director for the Tow Center for Digital Journalism at Columbia University. “The way that people receive information now is increasingly via social networks, so any solution that anybody comes up with, the social networks have to be on board.”

Don’t worry, selecting the arbiter of truth and what truth is won’t be difficult.

The authors of these suggestions see their favorite candidate every day:

[Image: a mirror]

So long as they aren’t seeing my image (substitute your name/image) in the mirror, I’m not interested in any censorship proposal.

Personally, even if offered the post of Internet Censor, I would turn it down.

I can’t speak for you but I am unable to be equally impartial to all. Nor do I trust anyone else to be equally impartial.

The “solution” to “fake news,” if you think that is a meaningful term, is more news, not less.

Enable users to easily compare and contrast news sources, if they so choose. Freedom means being free to make mistakes as well as good choices (from some point of view).

Constitution Free Zone [The Only Advantage To Not Living In Hawaii]

Wednesday, November 30th, 2016

Know Your Rights: The Government’s 100-Mile “Border” Zone – Map

From the post:

Many people think that border-related policies impact only people living in border towns like El Paso or San Diego. The reality is that Border Patrol’s interior enforcement operations encroach deep into and across the United States, affecting the majority of Americans.

Roughly two-thirds of the United States’ population, about 200 million people, lives within the 100-mile zone that an outdated federal regulation defines as the border zone—that is, within 100 miles of a U.S. land or coastal border.

Although this zone is not literally “Constitution free”—constitutional protections do still apply—the Border Patrol frequently ignores those protections and runs roughshod over individuals’ civil liberties.

Learn more about the government’s 100-mile border zone.

Read the ACLU factsheet on Customs and Border Protection’s 100-mile zone

[Image: ACLU map of the 100-mile border zone]

The ACLU map demonstrates there are no locations in Hawaii where the border zone does not reach.

Now you can name the one advantage of living outside of Hawaii, just in case it comes up on Jeopardy.

😉

In some ways, this map is misleading.

The U.S. government runs roughshod over everyone within and without its borders.

Ask the people of Aleppo for tales of the American government. A city rumored to have been founded in the 6th millennium BCE may be about to become the largest graveyard in history.

Be sure to mention that on holiday cards to the Obama White House.

Urgent: Update Your Tor Browser [Today, Yes, Today] + Aside on shallow bugs

Wednesday, November 30th, 2016

Tor Browser 6.0.7 is released

From the webpage:

Tor Browser 6.0.7 is now available from the Tor Browser Project page and also from our distribution directory.

This release features an important security update to Firefox and contains, in addition to that, an update to NoScript (2.9.5.2).

The security flaw responsible for this urgent release is already actively exploited on Windows systems. Even though there is currently, to the best of our knowledge, no similar exploit for OS X or Linux users available the underlying bug affects those platforms as well. Thus we strongly recommend that all users apply the update to their Tor Browser immediately. A restart is required for it to take effect.

Tor Browser users who had set their security slider to “High” are believed to have been safe from this vulnerability.

We will have alpha and hardened Tor Browser updates out shortly. In the meantime, users of these series can mitigate the security flaw in at least two ways:

1) Set the security slider to “High” as this is preventing the exploit from working.
2) Switch to the stable series until updates for alpha and hardened are available, too.

Here is the full changelog since 6.0.6:

  • All Platforms
    • Update Firefox to 45.5.1esr
    • Update NoScript to 2.9.5.2

A reminder from the Tor project that:

many eyes make all bugs shallow

is marketing talk for open source, nothing more.

For more on that theme: Linus’s Law aka “Many Eyes Make All Bugs Shallow” by Jeff Jones.

A little over 10 years old now, predating Heartbleed for example, but still an interesting read.

I am and remain an open source advocate, but not on the basis of false claims of bug finding. Open source improves your chances of finding spyware. No guarantees, but it improves your chances.

Why any government or enterprise would run closed source software is a mystery to me. They might as well upload all their work to the NSA on a weekly basis. At least the uploads would create a reminder of the risk, a reminder that is missing with closed source software.

Hacking Journalists (Of self-protection)

Wednesday, November 30th, 2016

Inside the mind of digital attackers: Part 1 — The connection by Justin Kosslyn.

From the post:

John has a target: name, country, brief context, and maybe the email address or website. John has been given a goal: maybe eavesdropping, taking a website offline, or stealing intellectual property. And John has been given constraints: maybe he cannot risk detection, or he has to act within 24 hours, or he cannot reach out to the state-owned telecommunications company for help.

John is a government-backed digital attacker. He sits in an office building somewhere, at a desk. Maybe this is the job he wanted when he was growing up, or maybe it was a way to pay the bills and stretch his technical muscles. He probably has plans for the weekend.

Let’s say, for the sake of this example, that John’s target is Henry, in the same country as John. John’s goal is to copy all the information on Henry’s computer without being detected. John can get help from other government agencies. There’s no rush.

The first thing to realize is that John, like most people, is a busy guy. He’s not going to do more work than necessary. First, he’ll try to use traditional, straightforward techniques — nothing fancy — and only if those methods fail will he try to be more creative with his attack.

The start of an interesting series from Jigsaw:

A technology incubator at Alphabet that tackles geopolitical problems.

Justin proposes to take us inside the mind of hackers who target journalists.

Understanding the enemy and their likely strategies is a starting place for effective defense/protection.

My only caveat is the description of John as a …government-backed digital attacker….

Could be and increases John’s range of tools but don’t premise any defense on attackers being government-backed.

There are only two types of people in the world:

  1. People who are attacking your system.
  2. People who have not yet attacked your system.

Any sane and useful security policy accounts for both.

I’m looking forward to the next installment in this series.

1 Million Compromised Google Accounts – 86 Gooligan Infected Apps – In Sort Order

Wednesday, November 30th, 2016

“Gooligan” Android Malware Compromised 1 Million Google Accounts by Bogdan Popa.

From the post:

Security experts at Check Point have discovered a new very aggressive form of Android malware that already compromised no less than 1 million Google accounts and which can infect approximately 74 percent of the Android phones currently on the market.

The firm warns that the malware which they call Gooligan is injected into a total of 86 Android apps that are delivered through third-party marketplaces (you can check the full list of apps in the box at the end of the article). Once installed, these apps root the phone to get full access to the device and then attempt to deploy malicious software which can be used to steal authentication tokens for Google accounts.

This pretty much gives the attackers full control over the targeted Google accounts, and as long as vulnerable phones have Gmail, Google Drive, Google Chrome, YouTube, Google Photos, or any other Google app that can be used with an account, there’s a big chance that the attack is successful.
…(emphasis in original)

You can check to see if your account has been breached: Gooligan Checker.

The article also lists the 86 Gooligan infected apps, in no particular order. (Rhetorical questions: Why do people make it difficult for readers? What is their payoff?)

To save you from digging through and possibly missing an infected app, here are the 86 Gooligan infected apps in dictionary order:

  • แข่งรถสุดโหด
  • Assistive Touch
  • ballSmove_004
  • Battery Monitor
  • Beautiful Alarm
  • Best Wallpapers
  • Billiards
  • Blue Point
  • CakeSweety
  • Calculator
  • Chrono Marker
  • Clean Master
  • Clear
  • com.browser.provider
  • com.example.ddeo
  • com.fabullacop.loudcallernameringtone
  • Compass Lite
  • com.so.itouch
  • Daily Racing
  • Demm
  • Demo
  • Demoad
  • Detecting instrument
  • Dircet Browser
  • Fast Cleaner
  • Fingerprint unlock
  • Flashlight Free
  • Fruit Slots
  • FUNNY DROPS
  • gla.pev.zvh
  • Google
  • GPS
  • GPS Speed
  • Hip Good
  • HotH5Games
  • Hot Photo
  • Html5 Games
  • Kiss Browser
  • KXService
  • Light Advanced
  • Light Browser
  • memory booste
  • memory booster
  • Memory Booster
  • Minibooster
  • Multifunction Flashlight
  • Music Cloud
  • OneKeyLock
  • Pedometer
  • Perfect Cleaner
  • phone booster
  • PornClub
  • PronClub
  • Puzzle Bubble-Pet Paradise
  • QPlay
  • SettingService
  • Sex Cademy
  • Sex Photo
  • Sexy hot wallpaper
  • Shadow Crush
  • Simple Calculator
  • Slots Mania
  • Small Blue Point
  • SmartFolder
  • Smart Touch
  • Snake
  • So Hot
  • StopWatch
  • Swamm Browser
  • System Booster
  • Talking Tom 3
  • TcashDemo
  • Test
  • Touch Beauty
  • tub.ajy.ics
  • UC Mini
  • Virtual
  • Weather
  • Wifi Accelerate
  • WiFi Enhancer
  • Wifi Master
  • Wifi Speed Pro
  • YouTube Downloader
  • youtubeplayer
  • 小白点
  • 清理大师
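For list-makers: producing a dictionary order like this takes one line of Python. A sketch (case-insensitive, with non-Latin names falling wherever code-point order puts them rather than following any locale rule):

```python
def sort_apps(names):
    """Case-insensitive dictionary order; ties broken by the original string."""
    return sorted(names, key=lambda n: (n.casefold(), n))


# A few entries from the list above, out of order:
apps = ["memory booste", "Battery Monitor", "com.so.itouch", "Assistive Touch"]
print(sort_apps(apps))
```

Thirty seconds of effort that every reader of the article would have benefited from.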

Visualizing XML Schemas

Tuesday, November 29th, 2016

I don’t have one of the commercial XML packages at the moment and was casting about for a free visualization technique for a large XML schema when I encountered:

[Image: schema visualization]

I won’t be trying it on my schema until tomorrow but I thought it looked interesting enough to pass along.

Further details: Visualizing Complex Content Models with Spatial Schemas by Joe Pairman.

This looks almost teachable.

Thoughts?

Other “free” visualization tools to suggest?
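Until a better suggestion turns up, a zero-cost fallback is to outline a schema’s declared elements yourself. A minimal sketch using only the Python standard library (assuming a schema in the usual XSD namespace; this produces a text outline, not the spatial visualization above):

```python
import xml.etree.ElementTree as ET

XSD = "{http://www.w3.org/2001/XMLSchema}"

def outline(schema_xml):
    """Return the declared elements of an XSD as an indented outline."""
    root = ET.fromstring(schema_xml)
    lines = []

    def walk(node, depth):
        for child in node:
            if child.tag == XSD + "element":
                # Named declaration or a reference to one declared elsewhere.
                name = child.get("name") or child.get("ref", "?")
                lines.append("  " * depth + name)
                walk(child, depth + 1)
            else:
                # complexType, sequence, choice, etc.: descend without indenting.
                walk(child, depth)

    walk(root, 0)
    return "\n".join(lines)
```

Crude, but for a large schema even a flat outline beats scrolling raw XML.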

Gab – Censorship Lite?

Tuesday, November 29th, 2016

I submitted my email today at Gab and got this message:

Done! You’re #1320420 in the waiting list.

Only three rules:

Illegal Pornography

We have a zero tolerance policy against illegal pornography. Such material will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We reserve the right to ban accounts that share such material. We may also report the user to local law enforcement per the advice of our legal counsel.

Threats and Terrorism

We have a zero tolerance policy for violence and terrorism. Users are not allowed to make threats of, or promote, violence of any kind or promote terrorist organizations or agendas. Such users will be instantly removed and the owning account will be dealt with appropriately per the advice of our legal counsel. We may also report the user to local and/or federal law enforcement per the advice of our legal counsel.

What defines a ‘terrorist organization or agenda’? Any group that is labelled as a terrorist organization by the United Nations and/or United States of America classifies as a terrorist organization on Gab.

Private Information

Users are not allowed to post others’ confidential information, including but not limited to, credit card numbers, street numbers, SSNs, without their expressed authorization.

If Gab is listening, I can get the rules down to one:

Court Ordered Removal

When Gab receives an order from a court of competent jurisdiction, at (service address), directing removal of identified, posted content, that content will be removed.

Simple, fair, gets Gab and its staff out of the censorship business and provides a transparent remedy.

At no cost to Gab!

What’s there not to like?

Gab should review my posts: Monetizing Hate Speech and False News and Preserving Ad Revenue With Filtering (Hate As Renewal Resource), while it is in closed beta.

Twitter and Facebook can keep spending uncompensated time and effort trying to be universal and fair censors. Gab has the opportunity to reach up and grab those $100 bills flying overhead for filtered news services.

What is the New York Times if not an opinionated and poorly run filter on all the possible information it could report?

Apply that same lesson to social media!

PS: Seriously, before going public, I would go to the one court-based rule on content. There’s no profit and no wins in censoring any content on your own. Someone will always want more or less. Courts get paid to make those decisions.

Check with your lawyers, but if you don’t look at any content, you can’t be charged with constructive notice of it. Unless and until someone points it out; then you have to follow the DMCA, court orders, etc.

Spies in the Skies [Fostered by Obama, Inherited by Trump]

Tuesday, November 29th, 2016

Spies in the Skies by Peter Aldhous and Charles Seife.

Posted in April of 2016, it reads in part:

Each weekday, dozens of U.S. government aircraft take to the skies and slowly circle over American cities. Piloted by agents of the FBI and the Department of Homeland Security (DHS), the planes are fitted with high-resolution video cameras, often working with “augmented reality” software that can superimpose onto the video images everything from street and business names to the owners of individual homes. At least a few planes have carried devices that can track the cell phones of people below. Most of the aircraft are small, flying a mile or so above ground, and many use exhaust mufflers to mute their engines — making them hard to detect by the people they’re spying on.

The government’s airborne surveillance has received little public scrutiny — until now. BuzzFeed News has assembled an unprecedented picture of the operation’s scale and sweep by analyzing aircraft location data collected by the flight-tracking website Flightradar24 from mid-August to the end of December last year, identifying about 200 federal aircraft. Day after day, dozens of these planes circled above cities across the nation.

The FBI and the DHS would not discuss the reasons for individual flights but told BuzzFeed News that their planes are not conducting mass surveillance.

The DHS said that its aircraft were involved with securing the nation’s borders, as well as targeting drug smuggling and human trafficking, and may also be used to support investigations by the FBI and other law enforcement agencies. The FBI said that its planes are only used to target suspects in specific investigations of serious crimes, pointing to a statement issued in June 2015, after reporters and lawmakers started asking questions about FBI surveillance flights.

“It should come as no surprise that the FBI uses planes to follow terrorists, spies, and serious criminals,” said FBI Deputy Director Mark Giuliano, in that statement. “We have an obligation to follow those people who want to hurt our country and its citizens, and we will continue to do so.”

I’m not surprised the FBI follows terrorists, spies, and serious criminals.

What’s problematic is that the FBI follows all of us and then, after the fact, picks out alleged terrorists, spies and serious criminals.

The FBI could just as easily select people on their way to a tryst with a government official’s wife, or to attend an AA meeting, or to attend an unpopular church.

Once collected, the resulting information is subject to any number of uses and abuses.

Aldhous and Seife report the flights drop 70% on the weekend so if you are up to mischief, plan around your weekends.

When writing about the inevitable surveillance excesses under President Trump, give credit to President Obama and his supporters, who built the surveillance state Trump inherited.

Trump, Twitter and Bullying The Press

Tuesday, November 29th, 2016

Jay Smooth tweeted yesterday:

Keep in mind the purpose of this clown show: the President-Elect of the United States is using twitter to single out & bully a journalist.

Attaching an image that contained tweets 5 through 8 from the following list:

  1. “Nobody should be allowed to burn the American flag – if they do, there must be consequences – perhaps loss of citizenship or year in jail!”
  2. “I thought that @CNN would get better after they failed so badly in their support of Hillary Clinton however, since election, they are worse!”
  3. “The Great State of Michigan was just certified as a Trump WIN giving all of our MAKE AMERICA GREAT AGAIN supporters another victory – 306!”
  4. “@CNN is so embarrassed by their total (100%) support of Hillary Clinton, and yet her loss in a landslide, that they don’t know what to do.”
  5. “@sdcritic: @HighonHillcrest @jeffzeleny @CNN There is NO QUESTION THAT #voterfraud did take place, and in favor of #CorruptHillary !”
  6. “@FiIibuster: @jeffzeleny Pathetic – you have no sufficient evidence that Donald Trump did not suffer from voter fraud, shame! Bad reporter.”
  7. ‘”@JoeBowman12: @jeffzeleny just another generic CNN part time wannabe journalist !” @CNN still doesn’t get it. They will never learn!’
  8. “@HighonHillcrest: @jeffzeleny what PROOF do u have DonaldTrump did not suffer from millions of FRAUD votes? Journalist? Do your job! @CNN”
  9. “Just met with General Petraeus–was very impressed!”
  10. “If Cuba is unwilling to make a better deal for the Cuban people, the Cuban/American people and the U.S. as a whole, I will terminate deal.”

Can Trump bully @jeffzeleny if Jeff and the press aren’t listening?

Jeff filters @realDonaldTrump, excluding any tweets mentioning @jeffzeleny, and subscribes to a similar filter for all journalists’ Twitter handles.

His feed from @realDonaldTrump now reads:

  1. “Nobody should be allowed to burn the American flag – if they do, there must be consequences – perhaps loss of citizenship or year in jail!”
  2. “I thought that @CNN would get better after they failed so badly in their support of Hillary Clinton however, since election, they are worse!”
  3. “The Great State of Michigan was just certified as a Trump WIN giving all of our MAKE AMERICA GREAT AGAIN supporters another victory – 306!”
  4. “@CNN is so embarrassed by their total (100%) support of Hillary Clinton, and yet her loss in a landslide, that they don’t know what to do.”
  5. “Just met with General Petraeus–was very impressed!”
  6. “If Cuba is unwilling to make a better deal for the Cuban people, the Cuban/American people and the U.S. as a whole, I will terminate deal.”

Trump’s tweets still contain enough material for a stand-up routine by a comic or the front page of a newspaper.

On the other hand, shareable user filters starve Trump (and other bullies) of the ability to be bullies.
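The mechanics are trivial. A minimal sketch of handle-based muting (plain strings standing in for tweets, not the Twitter API; the muted-handle list is exactly the kind of data users could share):

```python
def filter_feed(tweets, muted_handles):
    """Drop any tweet that mentions one of the muted handles.
    The handle list is plain data: trivially shared between users."""
    muted = {h.lower() for h in muted_handles}
    return [t for t in tweets if not any(h in t.lower() for h in muted)]


feed = [
    "Just met with General Petraeus--was very impressed!",
    "@jeffzeleny Pathetic - you have no sufficient evidence!",
]
print(filter_feed(feed, ["@jeffzeleny"]))
```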

Why isn’t Twitter doing something as dead simple as user filters that can be shared?

You would have to ask Twitter that question, I certainly don’t know.

CIA Cartography [Comparison to other maps?]

Monday, November 28th, 2016

CIA Cartography

From the webpage:

Tracing its roots to October 1941, CIA’s Cartography Center has a long, proud history of service to the Intelligence Community (IC) and continues to respond to a variety of finished intelligence map requirements. The mission of the Cartography Center is to provide a full range of maps, geographic analysis, and research in support of the Agency, the White House, senior policymakers, and the IC at large. Its chief objectives are to analyze geospatial information, extract intelligence-related geodata, and present the information visually in creative and effective ways for maximum understanding by intelligence consumers.

Since 1941, the Cartography Center maps have told the stories of post-WWII reconstruction, the Suez crisis, the Cuban Missile crisis, the Falklands War, and many other important events in history.

There you will find:

  • Cartography Tools (211 photos)
  • Cartography Maps 1940s (22 photos)
  • Cartography Maps 1950s (14 photos)
  • Cartography Maps 1960s (16 photos)
  • Cartography Maps 1970s (19 photos)
  • Cartography Maps 1980s (12 photos)
  • Cartography Maps 1990s (16 photos)
  • Cartography Maps 2000s (16 photos)
  • Cartography Maps 2010s (15 photos)

The albums have this motto at the top:

CIA Cartography Center has been making vital contributions to our Nation’s security, providing policymakers with crucial insights that simply cannot be conveyed through words alone.

President-elect Trump is said to be gaining foreign intelligence from sources other than his national security briefings. Trump is ignoring daily intelligence briefings, relying on ‘a number of sources’ instead. That report is based on a Washington Post account, which puts its credibility somewhere between a conversation overheard in a laundromat and a stump speech by a member of Congress.

Assuming Trump is gaining intelligence from other sources, just how good are other sources of intelligence?

This release of maps by the CIA, some 130 maps spread from the 1940s to the 2010s, provides one axis for evaluating CIA intelligence versus what was commonly known at the time.

As a starting point, may I suggest: Image matching for historical maps comparison by C. Balletti and F. Guerra, e-Perimetron, Vol. 4, No. 3, 2009, pp. 180-186?

Abstract:

In cartographic heritage we suddenly find maps of the same mapmaker and of the same area, published in different years, or new editions due to integration of cartographic, such us in national cartographic series. These maps have the same projective system and the same cut, but they present very small differences. The manual comparison can be very difficult and with uncertain results, because it’s easy to leave some particulars out. It is necessary to find an automatic procedure to compare these maps and a solution can be given by digital maps comparison.

In the last years our experience in cartographic data processing was opted for find new tools for digital comparison and today solution is given by a new software, ACM (Automatic Correlation Map), which finds areas that are candidate to contain differences between two maps. ACM is based on image matching, a key component in almost any image analysis process.

Interesting paper but it presupposes a closeness of the maps that is likely to be missing when comparing CIA maps to other maps of the same places and time period.
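The core of such comparison can be sketched as windowed normalized cross-correlation: slide a window over two co-registered images and flag windows where the correlation drops. A toy version in pure Python (registration, the genuinely hard part when maps differ in projection or scale, is assumed already done):

```python
import math

def correlation_map(a, b, win=8):
    """Slide a win x win window over two co-registered grayscale maps
    (lists of equal-length rows) and return the normalized
    cross-correlation per window. Low values flag candidate changes."""
    rows, cols = len(a) // win, len(a[0]) // win
    out = [[1.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            pa = [a[i * win + y][j * win + x] for y in range(win) for x in range(win)]
            pb = [b[i * win + y][j * win + x] for y in range(win) for x in range(win)]
            ma, mb = sum(pa) / len(pa), sum(pb) / len(pb)
            pa = [v - ma for v in pa]
            pb = [v - mb for v in pb]
            denom = math.sqrt(sum(v * v for v in pa)) * math.sqrt(sum(v * v for v in pb))
            if denom:
                out[i][j] = sum(x * y for x, y in zip(pa, pb)) / denom
    return out
```

Which illustrates the point above: the correlation only means something once the two maps line up pixel for pixel, a closeness CIA maps and their civilian contemporaries rarely share.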

I am in the process of locating other tools for map comparison.

Any favorites you would like to suggest?

False News: Trump and the Emoluments Clause

Sunday, November 27th, 2016

Numerous false news accounts are circulating about president-elect Trump and the Emoluments Clause.

The story line is that Trump must divest himself of numerous businesses to avoid violating the “Emoluments Clause” of the U.S. Constitution. But when you read the Emoluments Clause:

Clause 8. No Title of Nobility shall be granted by the United States: And no Person holding any Office of Profit or Trust under them, shall, without the Consent of the Congress accept of any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince, or foreign State.

that conclusion is far from clear.

Why would it say: “…without the Consent of Congress….”

That question was answered in 1871 and sheds light on the issue of today:

In 1871 the Attorney General of the United States ruled that: “A minister of the United States abroad is not prohibited by the Constitution from rendering a friendly service to a foreign power, even that of negotiating a treaty for it, provided he does not become an officer of that power . . . but the acceptance of a formal commission, as minister plenipotentiary, creates an official relation between the individual thus commissioned and the government which in this way accredits him as its representative,” which is prohibited by this clause of the Constitution.

Footnote: 13 Ops. Atty. Gen. 538 (1871).

All of that is from: Constitution Annotated | Congress.gov | Library of Congress, in particular: https://www.congress.gov/content/conan/pdf/GPO-CONAN-REV-2016-9-2.pdf.

If you read the Emoluments Clause to prohibit Trump from representing another government, unless Congress consents, it makes sense as written.

Those falsely claiming that Trump must divest himself of his business interests and/or put them in a blind trust under the Emoluments Clause (Lawrence Tribe comes to mind) are thinking of a tradition of presidents using blind trusts.

But tradition doesn’t amend the Constitution.

Any story saying that the Emoluments Clause compels president-elect Trump to either divest himself of assets and/or use a blind trust is false.

PS: I have admired Prof. Lawrence Tribe’s work for years and am saddened that he is willing to sully his reputation in this way.

Ulysses, Joyce and Stanford CoreNLP

Saturday, November 26th, 2016

Introduction to memory and time usage

From the webpage:

People not infrequently complain that Stanford CoreNLP is slow or takes a ton of memory. In some configurations this is true. In other configurations, this is not true. This section tries to help you understand what you can or can’t do about speed and memory usage. The advice applies regardless of whether you are running CoreNLP from the command-line, from the Java API, from the web service, or from other languages. We show command-line examples here, but the principles are true of all ways of invoking CoreNLP. You will just need to pass in the appropriate properties in different ways. For these examples we will work with chapter 13 of Ulysses by James Joyce. You can download it if you want to follow along.

You have to appreciate the use of a non-trivial text for advice on speed and memory usage of CoreNLP.

How does your text stack up against Chapter 13 of Ulysses?
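One rough way to answer is to count what you are about to feed the pipeline. A crude sketch (whitespace and punctuation heuristics, not CoreNLP’s tokenizer, so treat the numbers as ballpark only):

```python
import re

def text_stats(text):
    """Crude token and sentence counts to gauge how much work
    you are handing an NLP pipeline. Not CoreNLP's tokenizer."""
    tokens = re.findall(r"\w+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return len(tokens), len(sentences)
```

If your text dwarfs a single chapter of Ulysses, the memory and annotator advice on that page matters all the more.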

I’m supposed to be reading Ulysses long distance with a friend. I’m afraid we have both fallen behind. Perhaps this will encourage me to have another go at it.

What favorite or “should read” text would you use to practice with CoreNLP?

Suggestions?

Programming has Ethical Consequences?

Friday, November 25th, 2016

Has anyone tracked down the blinding flash that programming has ethical consequences?

Programmers are charged to point out ethical dimensions and issues not noticed by muggles.

This may come as a surprise but programmers in the broader sense have been aware of ethical dimensions to programming for decades.

Perhaps the best known example of a road-to-Damascus event is the Trinity atomic bomb test in New Mexico, after which Oppenheimer recalled a line from the Bhagavad Gita:

“Now I am become Death, the destroyer of worlds.”

To say nothing of the programmers who labored for years to guarantee world wide delivery of nuclear warheads in 30 minutes or less.

But it isn’t necessary to invoke a nuclear Armageddon to find ethical issues that have faced programmers prior to the current ethics frenzy.

Any guesses as to how red line maps were created?

Do you think “red line” maps just sprang up on their own? Or was someone collecting, collating and analyzing the data, much as we would do now but more slowly?

Every act of collecting, collating and analyzing data, now with computers, can and probably does have ethical dimensions and issues.

Programmers can and should raise ethical issues, especially when they may be obscured or clouded by programming techniques or practices.

However, programmers announcing ethical issues to their less fortunate colleagues isn’t likely to lead to a fruitful discussion.

China Gets A Facebook Filter, But Not You

Thursday, November 24th, 2016

Facebook ‘quietly developing censorship tool’ for China by Bill Camarda.

From the post:


That’s one take on the events that might have led to today’s New York Times expose: it seems Facebook has tasked its development teams with “quietly develop[ing] software to suppress posts from appearing in people’s news feeds in specific geographic areas”.

As “current and former Facebook employees” told the Times, Facebook wouldn’t do the suppression themselves, nor need to. Rather:

It would offer the software to enable a third party – in this case, most likely a partner Chinese company – to monitor popular stories and topics that bubble up as users share them across the social network… Facebook’s partner would then have full control to decide whether those posts should show up in users’ feeds.

This is a step beyond the censorship Facebook has already agreed to perform on behalf of governments such as Turkey, Russia and Pakistan. In those cases, Facebook agreed to remove posts that had already “gone live”. If this software were in use, offending posts could be halted before they ever appeared in a local user’s news feed.

You can’t filter your own Facebook timeline or share your filter with other Facebook users, but the Chinese government can filter the timelines of 721,000,000+ internet users?

My proposal for Facebook filters would generate income for Facebook, filter writers and enable the 3,600,000,000+ internet users around the world to filter their own content.
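A shareable filter need be nothing more than a serialized ruleset anyone can publish, sell, or subscribe to. A minimal sketch (a hypothetical format, not anything Facebook offers):

```python
import json

def export_filter(blocked_terms):
    """Serialize a personal filter so it can be shared (or sold)."""
    return json.dumps({"version": 1, "blocked": sorted(blocked_terms)})

def apply_filter(filter_json, posts):
    """Apply a shared filter to a feed; each user opts in to their own."""
    blocked = set(json.loads(filter_json)["blocked"])
    return [p for p in posts if not (blocked & set(p.lower().split()))]
```

The design point: the platform ships the mechanism, while users and filter writers supply the policy. No central censor required.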

All of Zuckerberg’s ideas:

Stronger detection. The most important thing we can do is improve our ability to classify misinformation. This means better technical systems to detect what people will flag as false before they do it themselves.

Easy reporting. Making it much easier for people to report stories as fake will help us catch more misinformation faster.

Third party verification. There are many respected fact checking organizations and, while we have reached out to some, we plan to learn from many more.

Warnings. We are exploring labeling stories that have been flagged as false by third parties or our community, and showing warnings when people read or share them.

Related articles quality. We are raising the bar for stories that appear in related articles under links in News Feed.

Disrupting fake news economics. A lot of misinformation is driven by financially motivated spam. We’re looking into disrupting the economics with ads policies like the one we announced earlier this week, and better ad farm detection.

Listening. We will continue to work with journalists and others in the news industry to get their input, in particular, to better understand their fact checking systems and learn from them.

Enthrone Zuckerberg as Censor of the Internet.

His blinding lust to be Censor of the Internet* is responsible for Zuckerberg passing up $millions if not $billions in filtering revenue.

Facebook shareholders should question this loss of revenue at every opportunity.

* Zuckerberg’s “lust” to be “Censor of the Internet” is an inference based on the Facebook centered nature of his “ideas” for dealing with “fake news.” Unpaid censorship instead of profiting from user-centered filtering is a sign of poor judgment and/or madness.

Fake News Is Not the Only Problem

Thursday, November 24th, 2016

Fake News Is Not the Only Problem by Gilad Lotan.

From the post:

There have been so many conversations on the impact of fake news on the recent US elections. An already polarized public is pushed further apart by stories that affirm beliefs or attack the other side. Yes. Fake news is a serious problem that should be addressed. But by focusing solely on that issue, we are missing the larger, more harmful phenomenon of misleading, biased propaganda.

It’s not only fringe publications. Think for a moment about the recent “Hamilton”-Pence showdown. What actually happened there? How disrespectful was the cast towards Mike Pence? Was he truly being “Booed Like Crazy” as the Huffington Post suggests? The short video embedded in that piece makes it seem like it. But this video on ABC suggests otherwise. “There were some cheers and some boos,” says Pence himself.

In an era of post-truth politics, driven by the 24-hour news cycle, diminishing trust in institutions, rich visual media, and the ubiquity and velocity of social networked spaces, how do we identify information that is tinted — information that is incomplete, that may help affirm our existing beliefs or support someone’s agenda, or that may be manipulative — effectively driving a form of propaganda?

Biased information — misleading in nature, typically used to promote or publicize a particular political cause or point of view — is a much more prevalent problem than fake news. It’s a problem that doesn’t exist only within Facebook but across social networks and other information-rich services (Google, YouTube, etc.).

A compelling piece of work, but I disagree that biased information “…is a much more prevalent problem than fake news.”

I don’t disagree with Lotan’s “facts.” I would go further and say all information is “biased,” from one viewpoint or another.

Collecting, selecting and editing information are done by biased individuals to attract biased audiences, audiences who drive the production of content they find agreeable.

Non-news example: How long would a classical music record label survive insisting its purchasers enjoy rap music?

At least if they were attempting to use a classical music mailing list for their records?

To blame “news/opinion” writers for bias is akin to shooting the messenger.

A messenger who is delivering the content readers requested.

Take Lotan’s example of providing more “context” for a story drawn from the Middle East:


A more recent example from the Middle East is that of Ahmed Manasra, a 13-year old Palestinian-Israeli boy who stabbed a 13-year old Israeli Jew in Jerusalem last Fall. A video [warning: graphic content] that was posted to a public Facebook page shows Mansara wounded, bleeding, and being cursed at by an Israeli. It was viewed over 2.5M times with the following caption:

Israeli Zionists curse a dying Palestinian child as Israeli Police watch…. His name was Ahmad Manasra and his last moments were documented in this video.

But neither the caption nor the video itself presents the full context. Just before Manasra was shot, he stabbed a few passersby, as well as a 13-year old Israeli Jew. Later, he was taken to a hospital.

Lotan fails to mention Ahmad Manasra’s actions were in the context of a decades old, systematic campaign by the Israeli government (not the Israeli people) to drive Palestinians from illegally occupied territory. A campaign in which thousands of Palestinians have died, homes and olive groves have been destroyed, etc.

Bias? Context? Your call.

Whichever way you classify my suggested “additional” context for the story of Ahmad Manasra, it will be considered a needed correction by some and bias by others.

In his conclusion, Lotan touches ever so briefly on the issue uppermost in my mind when discussing “fake” or “biased” content:


There are other models of automated filtering and downgrading for limiting the spread of misleading information (the Facebook News Feed already does plenty of filtering and nudging). But again, who decides what’s in or out, who governs? And who gets to test the potential bias of such an algorithmic system?

In a nutshell: who governs?

Despite the unquestioned existence of “false,” “fake,” “biased,” and “misleading” information, the question “who governs?” has only one acceptable answer:

No one.

Enabling readers to discover, if they wish, alternative accounts, or in the view of some, more complete or contextual accounts? Great! We have the beginnings of the technology to do so.

A story could be labeled “false” or “fake” by NPR, and if you subscribe to NPR labeling, that label appears in your browser. Perhaps I subscribe to Lady Gaga labeling, which has no opinion on that story, while unfortunate subscribers to National Review labeling see a large green $$$ or whatever it is they use to show approval.
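That subscription model is simple enough to prototype. Here is a minimal sketch, assuming a labeler publishes a plain tab-separated file of story URLs and labels; the file name, URLs, and format are all invented for illustration:

```shell
# Hypothetical labeler feed: one "URL<TAB>label" line per story.
printf '%s\t%s\n' \
  "https://example.com/story-1" "fake" \
  "https://example.com/story-2" "verified" > npr-labels.tsv

# Client-side check: look up the label your chosen labeler assigned.
url="https://example.com/story-1"
label=$(awk -F'\t' -v u="$url" '$1 == u { print $2 }' npr-labels.tsv)
echo "label: ${label:-no opinion}"   # label: fake
```

Swap in a different labels file and the same story renders differently, which is the point: the subscriber, not the platform, chooses the arbiter.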

I fear censors far more than any form or degree of “false,” “fake,” “biased,” “misleading,” information.

You should too.

Learning R programming by reading books: A book list

Thursday, November 24th, 2016

Learning R programming by reading books: A book list by Liang-Cheng Zhang.

From the post:

Despite R’s popularity, it is still very daunting to learn R as R has no click-and-point feature like SPSS and learning R usually takes lots of time. No worries! As self-R learner like us, we constantly receive the requests about how to learn R. Besides hiring someone to teach you or paying tuition fees for online courses, our suggestion is that you can also pick up some books that fit your current R programming level. Therefore, in this post, we would like to share some good books that teach you how to learn programming in R based on three levels: elementary, intermediate, and advanced levels. Each level focuses on one task so you will know whether these books fit your needs. While the following books do not necessarily focus on the task we define, you should focus the task when you reading these books so you are not lost in contexts.

Books and reading form the core of my most basic prejudice: Literacy is the doorway to unlimited universes.

A prejudice so strong that I have to work hard at realizing non-literates live in and sense worlds not open to literates. Not less complex, not poorer, just different.

But book lists in particular appeal to that prejudice and since my blog is read by literates, I’m indulging that prejudice now.

I do have a title to add to the list: Practical Data Science with R by Nina Zumel and John Mount.

Judging from the other titles listed, Practical Data Science with R falls in the intermediate range. Should not be your first R book but certainly high on the list for your second R book.

Avoid the rush! Start working on your Amazon wish list today! 😉

NPR Posts “Fake News” Criticism of “Fake News”

Wednesday, November 23rd, 2016

There may be others but this is the first “fake news” story that I have seen that is critical of “fake news.” At least by NPR.

Students Have ‘Dismaying’ Inability To Tell Fake News From Real, Study Finds by Camila Domonoske

Domonoske does a credible summary of the executive summary, a single paragraph of which should have been enough for NPR to opt out of presenting this story:


When we began our work we had little sense of the depth of the problem. We even found ourselves rejecting ideas for tasks because we thought they would be too easy. Our first round of piloting shocked us into reality. Many assume that because young people are fluent in social media they are equally savvy about what they find there. Our work shows the opposite. We hope to produce a series of high-quality web videos to showcase the depth of the problem revealed by students’ performance on our tasks and demonstrate the link between digital literacy and citizenship. By drawing attention to this connection, a series of videos could help to mobilize educators, policymakers, and others to address this threat to democracy.

A comparison of the NPR coverage with the executive summary shows the article reflects the steps taken by the study but never questions the study’s conclusion that an inability to assess online information is indeed a “threat to democracy.”

To support that conclusion, which earned this story a spot on NPR, the researchers would need historical data on how well or poorly students assessed sources of information at other periods in American history, an assessment of “democracy” at those times, and a demonstration of a causal relationship between the two.

But as you can see from the NPR article, Domonoske fails to ask the most rudimentary questions about this study, such as:

“Is there a relationship between democracy and the ability to evaluate sources of information?”

Or, “What historical evidence demonstrates a relationship between democracy and the ability to evaluate sources of information?”

Utter silence on the part of Domonoske.

The real headline for a follow-up on this story should be:

NPR Reporter Unable To Distinguish Credible Research From Headline Driven Reports.

I’m going to be listening for that report.

Are you?

“sexy ads or links” – Facebook can’t catch a break

Wednesday, November 23rd, 2016

The Fact Checker’s guide for detecting fake news by Glenn Kessler.

Glenn’s post isn’t an outright attack on Facebook, which has been standard fare at the New York Times since Donald Trump’s election. How long the Times is going to sulk over its rejection by most Americans isn’t clear.

Glenn descends into the sulking with the Times when he writes:


Look at the ads

A profusion of pop-up ads or other advertising indicates you should handle the story with care. Another sign is a bunch of sexy ads or links, designed to be clicked — “Celebs who did Porn Movies” or “Naughty Walmart Shoppers Who have no Shame at All” — which you generally do not find on legitimate news sites.

The examples read like Facebook ad headlines, and Glenn knows that.

Rather than saying “Facebook,” Glenn wants you to conclude that “on your own.” (An old manipulation/propaganda technique.)

Glenn’s “read the article closely” was #4, coming in after #1, “determine whether the article is from a legitimate website,” #2, “Check the ‘contact us’ page,” or #3, “examine the byline of the reporter and see whether it makes sense.”

How To Recognize A Fake News Story has “read past the headline” first.

Even “legitimate websites” make mistakes, omit facts, and sometimes are mis-led by governments and others.

Read content critically, even content about spotting “fake news.”

1,198 Free High Resolution Maps of U.S. National Parks

Wednesday, November 23rd, 2016

1,198 Free High Resolution Maps of U.S. National Parks

From the post:

I cannot, and do not wish to, imagine the U.S. without its National Park system. The sale and/or despoliation of this more than 80 million acres of mountain, forest, stream, ocean, geyser, cavern, canyon, and every other natural formation North America contains would diminish the country immeasurably. “National parks,” wrote novelist Wallace Stegner, “are the best idea we ever had. Absolutely American, absolutely democratic, they reflect us at our best rather than our worst.”

Stegner’s quote—which gave Ken Burns’ National Parks documentary its subtitle–can sound overoptimistic when we study the parks’ history. Though not officially designated until the 20th century, the idea stretches back to 1851, when a battalion, intent on finding and destroying an Indian village, also found Yosemite. Named for what the soldiers thought was the tribe they killed and burned, the word actually translates as “they are killers.”

Westward expansion and the annexation of Hawaii have left us many sobering stories like that of Yosemite’s “discovery.” And during their development in the early- to mid-20th century, the parks often required the mass displacement of people, many of whom had lived on the land for decades—or centuries. But despite the bloody history, the creation of these sanctuaries have preserved the country’s embarrassment of natural beauty and irreplaceable biodiversity for a century now. (The National Park Service celebrated its 100th anniversary just this past August.)

The National Park Service and its allies have acted as bulwarks against privateers who would turn places like Yosemite into prohibitively expensive resorts, and perhaps fell the ancient Redwood National forests or blast away the Smokey Mountains. Instead, the parks remain “absolutely democratic,” open to all Americans and international visitors, the pride of conservationists, scientists, hikers, bird watchers, and nature-lovers of all kinds. Given the sprawling, idealistic, and violent history of the National Parks, it may be fair to say that these natural preserves reflect the country at both its worst and its best. And in that sense, they are indeed “absolutely American.”

Links to numerous resources, including National Parks Maps. (Home of 1,198 free high resolution maps of U.S. national parks.)

The national parks of the United States were born in violence and disenfranchisement of the powerless. It is beyond our power to atone for those excesses and injuries done in the past.

It is our task to preserve those parks both as monuments to our violence against the powerless and as natural treasures for all humanity.

Taping Donald, Melania, Mike and others

Wednesday, November 23rd, 2016

Just in time for a new administration: Great. Now even your headphones can spy on you by Andy Greenberg.

From the post:

CAUTIOUS COMPUTER USERS put a piece of tape over their webcam. Truly paranoid ones worry about their devices’ microphones—some even crack open their computers and phones to disable or remove those audio components so they can’t be hijacked by hackers. Now one group of Israeli researchers has taken that game of spy-versus-spy paranoia a step further, with malware that converts your headphones into makeshift microphones that can slyly record your conversations.

Researchers at Israel’s Ben Gurion University have created a piece of proof-of-concept code they call “Speake(a)r,” designed to demonstrate how determined hackers could find a way to surreptitiously hijack a computer to record audio even when the device’s microphones have been entirely removed or disabled. The experimental malware instead repurposes the speakers in earbuds or headphones to use them as microphones, converting the vibrations in air into electromagnetic signals to clearly capture audio from across a room.

“People don’t think about this privacy vulnerability,” says Mordechai Guri, the research lead of Ben Gurion’s Cyber Security Research Labs. “Even if you remove your computer’s microphone, if you use headphones you can be recorded.”

But the Ben Gurion researchers took that hack a step further. Their malware uses a little-known feature of RealTek audio codec chips to silently “retask” the computer’s output channel as an input channel, allowing the malware to record audio even when the headphones remain connected into an output-only jack and don’t even have a microphone channel on their plug. The researchers say the RealTek chips are so common that the attack works on practically any desktop computer, whether it runs Windows or MacOS, and most laptops, too. RealTek didn’t immediately respond to WIRED’s request for comment on the Ben Gurion researchers’ work. “This is the real vulnerability,” says Guri. “It’s what makes almost every computer today vulnerable to this type of attack.”

(emphasis in original)

Wired doesn’t give up any more details but that should be enough to get you started.

You must search for RealTek audio codec datasheets. RealTek wants a signed NDA from a development partner before you can access the datasheets.

Among numerous others, I know for a fact that datasheets on ALC655, ALC662, ALC888, ALC1150, and ALC5631Q are freely available online.

You will have to replicate the hack but then:

  1. Choose your targets for taping
  2. Obtain their TV/music preferences from Amazon, etc.
  3. License new content (would not want to upset the RIAA) for web streaming
  4. Offer your target the “latest” TV/music by (name) for free 30 day trial

For the nosy non-hacker, expect to see “hacked” earphones for sale on the Dark Web.

Perhaps even in time for holiday shopping!

Warning: Hacking or buying hacked headphones is a violation of any number of federal, state and local laws, depending on your jurisdiction.

PS: I am curious if the mic in cellphones is subject to a similar hack.

Perhaps this is the dawning of the age of transparency. 😉

Comic Book Security

Wednesday, November 23rd, 2016

The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book Narratives by Mohit Iyyer, et al.

Abstract:

Visual narrative is often a combination of explicit information and judicious omissions, relying on the viewer to supply missing details. In comics, most movements in time and space are hidden in the “gutters” between panels. To follow the story, readers logically connect panels together by inferring unseen actions through a process called “closure”. While computers can now describe the content of natural images, in this paper we examine whether they can understand the closure-driven narratives conveyed by stylized artwork and dialogue in comic book panels. We collect a dataset, COMICS, that consists of over 1.2 million panels (120 GB) paired with automatic textbox transcriptions. An in-depth analysis of COMICS demonstrates that neither text nor image alone can tell a comic book story, so a computer must understand both modalities to keep up with the plot. We introduce three cloze-style tasks that ask models to predict narrative and character-centric aspects of a panel given n preceding panels as context. Various deep neural architectures underperform human baselines on these tasks, suggesting that COMICS contains fundamental challenges for both vision and language.

From the introduction:

[image: sample comic book panels from the COMICS dataset]

Comics are fragmented scenes forged into full-fledged stories by the imagination of their readers. A comics creator can condense anything from a centuries-long intergalactic war to an ordinary family dinner into a single panel. But it is what the creator hides from their pages that makes comics truly interesting, the unspoken conversations and unseen actions that lurk in the spaces (or gutters) between adjacent panels. For example, the dialogue in Figure 1 suggests that between the second and third panels, Gilda commands her snakes to chase after a frightened Michael in some sort of strange cult initiation. Through a process called closure [40], which involves (1) understanding individual panels and (2) making connective inferences across panels, readers form coherent storylines from seemingly disparate panels such as these. In this paper, we study whether computers can do the same by collecting a dataset of comic books (COMICS) and designing several tasks that require closure to solve.

(emphasis in original)

Comic book security: A method for defeating worldwide data slurping and automated analysis.

The authors find that human results easily exceed automated analysis, raising the possibility of using a mixture of text and images to evade widespread data sweeps.

Security based on a lack of human eyes to review content is chancy but depending upon your security needs, it may be sufficient.

For example, a cartoon in a local newspaper that designates a mission target and time, only needs to be secure from the time of its publication until the mission has finished. That it is discovered days, weeks or even months later, doesn’t impact the operational security of the mission.

The data set of cartoons is available at: http://github.com/miyyer/comics.

Guaranteed, algorithmic security is great, but hiding in gaps of computational ability may be just as effective.

Enjoy!

How To Recognize A Fake News Story

Wednesday, November 23rd, 2016

How To Recognize A Fake News Story by Nick Robins-Early.

A handy “fake news” graphic:

[image: “How To Recognize A Fake News Story” infographic]

Even if Facebook, Twitter, etc., eventually take up my idea of shareable content filters, you should evaluate all stories (including mine) with the steps in this graphic.

Short form: Don’t be a passive consumer of content. Engage with content. Question its perspective, what was left unsaid, sources that were or were not relied upon, etc.

Your ignorance is your own and no one can fix that other than you.

The 10 Commandments of Exfiltration

Tuesday, November 22nd, 2016

‘Perfect’ Data Exfiltration Demonstrated by Larry Loeb.

From the post:

The 10 Commandments of Exfiltration

Following the experiment, the researchers came up with a technique of exfiltration based on their newly established 10 commandments. According to the SafeBreach presentation, these commandments are:

  1. No security through obscurity should be used.
  2. Only Web browsing and derived traffic is allowed.
  3. Anything that may theoretically be perceived as passing information is forbidden.
  4. Scrutinize every packet during comprehensive network monitoring.
  5. Assume TLS/SSL termination at the enterprise level.
  6. Assume the receiving party has no restrictions.
  7. Assume no nation-state or third-party site monitoring.
  8. Enable time synchronization between the communicating parties.
  9. There’s bonus points for methods that can be implemented manually from the sender side.
  10. Active disruption by the enterprise is always possible.

The technique discussed is criticized as “low bandwidth” but then I think, how much bandwidth does it take to transmit an admin login and password?

Definitely worth a slow read.

Other contenders for similar 10 commandments of exfiltration?

As a trivial example, consider a sender who leaves work every day at the same time through a double door. If they exit to their right, it is a 0 and if they exit to their left, it is a 1. Perhaps only on set days of the week or month.

Very low bandwidth but as I said, for admin login/password, it would be sufficient.
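The arithmetic of that channel is easy to check. A sketch (illustration only, using the standard od and awk tools) encodes a short secret as the daily exit-door sequence, one bit per day, R for 0 and L for 1:

```shell
# Encode a short secret as the exit-door sequence described above:
# one bit per day, R = 0, L = 1, most significant bit first.
secret="pw"
seq=$(printf '%s' "$secret" | od -An -v -tu1 | tr -s ' ' '\n' |
  awk 'NF { for (b = 7; b >= 0; b--)
              printf "%s", (int($1 / 2^b) % 2) ? "L" : "R" }')
echo "$seq"                       # RLLLRRRRRLLLRLLL
echo "days required: ${#seq}"     # days required: 16
```

Eight days per character, so even a short admin password costs a couple of months of punctual exits. Very low bandwidth indeed, but the monitoring side has nothing to scrutinize.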

How imaginative is your exfiltration security?

Egyptological Museum Search

Tuesday, November 22nd, 2016

Egyptological Museum Search

From the post:

The Egyptological museum search is a PHP tool aimed to facilitate locating the descriptions and images of ancient Egyptian objects in online catalogues of major museums. Online catalogues (ranging from selections of highlights to complete digital inventories) are now offered by almost all major museums holding ancient Egyptian items and have become indispensable in research work. Yet the variety of web interfaces and of search rules may overstrain any person performing many searches in different online catalogues.

Egyptological museum search was made to provide a single search point for finding objects by their inventory numbers in major collections of Egyptian antiquities that have online catalogues. It tries to convert user input into search queries recognised by museums’ websites. (Thus, for example, stela Geneva D 50 is searched as “D 0050,” statue Vienna ÄS 5046 is searched as “AE_INV_5046,” and coffin Turin Suppl. 5217 is searched as “S. 05217.”) The following online catalogues are supported:

The search interface uses a short list of aliases for museums.

Once you see/use the interface proper, here, I hope you are interested in volunteering to improve it.

Manipulate XML Text Data Using XQuery String Functions

Tuesday, November 22nd, 2016

Manipulate XML Text Data Using XQuery String Functions by Adam Steffanick.

Adam’s notes from the Vanderbilt University XQuery Working Group:

We evaluated and manipulated text data (i.e., strings) within Extensible Markup Language (XML) using string functions in XQuery, an XML query language, and BaseX, an XML database engine and XQuery processor. This tutorial covers the basics of how to use XQuery string functions and manipulate text data with BaseX.

We used a limited dataset of English words as text data to evaluate and manipulate, and I’ve created a GitHub gist of XML input and XQuery code for use with this tutorial.

A quick run-through of basic XQuery string functions that takes you up to writing your own XQuery function.

While we wait for more reports from the Vanderbilt University XQuery Working Group, have you considered using XQuery to impose different views on a single text document?

For example, some Bible translations follow the “traditional” chapter and verse divisions (a very late addition), while others use paragraph-level organization and largely ignore the traditional verses.

Creating a view of a single source text as either one or both should not involve permanent changes to a source file in XML. Or at least not the original source file.

If for processing purposes there was a need for a static file rendering one way or the other, that’s doable but should be separate from the original XML file.

Geek Jeopardy – Display Random Man Page

Tuesday, November 22nd, 2016

While writing up Julia Evans’ Things to learn about Linux, I thought it would be cool to display random man pages.

Which resulted in this one-liner in an executable file (man-random, invoke ./man-random):

man $(ls /usr/share/man/man* | shuf -n1 | cut -d. -f1)

As written, it displays a random page from the directories man1 – man8.

If you replace /man* with /man1/, you will only get results for man1 (the usual default).
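One caveat: when the glob matches several directories, ls prints “man1:”-style headers between the listings, and shuf can occasionally pick one of those instead of a page. A slightly more robust variant (a sketch, assuming the conventional /usr/share/man layout):

```shell
# Pick one man page at random across sections 1-8 and display it.
# find avoids the directory headers ls prints for multiple dirs.
page=$(find /usr/share/man/man[1-8] -type f | shuf -n1)
# Strip the path and the section/compression suffixes,
# e.g. "/usr/share/man/man1/ls.1.gz" -> "ls", then display the page.
man "$(basename "$page" | cut -d. -f1)"
```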

All of which made me think of Geek Jeopardy!

Can you name these commands from their first-paragraph descriptions (names omitted)?

  • remove sections from each line of files
  • pattern scanning and processing language
  • stream editor for filtering and transforming text
  • generate random permutations
  • filter reverse line feeds from input
  • dump files in octal and other formats

Looks easy now, but after a few glasses of holiday cheer? With spectators? Ready to try another man page section?

Enjoy!

Solution:

  • cut: remove sections from each line of files
  • awk: pattern scanning and processing language
  • sed: stream editor for filtering and transforming text
  • shuf: generate random permutations
  • col: filter reverse line feeds from input
  • od: dump files in octal and other formats

PS: I changed the wildcard in the fourth suggested solution from “?” to “*” to arrive at my solution. (Ubuntu 14.04)

Things to learn about Linux

Tuesday, November 22nd, 2016

Things to learn about Linux by Julia Evans

From the post:

I asked on Twitter today what Linux things they would like to know more about. I thought the replies were really cool so here’s a list (many of them could be discussed on any Unixy OS, some of them are Linux-specific)

I count forty-seven (47) entries on Julia’s list, which should keep you busy through any holiday!

Enjoy!

The five-step fact-check (Africa Check)

Tuesday, November 22nd, 2016

The five-step fact-check from AfricaCheck

From the post:

Print our useful flow-chart and stick it up in a place where you can quickly refer to it when a deadline is pressing.

[image: Africa Check five-step fact-check flow chart]

Click here to download the PDF for printing.

A great fact checking guide for reporters but useful insight for readers as well.

What’s missing from a story you are reading right now?

AfricaCheck offers to fact check claims about Africa tweeted with: #AfricaCheckIt.

There’s a useful service to the news community!

A quick example, eNCA (South African news site) claimed Zimbabwe’s President Robert Mugabe announced his retirement.

Africa Check responded with Mugabe’s original words plus translation.

I don’t read Mugabe as announcing his retirement but see for yourself.

Advancing exploitation: a scriptless 0day exploit against Linux desktops

Tuesday, November 22nd, 2016

Advancing exploitation: a scriptless 0day exploit against Linux desktops by Chris Evans.

From the post:

A powerful heap corruption vulnerability exists in the gstreamer decoder for the FLIC file format. Presented here is an 0day exploit for this vulnerability.

This decoder is generally present in the default install of modern Linux desktops, including Ubuntu 16.04 and Fedora 24. Gstreamer classifies its decoders as “good”, “bad” or “ugly”. Despite being quite buggy, and not being a format at all necessary on a modern desktop, the FLIC decoder is classified as “good”, almost guaranteeing its presence in default Linux installs.

Thanks to solid ASLR / DEP protections on the (some) modern 64-bit Linux installs, and some other challenges, this vulnerability is a real beast to exploit.

Most modern exploits defeat protections such as ASLR and DEP by using some form of scripting to manipulate the environment and make dynamic decisions and calculations to move the exploit forward. In a browser, that script is JavaScript (or ActionScript etc.) When attacking a kernel from userspace, the “script” is the userspace program. When attacking a TCP stack remotely, the “script” is the program running on the attacker’s computer. In my previous full gstreamer exploit against the NSF decoder, the script was an embedded 6502 machine code program.

But in order to attack the FLIC decoder, there simply isn’t any scripting opportunity. The attacker gets, once, to submit a bunch of scriptless bytes into the decoder, and try and gain code execution without further interaction…

… and good luck with that! Welcome to the world of scriptless exploitation in an ASLR environment. Let’s give it our best shot.

Above my head, at the moment, but I post it as a test for hackers who want to test their understanding/development of exploits.

BTW, some wag (I didn’t bother to see which one) complained Chris’ post is “irresponsible disclosure.”

Sure, the CIA, FBI, NSA and their counterparts in other governments, plus their cybersecurity contractors, should have sole access to such exploits. Ditto for the projects concerned. (NOT!)

“Responsible disclosure” is just another name for unilateral disarmament, on behalf of all of us.

Open and public discussion is much better.

Besides, a hack of Ubuntu 16.04 won’t be relevant at most government installations for years.

Plenty of time for a patched release. 😉

Practical Palaeography: Recreating the Exeter Book in a Modern Day ‘Scriptorium’

Tuesday, November 22nd, 2016

Practical Palaeography: Recreating the Exeter Book in a Modern Day ‘Scriptorium’

From the post:

Dr Johanna Green is a lecturer in Book History and Digital Humanities at the University of Glasgow. Her PhD (English Language, University of Glasgow 2012) focused on a palaeographical study of the textual division and subordination of the Exeter Book manuscript. Here, she tells us about the first of two sessions she led for the Society of Northumbrian Scribes, a group of calligraphers based in North East England, bringing palaeographic research and modern-day calligraphy together for the public.
(emphasis in original)

Not phrased in subject identity language, but concerns familiar to the topic map community are not far away:


My own research centres on the scribal hand of the manuscript, specifically the ways in which the poems are divided and subdivided from one another and the decorative designs used for these litterae notabiliores throughout. For much of my research, I have spent considerable time (perhaps more than I am willing to admit) wondering where one ought to draw the line with palaeography. When do the details become so tiny to no longer be of any significance? When are they just important enough to mean something significant for our understanding of how the manuscript was created and arranged? How far am I willing to argue that these tiny features have significant impact? Is, for example, this littera notabilior Đ on f. 115v (Judgement Day I, left) different enough in a significant way to this H on f.97v, (The Partridge, bottom right), and in turn are both of these litterae notabiliores performing a different function than the H on f.98r (Soul and Body II, far right)?[5]
(emphasis in original, footnote omitted)

When Dr. Green says:

…When do the details become so tiny to no longer be of any significance?…

I would say: When do the subjects (details) become so tiny we want to pass over them in silence? That is, they could be, but are not, represented in a topic map.

Green ends her speculation, to a degree, by enlisting scribes to re-create the manuscript of interest under her observation.

I’ll leave her conclusions for her post but consider a secondary finding:


The experience also made me realise something else: I had learned much by watching them write and talking to them during the process, but I had also learned much by trying to produce the hand myself. Rather than return to Glasgow and teach my undergraduates the finer details of the script purely through verbal or written description, perhaps providing space for my students to engage in the materials of manuscript production, to try out copying a script/exemplar for themselves would help increase their understanding of the process of writing and, in turn, deepen their knowledge of the constituent parts of a letter and their significance in palaeographic endeavour. This last is something I plan to include in future palaeography teaching.

Dr. Green’s concern over palaeographic detail illustrates two important points about topic maps:

  1. Potential subjects for a topic map are always unbounded.
  2. Different people “see” different subjects.

Which also accounts for my yawn when Microsoft drops the Microsoft Concept Graph of more than 5.4 million concepts.

…[M]ore than 5.4 million concepts[?]

Hell, Copleston’s multi-volume A History of Philosophy easily has more concepts.

But the Microsoft Concept Graph is more useful than a topic map of Copleston in your daily, shallow, social sea.

What subjects do you see and how would capturing them and their identities make a difference in your life (professional or otherwise)?