Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

June 8, 2016

Car Thieves Get A Bump In Social Status

Filed under: Cybersecurity,Security — Patrick Durusau @ 4:48 pm

Car thieves generally have low social status. After all, for the most part they “hot-wire,” find cars with the keys left in them, or more recently, resort to carjacking. None of which requires any degree of intelligence and/or organization.

That may be about to change, at least for modern car thieves.

Top Story: 100,000 cars can be hacked – Is yours being recalled? by Kelli Uhrich reports:


Drivers of the Mitsubishi Outlander Hybrid could be vulnerable to being hacked through the car’s Wi-Fi console. A flaw was recently discovered that could allow hackers to disable the alarm before the car was stolen.

Beyond disabling the alarm, cyberattackers could drain the car’s battery life, and even start the vehicle on some models.

Researchers discovered the vulnerability because of how the car’s Wi-Fi module works. Rather than using a GSM module, the Mitsubishi Outlander allows mobile devices to connect to the car by hosting its own Wi-Fi access point. This means your device must first disconnect from any other networks to connect.

It took researchers less than four days to hack into the system and explore the potential destruction hackers could create.

The first hack took researchers less than four days, but with the hints in Kelli’s post, I suspect a trained researcher could do it considerably faster.

Within a few months, Mitsubishi Outlander “apps” will appear that let you open and start any Outlander on a parking lot.

How impressive will that be for your date? Just pick one.

Not that you should take a vehicle other than your own joy-riding, but considerate joy-riders leave the vehicle clean, unlocked and in a well-lighted location.

Intelligence Suicide By Data

Filed under: FBI,Government,Intelligence,NSA — Patrick Durusau @ 4:33 pm

Facing Data Deluge, Secret U.K. Spying Report Warned of Intelligence Failure by Ryan Gallagher.

From the post:


The amount of data being collected, however, proved difficult for MI5 to handle. In March 2010, in another secret report, concerns were reiterated about the agency’s difficulties processing the material it was harvesting. “There is an imbalance between collection and exploitation capabilities, resulting in a failure to make effective use of some of the intelligence collected today,” the report noted. “With the exception of the highest priority investigations, a lack of staff and tools means that investigators are presented with raw and unfiltered DIGINT data. Frequently, this material is not fully assessed because of the significant time required to review it.”

It is ironic that this story appears less than two (2) weeks after reports of the FBI seeking NSL (national security letter) authority to obtain email records and browsing histories.

[Image: gun suicide silhouette]

I should not complain about the FBI, NSA and other government agencies committing intelligence suicide by data.

Their rapidly growing ineffectiveness shields innocents from their paranoid fantasies.

At the same time, that ineffectiveness inhibits the performance of legitimate purposes. (The FBI, once upon a time, had a legitimate purpose; some of the others, well, that’s an issue for debate.)

So we are clear, I don’t consider contracts for “butts in seats” for either contractors or agencies to be for “legitimate purposes.” I reserve the phrase “legitimate purposes” for activities that further the stated goals of the agency, not padding staffing rolls, not occupying as much office space as possible, not having the most forms or whatever other criteria functions as the measure of success in a particular agency.

Hints for federal agencies already committing intelligence suicide by data or approaching that point:

  1. What data sources have proven valuable in the past? (Reminder: Phone metadata records have not. Not ever.)
  2. What data sources, in order of historical importance, are available in case X?
  3. Assemble the data from the top-performing sources.

For example, if an informant has direct contact with an alleged Islamic State supporter, isn’t that the best source of evidence for their plans and thinking? Do you really need their web search history from an internet service provider? Especially considering that you will have to ask for everyone’s web search history to avoid disclosing the particular history you are seeking.

To be sure, vendors will sell you as much data processing and storage capacity as you care to purchase, but you won’t be any closer to stopping terrorism. Just closer to the end of your budget for the current fiscal year.

Is intelligence suicide by data a goal of your agency?

Finding/Verifying YouTube Videos

Filed under: Journalism,News,Reporting — Patrick Durusau @ 3:09 pm

5 free tools for finding and verifying YouTube videos in news by Alastair Reid.

From the post:

With more than 500 hours of video uploaded to YouTube every minute, Google’s video platform is still the most popular in the world for publicly sharing videos with the rest of humanity.

Granted, some of them may be teenagers playing computer games, unboxing consumer goods or just accidentally filming their feet, but YouTube is a vital resource for eyewitness media around news stories. Here are some tools to bear in mind for finding and verifying such footage.

I maintain an internal webpage with links grouped by categories. Bookmarks are too easy to forget and why bother with searching?

All five of these links will be clustered under a YouTube videos category.

Enjoy!

PS: If you work in one of those organizations where sharing isn’t all that odd, consider having a communal internal webpage for common resources.
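If you want to try the communal page approach, here is a minimal sketch of one way to generate such a page. The categories, link names, URLs and output filename are hypothetical placeholders for whatever your team actually uses.

# Minimal sketch: build a static internal links page from a categorized
# dictionary. Categories, URLs, and the output path are placeholder examples.
from html import escape

LINKS = {
    "YouTube videos": [
        ("Video verification tool", "https://example.com/youtube-data-viewer"),
        ("Frame-by-frame viewer", "https://example.com/frame-viewer"),
    ],
    "FOIA resources": [
        ("Records request tracker", "https://example.com/foia-tracker"),
    ],
}

def build_page(links, title="Team Resources"):
    # Emit a plain HTML page: one heading per category, one bullet per link.
    parts = [f"<html><head><title>{escape(title)}</title></head><body>",
             f"<h1>{escape(title)}</h1>"]
    for category, entries in sorted(links.items()):
        parts.append(f"<h2>{escape(category)}</h2><ul>")
        for name, url in entries:
            parts.append(f'<li><a href="{escape(url)}">{escape(name)}</a></li>')
        parts.append("</ul>")
    parts.append("</body></html>")
    return "\n".join(parts)

if __name__ == "__main__":
    with open("resources.html", "w", encoding="utf-8") as fh:
        fh.write(build_page(LINKS))

Drop the output file on any internal web server and everyone on the team shares one set of categorized links.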

AI Cultist On Justice System Reform

Filed under: Artificial Intelligence,BigData,Machine Learning — Patrick Durusau @ 2:31 pm

White House Challenges Artificial Intelligence Experts to Reduce Incarceration Rates by Jason Shueh.

From the post:

The U.S. spends $270 billion on incarceration each year, has a prison population of about 2.2 million and an incarceration rate that’s spiked 220 percent since the 1980s. But with the advent of data science, White House officials are asking experts for help.

On Tuesday, June 7, the White House Office of Science and Technology Policy’s Lynn Overmann, who also leads the White House Police Data Initiative, stressed the severity of the nation’s incarceration crisis while asking a crowd of data scientists and artificial intelligence specialists for aid.

“We have built a system that is too large, and too unfair and too costly — in every sense of the word — and we need to start to change it,” Obermann said, speaking at a Computing Community Consortium public workshop.

She argued that the U.S., a country that has the highest amount incarcerated citizens in the world, is in need of systematic reforms with both data tools to process alleged offenders and at the policy level to ensure fair and measured sentences. As a longtime counselor, advisor and analyst for the Justice Department and at the city and state levels, Overman said she has studied and witnessed an alarming number of issues in terms of bias and unwarranted punishments.

For instance, she said that statistically, while drug use is about equal between African Americans and Caucasians, African Americans are more likely to be arrested and convicted. They also receive longer prison sentences compared to Caucasian inmates convicted of the same crimes.

Other problems, Oberman said, are due to inflated punishments that far exceed the severity of crimes. She recalled her years spent as an assistant public defender for Florida’s Miami-Dade County Public Defender’s Office as an example.

“I represented a client who was looking at spending 40 years of his life in prison because he stole a lawnmower and a weedeater from a shed in a backyard,” Obermann said, “I had another person who had AIDS and was offered a 15-year sentence for stealing mangos.”

Data and digital tools can help curb such pitfalls by increasing efficiency, transparency and accountability, she said.
… (emphasis added)

A tip for spotting a cultist: before specifying criteria for success, or even understanding the problem, a cultist announces the approach that will succeed.

Calls like this one are a disservice to legitimate artificial intelligence research, to say nothing of experts in criminal justice (unlike Lynn Overmann), who have struggled for decades to improve the criminal justice system.

Yes, Overmann has experience in the criminal justice system, both in legal practice and at a policy level, but that makes her no more of an expert on criminal justice reform than having multiple flat tires makes me an expert on tire design.

Data is not, has not been, nor will it ever be a magic elixir that solves undefined problems posed to it.

White House-sponsored AI cheerleading is a disservice to AI practitioners, to experts in the field of criminal justice reform and, more importantly, to those impacted by the criminal justice system.

Substitute meaningful problem definitions for the AI pom-poms if this is to be more than resume padding and currying favor with contractors.

The anatomy of online deception:… [ Statistics Can Be Deceiving – Even In Academic Papers]

Filed under: Deception,Machine Learning — Patrick Durusau @ 12:16 pm

The anatomy of online deception: what makes automated text convincing? by Richard M. Everett, Jason R. C. Nurse, Arnau Erola.

Abstract:

Technology is rapidly evolving, and with it comes increasingly sophisticated bots (i.e. software robots) which automatically produce content to inform, influence, and deceive genuine users. This is particularly a problem for social media networks where content tends to be extremely short, informally written, and full of inconsistencies. Motivated by the rise of bots on these networks, we investigate the ease with which a bot can deceive a human. In particular, we focus on deceiving a human into believing that an automatically generated sample of text was written by a human, as well as analysing which factors affect how convincing the text is. To accomplish this, we train a set of models to write text about several distinct topics, to simulate a bot’s behaviour, which are then evaluated by a panel of judges. We find that: (1) typical Internet users are twice as likely to be deceived by automated content than security researchers; (2) text that disagrees with the crowd’s opinion is more believably human; (3) light-hearted topics such as Entertainment are significantly easier to deceive with than factual topics such as Science; and (4) automated text on Adult content is the most deceptive regardless of a user’s background.

The statistics presented are impressive:


We found that automated text is twice as likely to deceive Internet users than security researchers. Also, text that disagrees with the Crowd’s opinion increases the likelihood of deception by up to 78%, while text on light-hearted Topics such as Entertainment increases the likelihood by up to 85%. Notably, we found that automated text on Adult content is the most deceptive for both typical Internet users and security researchers, increasing the likelihood of deception by at least 30% compared to other Topics on average. Together, this shows that it is feasible for a party with technical resources and knowledge to create an environment populated by bots that could successfully deceive users.
… (at page 1120)

To evaluate those statistics, consider the judging panels that created the supporting data:


To evaluate this test dataset, a panel of judges is used where every judge receives the entire test set with no other accompanying data such as Topic and Crowd opinion. Then, each judge evaluates the comments based solely on their text and labels each as either human or bot, depending who they believe wrote it. To fill this panel, three judges were selected – in keeping with the average procedure of the work highlighted by Bailey et al. [2] – for two distinct groups:

  • Group 1: Three cyber security researchers who are actively involved in security work with an intimate knowledge of the Internet and its threats.
  • Group 2: Three typical Internet users who browse social media daily but are not experienced with technology or security, and therefore less aware of the threats.

… (pages 1117-1118)

The paper reports evaluations of human vs. machine-generated text, across topics, by six (6) people.

I’m suddenly less impressed than I hoped to be from reading the abstract.

A more informative title would have been: 6 People Classify Machine/Human Generated Reddit Comments.
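To see why three judges per group should temper enthusiasm for the headline percentages, here is a rough sketch that treats each judge’s overall deception rate as one observation and computes a t-based 95% confidence interval per group. The per-judge rates below are hypothetical illustrations, not numbers from the paper.

# Rough sketch: with only 3 judges per group, per-group estimates carry wide
# uncertainty. The per-judge rates are made-up illustrations, not the paper's data.
from statistics import mean, stdev
from math import sqrt

groups = {
    "Internet users":       [0.55, 0.40, 0.65],  # hypothetical deception rates
    "Security researchers": [0.25, 0.20, 0.35],
}

T_975_DF2 = 4.303  # two-sided 95% t critical value, n - 1 = 2 degrees of freedom

for name, rates in groups.items():
    m, s, n = mean(rates), stdev(rates), len(rates)
    half_width = T_975_DF2 * s / sqrt(n)
    print(f"{name}: mean deception rate {m:.2f} +/- {half_width:.2f} (95% CI, n={n} judges)")

With inputs like these, the plus-or-minus terms come out near 0.2 to 0.3 on a 0-to-1 scale, which is why fine-grained comparisons such as “up to 78%” versus “up to 85%” deserve caution.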

To their credit, the authors were explicit about the judging panels in their study.

I am forced to conclude that peer review wasn’t used for SAC 2016, the 31st ACM Symposium on Applied Computing, or that its peer reviewers left a great deal to be desired.

As a conference goer, would you be interested in human/machine judgments of six unknown panelists?

Hacker puppets explain why malware and popups are still a thing online

Filed under: Cybersecurity,Humor — Patrick Durusau @ 10:08 am

Hacker puppets explain why malware and popups are still a thing online by Cory Doctorow.

This may not improve security at your office but at least it is an entertaining lesson on cybersecurity.

The intern memorizing the name of their anti-virus software is insufficient.

For virus notifications of any kind:

  1. Close browser
  2. Run your anti-virus software

Do not depend on names in notifications.

People who will lie about “anti-virus” warnings will also lie about software names.

For all virus warnings, run your anti-virus software. If true, your software will find them again.

June 7, 2016

Doctorow on Encrypted Media Extensions (EME) @ W3C and DRM

Filed under: Fair Use,Government,Intellectual Property (IP) — Patrick Durusau @ 7:17 pm

Cory Doctorow outlines the important public policy issues semi-hidden in W3C efforts to standardize Encrypted Media Extensions (EME).

I knew I would agree with Cory’s points, more or less, before even reading the post. But I also knew that many of his points, if not all, aren’t going to be persuasive to some in the DRM discussion.

If you already favor reasonable accommodation between consumers of content and rightsholders, recognition of “fair use,” and allowances for research and innovation, enjoy Cory’s post and do what you can to support the EFF and others in this particular dispute.

If you are currently a rightsholder and strong supporter of DRM, I don’t think Cory’s post is going to be all that persuasive.

Rather than focusing on public good, research, innovation, etc., I have a very different argument for rightsholders, who I distinguish from people who will profit from DRM and its implementations.

I will lay out all the nuances either tomorrow or the next day, but the crux of my argument is the question: “What is the ROI for rightsholders from DRM?”

You will be able to satisfy yourself of my analysis, using your own confidential financial statements. The real ones, not the ones you show the taxman.

To be sure, someone intends to profit from DRM and its implementation, but it isn’t who you think it is.

In the meantime, enjoy Cory’s post!

June 6, 2016

Public Bounty Launch Newsletter (Are Hackers, Bugs or Both Dense?)

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:42 pm

Public Bounty Launch Newsletter

From the webpage:

Sign-up to receive an email when a new public bounty launches or when a bounty increases their high-end reward amount.

Bounty announcement for web, mobile, IoT, automotive, and network/host.

Looking a bit further, this is from bugcrowd, whose what-we-do page reports:

IT TAKES A CROWD TO BEAT A CROWD

Companies are in an unfair fight when it comes to cybersecurity. Regardless of how robust security efforts are, companies will always be outnumbered by the thousands of malicious hackers worldwide. We bring thousands of good hackers to the fight, helping companies even the odds and find bugs before the bad guys do.

As of today, fifty-four (54) current programs, 28 for rewards, 26 for points and 1 for charity.

Bugcrowd has attracted non-trivial venture capital (a $15M Series B), so take that as a positive sign.

An interesting twist on Schneier’s question: How Many Vulnerabilities Are there in Software?

Bugcrowd proposes that a density of “good hackers” is more useful than current software practices in detecting vulnerabilities.

What density of “good hackers” is required, for what types of software, what rewards are required to attract that density of “good hackers,” etc., remain open questions.

However, given the record of software vulnerabilities to this point, bugcrowd’s density of “good hackers” approach could hardly do worse than current practices.

Personally I think rewards need to increase to the point where “good hackers” can make a reasonable living.

Bettering software for the “common good” doesn’t pay utility bills or mortgage notes.
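To make “a reasonable living” concrete, here is a back-of-envelope sketch. Every number in it is a made-up assumption, not bugcrowd data.

# Back-of-envelope sketch: what average bounty would full-time bug hunting
# need to pay a living wage? All inputs are hypothetical assumptions.
def required_average_bounty(target_income, valid_bugs_per_month, platform_fee=0.0):
    """Average payout per accepted bug needed to hit a yearly income target."""
    yearly_bugs = valid_bugs_per_month * 12
    gross_needed = target_income / (1.0 - platform_fee)
    return gross_needed / yearly_bugs

if __name__ == "__main__":
    # e.g. an $80,000/year target and no platform cut
    for bugs in (2, 4, 8):
        avg = required_average_bounty(80_000, bugs)
        print(f"{bugs} accepted bugs/month -> ${avg:,.0f} average bounty needed")

Unless a hunter lands several accepted bugs every month, bounty averages in the low hundreds of dollars simply don’t add up to a salary.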

Liability for selling or using vulnerable software would help drive a rewards based “good hacker” economy.

Breaking Californication (An Act Performed On The Public)

Filed under: Government,Government Data,Transparency — Patrick Durusau @ 4:43 pm

Law Enforcement Lobby Succeeds In Killing California Transparency Bill by Kit O’Connell.

From the post:

A California Senate committee killed a bill to increase transparency in police misconduct investigations, hampering victims’ efforts to obtain justice.

Chauncee Smith, legislative advocate at the ACLU of California, told MintPress News that the state Legislature “caved to the tremendous influence and power of the law enforcement lobby” and “failed to listen to the demands and concerns of everyday Californian people.”

California has some of the most secretive rules in the country when it comes to investigations into police misconduct and excessive use of force. Records are kept sealed, regardless of the outcome, as the ACLU of Northern California explains on its website:

“In places like Texas, Kentucky, and Utah, peace officer records are made public when an officer is found guilty of misconduct. Other states make records public regardless of whether misconduct is found. This is not the case in California.”

“Right now, there is a tremendous cloud of secrecy that is unparalleled compared to many other states,” Smith added. “California is in the minority in which the public do not know basic information when someone is killed or potentially harmed by those are sworn to serve and protect them.”

In February, Sen. Mark Leno, a Democrat from San Francisco, introduced SB 1286, the “Enhance Community Oversight on Police Misconduct and Serious Uses of Force” bill. It would have allowed “public access to investigations, findings and discipline information on serious uses of force by police” and would have increased transparency in other cases of police misconduct, according to an ACLU fact sheet. Polling data cited by the ACLU suggests about 80 percent of Californians would support the measure.

But the bill’s progress through the legislature ended on May 27, when it failed to pass out of the Senate Appropriations committee.

“Today is a sad day for transparency, accountability, and justice in California,” said Peter Bibring, police practices director for the ACLU of California, in a May 27 press release.

Mistrust between police officers and citizens makes the job of police officers more difficult and dangerous, while denying citizens the full advantages of a trained police force, paid for by their tax dollars.

The state legislature, having found that sowing and fueling mistrust between police officers and citizens has election upsides, fans those flames with secrecy over police misconduct investigations.

Open proceedings, not secret (read: grand jury) ones, where witnesses can be fairly examined (unlike the deliberately thrown Michael Brown investigation), can go a long way toward re-establishing trust between the police and the public.

Members of the community know when someone was a danger to police officers and others, whether their family members will admit it or not. Likewise, police officers know which officers are far too quick to escalate to deadly force. Want better community policing? Want better citizen cooperation? That’s not going to happen with completely secret police misconduct investigations.

So the State of California is going to collect the evidence, statements, etc., in police misconduct investigations, but won’t share that information with the public. At least not willingly.

Official attempts to break illegitimate government secrecy failed. Even if they had succeeded, you’d be paying at least $0.25 per page plus a service fee.

Two observations about government networks:

  • Secret (and otherwise) government documents are usually printed on networked printers.
  • Passively capturing Ethernet traffic (network tap) captures printer traffic too.

Whistleblowers don’t have to hack heavily monitored systems or steal logins/passwords; leaking illegally withheld documents is within the reach of anyone who can plug in an Ethernet cable.

There’s a bit more to it than that, but remember all those network cables running through the ceiling, walls and closets the next time your security consultant assures you of your network’s security.
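If you want to test that assurance on a network you are responsible for, here is a minimal audit sketch using scapy. It sticks to the plaintext printing protocols (raw JetDirect on port 9100 and LPD on 515); IPP on 631 may be TLS-protected, your printers may use different ports entirely, and sniffing requires administrative privileges.

# Minimal audit sketch: watch for unencrypted print jobs on a network you are
# authorized to monitor. Port numbers are common defaults and may differ.
from scapy.all import sniff, IP, TCP, Raw  # pip install scapy

PRINT_PORTS = {9100, 515}  # JetDirect raw, LPD

def report(pkt):
    # Flag any TCP payload headed to a plaintext printing port.
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        if pkt[TCP].dport in PRINT_PORTS:
            size = len(bytes(pkt[Raw].load))
            print(f"{pkt[IP].src} -> {pkt[IP].dst}:{pkt[TCP].dport} "
                  f"{size} bytes of print traffic in the clear")

if __name__ == "__main__":
    sniff(filter="tcp port 9100 or tcp port 515", prn=report, store=False)

If that script shows anything at all on your wire, so would a tap.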

As a practical matter, if you start leaking party menus and football pools, someone will start looking for a network tap.

Leak when it makes a significant difference to public discussion and/or legal proceedings. Even then, look for ways to attribute the leak to factions within the government.

Remember the DoD’s amused reaction to State’s huffing and puffing over the Afghan diplomatic cables? That sort of rivalry exists at every level of government. You should use it to your advantage.

The State of California would have you believe that government information sharing is at its sufferance.

I beg to differ.

So should you.

June 5, 2016

Software Carpentry Bug BBQ (June 13th, 2016)

Filed under: Programming,Research Methods,Researchers,Science — Patrick Durusau @ 9:02 pm

Software Carpentry Bug BBQ

From the post:

Software Carpentry is having a Bug BBQ on June 13th

Software Carpentry is aiming to ship a new version (5.4) of the Software Carpentry lessons by the end of June. To help get us over the finish line we are having a Bug BBQ on June 13th to squash as many bugs as we can before we publish the lessons. The June 13th Bug BBQ is also an opportunity for you to engage with our world-wide community. For more info about the event, read-on and visit our Bug BBQ website.

How can you participate? We’re asking you, members of the Software Carpentry community, to spend a few hours on June 13th to wrap up outstanding tasks to improve the lessons. Ahead of the event, the lesson maintainers will be creating milestones to identify all the issues and pull requests that need to be resolved we wrap up version 5.4. In addition to specific fixes laid out in the milestones, we also need help to proofread and bugtest the lessons.

Where will this be? Join in from where you are: No need to go anywhere – if you’d like to participate remotely, start by having a look at the milestones on the website to see what tasks are still open, and send a pull request with your ideas to the corresponding repo. If you’d like to get together with other people working on these lessons live, we have created this map for live sites that are being organized. And if there’s no site listed near you, organize one yourself and let us know you are doing that here so that we can add your site to the map!

The Bug BBQ is going to be a great chance to get the community together, get our latest lessons over the finish line, and wrap up a product that gives you and all our contributors credit for your hard work with a citable object – we will be minting a DOI for this on publication.

A community BBQ that is open to everyone, dietary restrictions or not!

And the organizers have removed distance as a consideration for “attending.”

For those of us on non-BBQ diets, a unique opportunity to participate with others in the community for a worthy cause.

Mark your calendars today!

EU Plays Whack-a-Mole with URLs (RTBF)

Filed under: EU,Privacy — Patrick Durusau @ 10:44 am

Researchers Uncover a Flaw in Europe’s Tough Privacy Rules by Mark Scott.

From the post:

Europe likes to think it leads the world in protecting people’s privacy, and that is particularly true for the region’s so-called right to be forgotten. That legal right allows people connected to the Continent to ask the likes of Google to remove links about themselves from online search results, under certain conditions.

Yet that right — one of the world’s most widespread efforts to protect people’s privacy online — may not be as effective as many European policy makers think, according to new research by computer scientists based, in part, at New York University.

The academic team, which also included experts from the Federal University of Minas Gerais in Brazil, said that in roughly a third of the cases examined, the researchers were able to discover the names of people who had asked for links to be removed. Those results, based on the researchers’ use of basic coding, came despite the individuals’ expressed efforts to remove their names from online searches.

The findings, which had not previously been made public and will be presented at an academic conference next month, raise questions about how successful Europe’s “right to be forgotten” can be if people’s identities can still be found with just a few clicks of a mouse. The paper says such breaches may undermine “the spirit” of the legal ruling.

From the positive conclusions on the Right to Be Forgotten (RTBF) by the paper authors:


We end this paper with a few opinions and recommendations based on the results and observations of this paper. After having studied RTBF and its consequences from a data perspective, the authors feel that RTBF has been largely working and responding to legitimate privacy concerns of many Europeans. We feel that Google’s process for determining which links should be delisted seems fair and reasonable. We feel that Google is being fairly transparent about how it processes RTBF requests [13]. Other academics have called more transparency [12]. However, by being more specific about how delisting decisions are made, it may become easier for the attacker to rediscover delisted URLs and the corresponding requesters.

I have to conclude they are collectively innocent of reading George Orwell’s 1984.

…if all records told the same tale — then the lie passed into history and became truth. ‘Who controls the past,’ ran the Party slogan, ‘controls the future: who controls the present controls the past.’ (George Orwell, 1984, Part 1, Chapter 2)

The paper does expose that the EU’s efforts to control the past are akin to playing whack-a-mole with URLs:

[Embedded video: whack-a-mole]

Except that unlike the video, the EU doesn’t play very well.

As the paper outlines in some detail, delisting isn’t the same thing as making all records tell the same tale.

Not only can you discover the “delisted,” you can often find evidence of who requested the “delisting.”
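The core of the rediscovery technique reduces to a set comparison: run the same query through a search front-end that honors EU delisting and through one that does not, and look at what only the second one returns. A toy sketch, with placeholder URLs and none of the real collection work:

# Toy sketch of the rediscovery idea. The result lists are hypothetical
# placeholders; in practice they would come from EU and non-EU search results
# for the same query.
def suspected_delistings(eu_results, non_eu_results):
    """URLs visible outside the EU but missing from the EU results."""
    return sorted(set(non_eu_results) - set(eu_results))

eu_results = [
    "https://example.com/profile/jane-doe",
    "https://example.org/press-release",
]
non_eu_results = eu_results + [
    "https://example.net/2012/court-report-jane-doe",  # candidate delisting
]

for url in suspected_delistings(eu_results, non_eu_results):
    print("possible delisted URL:", url)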

If “delisting” at Google becomes commonplace it will create opportunities for new web services. A web service that accepts URLs and passes through the content, annotated with Google Delisted Content – Suspected Delister: (delister’s name and current twitter handle).

1984 did not end well.

For a different (not necessarily better) outcome, resist all attempts to control the past, or at least to make it harder to discover.

June 4, 2016

Surprise, Surprise, Surprise! Hacker Details Leaked (along with others)

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:10 pm

Sh0ping.su Hacked, Thousands of Credit Cards and Accounts Leaked by Wagas.

From the post:

The year 2016 has been hard on internet users and websites alike since more than 1,076 data breaches have occurred. The latest one is ShOping.su previously known as ShOping.net, a Dark Net platform where hackers and cyber criminals sell hacked and stolen accounts. Recently, someone decided to take care of the stolen data stored on ShOping.su’s server by stealing thousands of accounts and putting it for sale online – But days after the hackers decided to leak the data to the public.

The hackers behind the leak claim to have leaked 16,000 ShOping.su’s registered accounts, 15,000 user accounts which were stolen from other sites and stored on the hacked servers and around 9000 credit card data. Hacked-DB, the data mining company who first discovered the data contacted HackRead with an in-depth analysis according to them the leaked data is legit and stolen from platforms across the web. The dumped data contains 16,566 user accounts with email addresses and their encrypted passwords, 9,000 accounts from platforms like Uber, cPanel, WebMail, GoDaddy, Twitter, PayPal, Amazon and more. (The 9,000 accounts were available on ShOping.su for sale.)

The analysis also revealed sensitive data dumped containing personal and credit card data of 5,000 users including ID card numbers, social security numbers, credit card numbers along with their CVV codes, type of card, zip code, users’ date of births, name or the state and city, phone numbers, usernames, email addresses, price and date of purchased.

“Hacked-DB has detected a data breach on ShOping.su website. The leaked data contains user account information and full credit card details, credit number, CVV, expiration date, holder name, credit type etc. the website was down for maintenance after the data breach but now it is back online,” said the company’s representatives.

Just for conversation, let’s assume you want to traffic in stolen credit card information. Not that you would but just for conversation.

Question: As you are about to engage in credit card fraud, would you record your true name, address, credit card information on a website that traffics in stolen credit card information?

Question: Have you read any stories lately about credit card information being stolen from websites?

Question: Do you have any uneasiness about sharing your credit card information on a site owned and operated by self-professed criminals?

Unless all 16,000 of ShOping.su’s registered accounts turn out to have 1060 West Addison (Wrigley Field, Chicago) addresses and equally bogus other information, whatever happens to ShOping.su’s clients is well deserved.

Law enforcement agencies are likely sharing that data by geographic areas even as I write this post.

If you ever decide to become a criminal (not recommended), try to follow Dogbert’s advice in this chain letter cartoon.

Learning to like design documents

Filed under: Programming,Software,Software Engineering — Patrick Durusau @ 8:41 pm

Learning to like design documents by Julia Evans.

From the post:

Hi everyone! Today we’re going to talk about software engineering and process!

A design document is where, before starting to implement a system, you write up a thing explaining what the system is supposed to do first and how you’re planning to accomplish that. I think there are basically two goals:

  • tell people what you’re doing
  • figure out design problems with the system before you’ve been coding for 2 months

I understand that it’s super important to think ahead a lot before huge projects, but a little bit of thinking can be helpful even for smaller projects. I asked some people recently if they write design docs for small projects and some of them said “yeah totally! small ones! it helps! :D”.

I used to get kind of grumpy when someone was like “hey julia can you write a design document for your system?” It would seem like a reasonable idea, though, so I’d try to do it! But the first couple of times I tried to write one I felt like it didn’t actually really help me! I liked the idea in principle, but I didn’t really know how to apply it and I felt like it was hard to get good feedback.

Last week I wrote a design doc and I thought it was sort of helpful. Here are some current thoughts.

Be forewarned that Julia is a gifted writer and you will enjoy her posts more than your design documents. 😉

Still, Julia makes a great case for the use of design documents (a/k/a “documentation”).

Unless your job security is tied up in undocumented, spaghetti COBOL code (or its equivalent in another language), try putting Julia’s advice into action.

If you are looking for really broad but practical reading in programming, check out Julia’s list of all her posts. Pick one at random every week. You won’t be disappointed.

Universal Windows Hack, Going Once – $95K, Going Twice – $90K, Free at Exploit.in?

Filed under: Cybersecurity,Microsoft — Patrick Durusau @ 4:45 pm

Swati Khandelwal reports a universal Windows hack in Hackers Selling Unpatched Microsoft Windows Zero-Day Exploit for $90,000.

John McAfee tweeted today the hack is free on Exploit.in.

[Image: screenshot of John McAfee’s tweet]

I know John is busy, running for U.S. president and all that stuff, but how long does it take to paste in a link?

I visited Exploit.in today and paged back to 01 May 2016 (the original report was 11 May 2016).

Nothing that I could identify as the hack, free or otherwise.

You?

PS: If you make factual claims on Twitter (read anywhere), include a link/citation. It will save everyone time and effort.

Unless your purpose is to waste the time/effort of others.

PPS: I nearly posted without including the image of John’s post. Including the image saves you from searching Twitter to see if John really posted such a claim. At least if you are willing to accept it’s not faked in some way (it’s not).

It took an extra minute or two, but multiply that by the number of users who might otherwise search. That’s how much time including the image has saved.

New York Fed As Vending Machine

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:32 am

Cybersecurity at the Federal Reserve Bank of New York is said to equal:

[Image: Superman with the key to his Fortress of Solitude]

But as Krishna N. Das and Jonathan Spicer report in Exclusive: NY Fed first rejected cyber-heist transfers, then moved $81 million, it is more akin to putting slugs into a vending machine.

A “slug” is a metal object about the size and weight of a coin that can be placed in a vending machine, some of which will credit you as if money were deposited. Condom vending machines are frequent victims of “slugs.”

[Image: condom vending machine]

Das and Spicer report that the Fed rejected improperly formatted requests, only to pay out $101 million upon resubmission later that day, $20 million of which was reversed due to a misspelling.

After reading Das and Spicer, should the Superman fortress-key image or the condom vending machine image appear on the next cybersecurity report for the Federal Reserve Bank of New York?

I’m voting for the condom vending machine.

You?

June 3, 2016

Deep Learning Trends @ ICLR 2016 (+ Shout-Out to arXiv)

Filed under: Deep Learning,Machine Learning,Neural Networks — Patrick Durusau @ 7:12 pm

Deep Learning Trends @ ICLR 2016 by Tomasz Malisiewicz.

From the post:

Started by the youngest members of the Deep Learning Mafia [1], namely Yann LeCun and Yoshua Bengio, the ICLR conference is quickly becoming a strong contender for the single most important venue in the Deep Learning space. More intimate than NIPS and less benchmark-driven than CVPR, the world of ICLR is arXiv-based and moves fast.

Today’s post is all about ICLR 2016. I’ll highlight new strategies for building deeper and more powerful neural networks, ideas for compressing big networks into smaller ones, as well as techniques for building “deep learning calculators.” A host of new artificial intelligence problems is being hit hard with the newest wave of deep learning techniques, and from a computer vision point of view, there’s no doubt that deep convolutional neural networks are today’s “master algorithm” for dealing with perceptual data.

An information-packed review of the conference, and if that weren’t enough, this shout-out to arXiv:


ICLR Publishing Model: arXiv or bust
At ICLR, papers get posted on arXiv directly. And if you had any doubts that arXiv is just about the single awesomest thing to hit the research publication model since the Gutenberg press, let the success of ICLR be one more data point towards enlightenment. ICLR has essentially bypassed the old-fashioned publishing model where some third party like Elsevier says “you can publish with us and we’ll put our logo on your papers and then charge regular people $30 for each paper they want to read.” Sorry Elsevier, research doesn’t work that way. Most research papers aren’t good enough to be worth $30 for a copy. It is the entire body of academic research that provides true value, for which a single paper just a mere door. You see, Elsevier, if you actually gave the world an exceptional research paper search engine, together with the ability to have 10-20 papers printed on decent quality paper for a $30/month subscription, then you would make a killing on researchers and I would endorse such a subscription. So ICLR, rightfully so, just said fuck it, we’ll use arXiv as the method for disseminating our ideas. All future research conferences should use arXiv to disseminate papers. Anybody can download the papers, see when newer versions with corrections are posted, and they can print their own physical copies. But be warned: Deep Learning moves so fast, that you’ve gotta be hitting refresh or arXiv on a weekly basis or you’ll be schooled by some grad students in Canada.

Is your publishing < arXiv?

Do you hit arXiv every week?
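If you do want to hit arXiv every week, the public arXiv API makes it a short script. Here is a sketch; the category (cs.LG) and result count are just example choices.

# Short sketch: list the newest submissions in an arXiv category via the
# public arXiv API (an Atom feed). Category and max_results are examples.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def latest(category="cs.LG", max_results=10):
    query = urllib.parse.urlencode({
        "search_query": f"cat:{category}",
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    url = f"http://export.arxiv.org/api/query?{query}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    for entry in feed.findall(f"{ATOM}entry"):
        title = " ".join(entry.findtext(f"{ATOM}title").split())
        link = entry.findtext(f"{ATOM}id")
        print(f"{title}\n  {link}")

if __name__ == "__main__":
    latest()

Run it from cron on Monday mornings and you will never be schooled by those grad students in Canada.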

Newspaper Publishers Protecting Consumers (What?)

Filed under: Ad Targeting,Publishing — Patrick Durusau @ 4:28 pm

Newspaper industry asks FTC to investigate “deceptive” adblockers by John Zorabedian.

From the post:

Fearing that online publishers may be on the losing side of their battle with commercial adblockers, the newspaper publishing industry is now seeking relief from the US government.

The Newspaper Association of America (NAA), an industry group representing 2000 newspapers, filed a complaint with the US Federal Trade Commission (FTC) asking the consumer watchdog to investigate adblocker companies’ “deceptive” and “unlawful” practices.

The NAA is not alleging that adblockers themselves are illegal – rather, it says that adblocker companies make misleading claims about their products, a violation of the Federal Trade Commission Act.

Do you feel safer knowing the Newspaper Association of America (NAA) is protecting you from deceptive ads by adblocker companies?

A better service would be to protect consumers from deceptive ads in their publications but I suppose that would be a conflict of interest.

The best result would be for the FTC to declare you can display (or not) content received on your computer any way you like.

You cannot, of course, re-transmit that content, but if a user chooses to combine your content with that of another site, that is entirely on their watch.

Ad-blocking, transformation of lawfully delivered content, including merging of content, are rights that every user should enjoy.

Reproducible Research Resources for Research(ing) Parasites

Filed under: Open Access,Open Data,Open Science,Research Methods,Researchers,Science — Patrick Durusau @ 3:58 pm

Reproducible Research Resources for Research(ing) Parasites by Scott Edmunds.

From the post:

Two new research papers on scabies and tapeworms published today showcase a new collaboration with protocols.io. This demonstrates a new way to share scientific methods that allows scientists to better repeat and build upon these complicated studies on difficult-to-study parasites. It also highlights a new means of writing all research papers with citable methods that can be updated over time.

While there has been recent controversy (and hashtags in response) from some of the more conservative sections of the medical community calling those who use or build on previous data “research parasites”, as data publishers we strongly disagree with this. And also feel it is unfair to drag parasites into this when they can teach us a thing or two about good research practice. Parasitology remains a complex field given the often extreme differences between parasites, which all fall under the umbrella definition of an organism that lives in or on another organism (host) and derives nutrients at the host’s expense. Published today in GigaScience are articles on two parasitic organisms, scabies and on the tapeworm Schistocephalus solidus. Not only are both papers in parasitology, but the way in which these studies are presented showcase a new collaboration with protocols.io that provides a unique means for reporting the Methods that serves to improve reproducibility. Here the authors take advantage of their open access repository of scientific methods and a collaborative protocol-centered platform, and we for the first time have integrated this into our submission, review and publication process. We now also have a groups page on the portal where our methods can be stored.

A great example of how sharing data advances research.

Of course, that assumes that one of your goals is to advance research and not solely yourself, your funding and/or your department.

Such self-centered as opposed to research-centered individuals do exist, but I would not malign true parasites by describing them as such, even colloquially.

The days of science data hoarders are numbered and one can only hope that the same is true for the “gatekeepers” of humanities data, manuscripts and artifacts.

The only known contribution of hoarders or “gatekeepers” has been to the retarding of their respective disciplines.

Given the choice of advancing your field along with yourself, or only yourself, which one will you choose?

On-Again/Off-Again Democracy In New York

Filed under: FOIA,Government,Security — Patrick Durusau @ 3:22 pm

Andrew Denney reports in Panel Supports City’s Denial of Data on NYPD Surveillance that the NYPD can refuse to acknowledge the existence of records requested under the state equivalent of FOIA.

From the post:

Police properly applied a legal doctrine allowing it to refuse to acknowledge the existence of records, requested under state Freedom of Information Law, that related to surveillance programs, a Manhattan appeals court found.

The ruling by the Appellate Division, First Department, settles a dispute between two trial judges who disagreed in 2014 as to whether the New York City Police Department could use the “Glomar Doctrine.” The policy allows federal departments to cite security concerns to neither confirm nor deny the existence of records requested under the federal Freedom of Information Act.

The doctrine is named for an inquiry into a salvage operation of a Soviet nuclear submarine by a ship named the Hughes Glomar Explorer.

An NYPD spokesman commented:

“We are all safer because of this ruling, which confirms that the NYPD is not required to reveal the targets of counterterrorism surveillance,” department spokesman Nicholas Paolucci said.

I would agree with Paolucci had he said:

“Illegal, unauthorized and abusive ‘counterterrorism surveillance’ will be safer because of this ruling.”

National (think FBI) and local law enforcement authorities have long histories of illegal misconduct, a large amount of which is only discovered years or even decades later. There is no reason to believe that “counterterrorism surveillance” is any less prone to similar abuses.

Without public oversight and transparency, “counterterrorism surveillance” is a recipe for ongoing abuse of rights.

Having denied the access needed for meaningful public oversight, the courts and NYPD should not complain about uncontrolled releases of the same information.

When faced with an on-again/off-again democracy, what alternative does the public have?

I first saw this in a tweet by North Star Post.

Weekend Hacking Homework: “Irongate” (SWIFT)

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:10 am

‘Irongate’ attack looks like Stuxnet, quacks like Stuxnet … by Darren Pauli.

From the post:

FireEye threat researchers have found a complex malware instance that borrows tricks from Stuxnet and is specifically designed to work on Siemens industrial control systems.

Josh Homan, Sean McBride, and Rob Caldwell named the malware “Irongate” and say it is probably a proof-of-concept that is likely not used in wild.

Industrial control system malware are complex beasts in large part because exploitation requires knowledge of often weird, archaic, and proprietary systems.

The steep learning curve required to grok such systems limits the risk presented by the many holes they contain.

See Darren’s post for references on the “replay” mechanism used by “Irongate.”

What caught my attention was: “…often weird, archaic, and proprietary systems.”

Does that sound like SWIFT and financial software in general?

If SWIFT and related software has the vulnerability characteristics of Flash, the financial community is in deep doo-doo.

Won’t know until someone spends some serious time with that weird, archaic, and proprietary system known as SWIFT.

You should get an account at VirusTotal, which is reported as where “Irongate” first appeared.

Anonymous Resistance

Filed under: Government,Politics — Patrick Durusau @ 9:02 am

Something to get your blood pumping on a Friday morning (my local time)!

This is not an endorsement of all actions claimed under the moniker Anonymous.

Take defacing the websites of fascists for example. You could deface websites of fascists from sunup to sundown and never have the end in sight.

It’s like teaching a pig to sing. I grant that it annoys the pig but it also wastes your time.

June 2, 2016

Experts Agree, Banks Are Where The Money Really Is

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:51 pm

Espionage cited as the US Federal Reserve reports 50-plus breaches from 2011 to 2015 by Grant Gross.

Grant’s report has even fewer specifics than “Fed records show dozens of cybersecurity breaches,” which at least had these graphics:

[Image: chart of Federal Reserve security incidents]

My takeaway from Grant’s report is that the Fed had cybersecurity incidents. Full stop. How does that compare to other financial institutions? Don’t know, no one says. What happened in response to those attacks? Don’t know, or at least no one says. (Security by obscurity.)

The high point of Grant’s article was this passage:

The banking system is often a “hard target,” but the potential rewards are high for attackers who have a sophisticated skill set, added Richard Ford, chief scientist at security vendor Forcepoint. “There’s a certain brand of attacker who loves going after banks,” he said. “That’s really where the money is.”

Something we can all agree on and quite possibly, that’s becoming common knowledge.

After the SWIFT attacks, you have to wonder whether the ‘hard target’ reputation of banks and the finance sector is bluff and bravado or something more serious.

So long as security by obscurity is SOP (standard operating procedure), that question will go unanswered, until it is too late.

#NoTROHere – Defending Free Speech and MuckRock

Filed under: Censorship,Free Speech — Patrick Durusau @ 3:27 pm

Court grants Temporary Restraining Order forcing removal of MuckRock documents by Michael Morisy.

From the post:

A King County, Washington court has ordered MuckRock to un-publish documents that the City of Seattle released to one of our users, Phil Mocek, via a public records request concerning private contractors who bid on the city’s smart utility meter program.

Although MuckRock has complied with the order, we disagree with the court’s decision and are confident that ultimately we will vindicate our right to publish the documents that Mr. Mocek lawfully obtained.

I don’t see my name or yours on the TRO or any of the other documents.

To enable you (and me) to compare the original documents to both the redacted copies and the highly amusing affidavits filed in support of the facially flawed TRO, find the following:

Landis-Gyr-Managed-Services-Report-2015-Final.pdf

Req-9-Security-Overview.pdf

Disclaimer: I did NOT obtain these documents from MuckRock or anyone known to me to be associated with MuckRock or named in any of the pleadings referenced above. (Just to forestall any groundless accusations of contempt, etc.)

Request: In the unlikely event that someone serves me with a TRO, please repost the documents on your blogs, discussion lists, twitter feeds, etc.

If Landis+Gyr wants to violate the U.S. Constitution and state constitutions in every county/parish (Louisiana) in the United States, let’s give them that opportunity. #NoTROHere

Before I forget, all the legal documents are here, including the affidavits.

PS: I know I am violating the advice I gave in Avoiding Imperial (Computer Fraud and Abuse Act (CFAA)) Entanglement – Identification but:

  • I never claimed to be perfectly consistent, and
  • The public is the ultimate arbiter of the conduct of its government. Concealment of information by government serves only to breed mistrust and a lack of confidence in outcomes.

Look for my analysis of the public documents vs. the affidavits next Monday, 6 June 2016.

June 1, 2016

Guide to Figuring Out the Age of an Undated World Map (xkcd)

Filed under: Mapping,Maps — Patrick Durusau @ 9:18 pm

This is precious! The original.

[Image: xkcd “Map Age Guide” comic]

Four Horsemen Of Internet Censorship + One

Filed under: Censorship,Free Speech,Government — Patrick Durusau @ 8:41 pm

Facebook, Twitter, YouTube, Microsoft back EU hate speech rules by Julia Fioretti and Foo Yun Chee.

From the post:

Facebook (FB.O), Twitter (TWTR.N), Google’s (GOOGL.O) YouTube and Microsoft (MSFT.O) on Tuesday agreed to an EU code of conduct to tackle online hate speech within 24 hours in Europe.

EU governments have been trying in recent months to get social platforms to crack down on rising online racism following the refugee crisis and terror attacks, with some even threatening action against the companies.

As part of the pledge agreed with the European Commission, the web giants will review the majority of valid requests for removal of illegal hate speech in less than 24 hours and remove or disable access to the content if necessary.

They will also strengthen their cooperation with civil society organizations who help flag hateful content when it goes online and promote “counter-narratives” to hate speech.

(original story dated 31 May 2016)

Reading that the four horsemen of internet censorship (Facebook, Twitter, YouTube and Microsoft) had joined with the EU to further censorship of the internet, I did try for due diligence.

I was able to find the original press release: European Commission and IT Companies announce Code of Conduct on illegal online hate speech

Along with the CODE OF CONDUCT ON COUNTERING ILLEGAL HATE SPEECH ONLINE, to take away some of the hand-waving about what is to be censored.

Not to mention the Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law.

None of the co-conspirators in this league of censors seems to recall there are alternatives to affirmative censorship by the four horsemen of internet censorship plus the EU.

Consider this image from 1999 (assuming 3 months = 1 internet year, that’s 68 years ago in internet time):

[Image: PICS filtering illustration]

That image appeared in Paul Resnick’s PICS, Censorship, & Intellectual Freedom FAQ. For a variety of reasons, PICS failed, but the principle of filtering by the user, remains sound.

The major obstacle to PICS was the lack of labeling of content by content providers, who quite naturally don’t want any obstruction to the content they seek to deliver.

It’s reasonable to assume that would be the same today. Except that we don’t need to rely on content providers to label content in order to filter it.

You may have heard about the rapid advances in neural networks and deep learning. I suspect the four horsemen of internet censorship have but haven’t considered their use for user-side filtering of content.
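Here is a minimal sketch of what user-side filtering could look like, assuming scikit-learn is installed. A plain linear classifier stands in for the deep learning filter I have in mind, and the training examples are hypothetical labels a user might supply.

# Minimal user-side filter sketch: the user labels a few posts they do and
# do not want to see, a classifier learns the preference, and new posts the
# model scores as unwanted are hidden locally. A linear model stands in for
# a deep learning filter; the example texts are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_posts = [
    ("I hope your whole family suffers", 1),        # 1 = user wants it hidden
    ("People like you should be wiped out", 1),
    ("Great writeup on the new ICLR papers", 0),    # 0 = user wants to see it
    ("Anyone have a link to the dataset?", 0),
]

texts, labels = zip(*labeled_posts)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

incoming = [
    "Here is the dataset mirror you asked for",
    "You and your family deserve whatever happens to you",
]
for post in incoming:
    hide = model.predict_proba([post])[0][1] > 0.5  # user-tunable threshold
    print(("HIDDEN " if hide else "SHOWN  ") + post)

The point is not this particular model; it is that the filter, its training data and its threshold all live with the user, not with the four horsemen or the EU.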

Perhaps I’m deeply offended by some variation on “hate speech” (a euphemism for “speech I don’t like”) plus insults about Erdogan.

I rather doubt, at least at present, the four horsemen of internet censorship are going to protect me from the combination.

Or conjure up your own combination of speech from which you desire protection.

The sensible alternative to censorship is to empower users, not the four horsemen, not the EU nor anyone else, to filter their own content.

Let’s keep free speech and empower users, not the four horsemen of internet censorship in their bid to curry favor with the EU.

PS: The EU is always attempting to grow a cottage IT industry; creating adaptive deep learning censors for users who fear content is an open market.

“Panama” Papers As Mis-Direction

Filed under: Government,Journalism,News,Panama Papers — Patrick Durusau @ 7:46 pm

Panama Papers May Inspire More Big Leaks, if Not Reform by Scott Shane.

Gabriel Zucman (Berkeley economist) spots the mis-direction inherent in the “Panama Papers” moniker.


In fact, some experts believe the “Panama” label is misleading, obscuring the central role of several states, including Delaware, Wyoming and Nevada, in registering companies with hidden ownership. Mossack Fonseca probably represents just 5 or 10 percent of the industry creating anonymous companies, said Mr. Zucman of Berkeley, so the disclosures have left the vast majority hidden.

And no matter where shell companies may be registered, he said, much of the wealth they own is invested in the United States, in real estate, stocks and bonds. “The U.S. could find out who the true owners are,” Mr. Zucman said.

But the United States may illustrate the difficulty of moving from splashy revelations to serious change. States with a stake in the lucrative corporate registration business are likely to resist serious changes, and Congress appears unlikely to act anytime soon on comprehensive reform bills.

For all of the hooting about the “Panama Papers” consisting of 11.5 million documents, weighing in at 2.6 terabytes, a moment’s consideration carries the sobering realization that this is from a single law firm.

If you consider all the documents held by corporate law firms in the states mentioned by Zucman, plus a few others: Delaware, District of Columbia, Florida, Nevada, New York, Wyoming, the amount of data may exceed multiple zettabytes.

Shane generously remarks: “…and Congress appears unlikely to act anytime soon on comprehensive reform bills.”

Unlikely? Unlikely?

I assume you agree that the laws that enable the hiding of wealth are not accidental.

Don’t be distracted by reform side-shows, presented by the people responsible for the problem.

Let’s go big-leak hunting.

Yes?
