Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

March 11, 2016

Some Department of “Justice” – Beg, Borrow, or Steal

Filed under: Cybersecurity,FBI,Government — Patrick Durusau @ 8:39 pm

Government hints it may demand iOS source code, signing key by Gregg Keizer.

Gregg points out that the latest brief by the DOJ in the San Bernardino case, at footnote 9, page 22, the government says:

9 For the reasons discussed above, the FBI cannot itself modify the software on Farook’s iPhone without access to the source code and Apple’s private electronic signature. The government did not seek to compel Apple to turn those over because it believed such a request would be less palatable to Apple. If Apple would prefer that course, however, that may provide an alternative that requires less labor by Apple programmers. See In re Under Seal, 749 F.3d 276, 281-83 (4th Cir. 2014) (affirming contempt sanctions imposed for failure to comply with order requiring the company to assist law enforcement with effecting a pen register on encrypted e-mail content which included producing private SSL encryption key).

The DOJ hints that it will enlist the courts to assist it in the theft of Apple’s property and as Gregg further points out, may still want to force Apple to assist it:

Even if the Court ordered Apple to provide the government with Apple’s cryptographic keys and source code, Apple itself has implied that the government could not disable the requisite features because it “would have insufficient knowledge of Apple’s software and design protocols to be effective.” (Neuenschwander Decl. ¶ 23.) (at page 28 of the latest government brief)

Powerful briefs have been filed in support of Apple, but it's time to take the factual gloves off.

The expansive claims of the government are based solely on the entirely fictional notion that it is in hot pursuit of a terrorist threat.

If there were any terrorist threat to speak of, one would expect the TSA to have found at least one terrorist in the many years since 9/11, fifteen of them this coming September 11th. But it hasn't.

Yes, fourteen people died during the San Bernardino attack, which the government has yet to show was anything more than a workplace dispute that erupted at a holiday party.

No government, not even the United States government, is entitled to declare facts not in evidence and expect others, especially fact finders such as courts, to meekly accept them as true.

The DOJ keeps posturing about the government’s interest. Apple and others should put that interest to a factual test.

Government agents engaging in budget-justifying behavior is a far less compelling reason to violate Apple’s constitutional rights than an actual terrorist threat.

But the so-called terrorist threat doesn’t exist. One suspects that is why the DOJ has omitted any factual basis for its claims. It could easily whistle up all the FBI arrests for terrorism, but that would expose its recruitment of mentally ill people, who are then equipped by the FBI so that it can make those arrests.

I’m guessing that would diminish the government’s case in the eyes of the fact finder.

In some cases the government has a compelling interest, but fictional compelling interests don’t count.

The DOJ should be challenged at every step of the process, building a factual record that consists of everyone who had any part in the San Bernardino investigation, conversations with Apple, staff of the various DOJ offices, along with office notes, records, and phone logs.

If the interest of the government is so compelling, then it should not be reluctant to make a factual demonstration for the record.

Will the DOJ build a factual record? Will Apple demand it?

How-To Defeat Analysis of Seized Cellphones

Filed under: Government,Privacy — Patrick Durusau @ 5:51 pm

Jason Hernandez commented on my post FOIA Confirms Lawless Nature of FBI Sky Spies [Dark Art/Dark Clouds] and the use of Cellebrite to extract data from seized cellphones.

Here is a visualization of you and the data on your “regular” cellphone:

An Austin parcels tractor (6196934300)

Unsolicited advice:

Never appear in public with anything but a clean, unused burner phone.

Call forwarding and burner phones are too cheap for you to drag a parcel-tractor load of electronic baggage around with you.

That doesn’t help with cellphones you use in secure locations but it’s a start in the right direction.

Islamic State Document Leak – Does This Make It A Government?

Filed under: Government,Politics — Patrick Durusau @ 10:58 am

Thousands Of ISIS Documents Apparently Leaked To British Media by Bill Chapell.

From the post:

Names, addresses and phone numbers of some 22,000 ISIS recruits — and information about the network that recruited them — are reportedly part of a trove of data that Sky News says it received from a former member of the extremist group.

The identities of people from more than 50 countries, including Britain, European nations, the U.S. and Canada, are purportedly in the data, which Sky says it has shared with government authorities.

“Some of the telephone numbers on the list are still active,” Sky reports, “and it is believed that although many will be family members, a significant number are used by the jihadis themselves.”

It’s not yet clear how many of the thousands of people named in the documents might have traveled to join ISIS, and how many may still be in their home countries.

I don’t know if having a document leak makes the Islamic State more like a government or just the typical group with poor security practices.

The interview with the leaker offers an effective counter to Islamic extremists:


Sky News said it obtained the data via a memory stick provided by a former member of the Free Syrian Army using the name Abu Hamed, who joined ISIS but then quit because he believes the group stopped following Islamic law and is too heavily influenced by former members of Iraq’s Baath party.

Urging Muslims to be more observant isn’t going to put money in the pockets of anti-terror profiteers so don’t look for any sudden policy changes in the “war on terrorism.”

March 10, 2016

Government Source Code Policy

Filed under: Government,Open Source,Software — Patrick Durusau @ 9:28 pm

Government Source Code Policy

From the webpage:

The White House committed to adopting a Government-wide Open Source Software policy in its Second Open Government National Action Plan that “will support improved access to custom software code developed for the Federal Government,” emphasizing that using and contributing back to open source software can fuel innovation, lower costs, and benefit the public.[1] In support of that commitment, today the White House Office of Management and Budget (OMB) is releasing a draft policy to improve the way custom-developed Government code is acquired and distributed moving forward. This policy is consistent with the Federal Government’s long-standing policy of ensuring that “Federal investments in IT (information technology) are merit-based, improve the performance of our Government, and create value for the American people.”[2]

This policy requires that, among other things: (1) new custom code whose development is paid for by the Federal Government be made available for reuse across Federal agencies; and (2) a portion of that new custom code be released to the public as Open Source Software (OSS).

We welcome your input on this innovative draft policy. We are especially interested in your comments on considerations regarding the release of custom code as OSS. The draft policy proposes a pilot program requiring covered agencies to release at least 20 percent of their newly-developed custom code, in addition to the release of all custom code developed by Federal employees at covered agencies as part of their official duties, subject to certain exceptions as noted in the main body of the policy.[3]

In some absolute sense this is a step forward from the present practices of the government with regard to source code that it develops or pays to have developed.

On the other hand, what’s difficult about saying that all code (not 20%) developed by or at the direction of the federal government must be deposited under an Apache license within 90 days of its posting to any source code repository? Make that subject to national security exceptions, which require notice and whose invocation is reviewable in the DC federal district court.

Short, simple, clear time constraints and a defined venue for review.

Anytime someone dodges the easy, obvious solution, there is a reason for that dodging. Not a reason or desire to benefit you. Unless you are the person orchestrating the dodge.

Technology Adoption – Nearly A Vertical Line (To A Diminished IQ)

Filed under: Filters,Topic Maps — Patrick Durusau @ 9:11 pm

tech-adoption

From: There’s a major long-term trend in the economy that isn’t getting enough attention by Rick Rieder.

From the post:

As the chart above shows, people in the U.S. today are adopting new technologies, including tablets and smartphones, at the swiftest pace we’ve seen since the advent of the television. However, while television arguably detracted from U.S. productivity, today’s advances in technology are generally geared toward greater efficiency at lower costs. Indeed, when you take into account technology’s downward influence on price, U.S. consumption and productivity figures look much better than headline numbers would suggest.

Hmmm, did you catch that?

…while television arguably detracted from U.S. productivity, today’s advances in technology are generally geared toward greater efficiency at lower costs.

Really? Rick must have missed the memo on how multitasking (one aspect of smart phones, tablets, etc.) lowers your IQ by 15 points. About what you would expect from smoking a joint.

If technology encourages multitasking, making us dumber, then we are becoming less efficient. Yes?

Imagine if, instead of scrolling past tweets with images of cats, food, and irrelevant messages every time you look at your Twitter timeline, you got the two or three tweets relevant to your job function.

Each of those not-on-task tweets chips away at the amount of attention span you have to spend on the two or three truly important tweets.

Apps that consolidate, filter and diminish information flow are the path to greater productivity.
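The kind of consolidating filter described above can be sketched in a few lines. A minimal sketch, assuming a plain list of tweet texts and an invented keyword list (neither comes from any real Twitter API):

```python
def filter_relevant(tweets, keywords):
    """Keep only tweets that mention at least one job-relevant keyword."""
    wanted = {k.lower() for k in keywords}
    # A tweet survives if any of its words intersects the keyword set.
    return [t for t in tweets if wanted & set(t.lower().split())]

# Hypothetical timeline: only the on-task tweets survive the filter.
timeline = [
    "look at this cat",
    "new topic maps draft published",
    "lunch photos",
    "semantic diversity workshop call for papers",
]
print(filter_relevant(timeline, ["topic", "semantic", "maps"]))
```

A real app would match on stemmed terms or subject identifiers rather than raw words, but the productivity argument is the same: fewer, more relevant items per glance.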

Topic maps anyone?

Growthverse

Filed under: Marketing,Topic Maps — Patrick Durusau @ 8:41 pm

Growthverse

From the webpage:

Growthverse was built for marketers, by marketers, with input from more than 100 CMOs.

Explore 800 marketing technology companies (and growing).

I originally arrived at this site here.

Interesting visualization that may result in suspects (they’re not prospects until you have serious discussions) for topic map based tools.

The site says the last update was in September 2015, so take heed that the data is about six months stale.

That said, it’s easier than hunting down the 800+ companies on your own.

Good hunting!

1,000 Hours of Early Jazz

Filed under: History,Music — Patrick Durusau @ 8:28 pm

1,000 Hours of Early Jazz Recordings Now Online: Archive Features Louis Armstrong, Duke Ellington & Much More

From the post:

David W. Niven spent his life amassing a vast record collection, all dedicated to the sounds of Early Jazz. As a kid during the 1920s, he started buying jazz records with money earned from his paper route. By World War II, Niven, now a college student, had thousands of LPs. “All the big names of jazz, along with lesser legends, were included,” Niven later said, and “I found myself with a first class treasure of early jazz music.” Louis Armstrong, Bix Beiderbecke, Duke Ellington, and much, much more.

For the sake of his children, Niven started transferring his record collection to cassette tapes during the 1980s and prefacing them with audio commentaries that offer background information on each recording. Now, years after his death (1991), his collection of “Early Jazz Legends” has made its way to the web, thanks to archivist Kevin J. Powers. If you head over to Archive.org, you can stream/download digitized versions of 650 cassette tapes, featuring over 1,000 hours of early jazz music. There’s also scans of liner cards for each recording.

Every recitation of history is incomplete but some are more incomplete than others.

Imagine trying to create a recitation about the mid-to-late 1960s without examples of the music, posters, incense, body counts, napalm, etc.

Here’s one slice of the early 20th century for your listening enjoyment.

FOIA Confirms Lawless Nature of FBI Sky Spies [Dark Art/Dark Clouds]

Filed under: Cybersecurity,FBI,Government,Privacy — Patrick Durusau @ 5:32 pm

FOIA Confirms Lawless Nature of FBI Sky Spies

From the post:

The Electronic Frontier Foundation (EFF) released documents received in response to a Freedom of Information Act lawsuit that confirm the use of cell-site simulators in surveillance aircraft and the shocking disregard for oversight or regulatory policy what-so-ever. The federal government is flying spy-planes over US soil as the EFF put it, “without any policies or legal guidance.” North Star Post has been reporting on these activities since our founding following the independent disclosure of FBI operated domestic aerial surveillance on May 26th, 2015.

The EFF reports: the FBI’s “first successful airborne geolocation mission involving cellular technology” apparently occurred sometime in 2009, yet even as late as April 2014 lawyers from the FBI’s Office of General Counsel were discussing the need to develop a “coordinated policy” and “determine any legal concerns.”

NSP most prominently reported on the FBI evasion of established policy in regards to warrants for the use of cell-site simulator deployment in October of last year.

Aircraft have been identified as part of the FBI, DEA, DHS and other fleets, with many aircraft flying on a daily basis. The fleet is predominantly single-engine Cessna aircraft, with most flying 4-5 hours in looped patterns and circles with a radius of 2-2.5 miles. The 2+ mile figure is most likely the range of the DRT box although this has yet to be substantiated by government documents.

More details at the post will help you with tracking these planes and other details.

Security Syllogism:

All software/hardware have vulnerabilities.

DRT boxes are hardware and software.

Therefore, DRT boxes have vulnerabilities.

Yes? It’s been a while but I think that works.

While tracking airplanes and complaining about illegal law enforcement activity is useful, how much more useful would be vulnerabilities in DRT boxes?

DRT boxes promiscuously accept input, always a bad starting point for any hardware/software.

It could be as simple as building a directional “fake” cellphone that overloads the DRT box with noise.

Experts who have access to or who liberate DRT boxes can no doubt provide better advice than I can.

But on the whole, I’m not inclined to trust lawbreakers who, having been caught, now plead that they can be trusted to follow the rules, but without any oversight.

That just strikes me as wholly implausible if not idiotic. The best defense is a good offense.

North Star Post has started a series on aerial surveillance: Part 1.

If you don’t know North Star Post (I didn’t), you should check them out. Follow @NStarPost.

I have no connections with North Star Post but consider it a public service to recommend you follow useful accounts, even ones that aren’t mine.

PS: If you do run across hacking information for DRT boxes, please post and/or re-post prominently. It’s not so much a matter that I become aware of it but that the public at large is enabled to defend itself.

Defense against the dark arts…

Filed under: Cybersecurity,Journalism,News,Reporting — Patrick Durusau @ 4:01 pm

Defense against the dark arts: Basic cyber-security for journalists by Prof. dr. Dariusz Jemielniak.

One of the better non-technical guides I have seen for basic cyber-security.

One big point in its favor: it’s realistic. You can make yourself “less vulnerable” but against the resources of a nation state, you are only less vulnerable.

Even with the solutions mentioned in this guide, good cyber-security requires effort on your part. It is better to choose a lesser level of cyber-security that you practice every day than to be sloppy and create a false sense of security (IMHO).

From the post:

This guide aims to provide basic cyber-hygiene for journalists. When we talk about this, participating journalists often tell us: we have nothing to hide. Or: I don’t write about anything sensitive. But we’re not per se worried that journalists get into fights with the NSA or army divisions. It could happen of course, but consider this: will you ever write about something that can possibly make someone upset? Because what happens much more often is that adversaries will try to intimidate you or disgrace you by throwing your personal data on the street, photos of you holiday, photos of your children, the school address of your children, your financial information and so on. It’s hard to completely avoid it, but you can make it damn hard.

And then there’s the revelations of Snowden and the current struggle between Apple and the FBI, that increased the overall public interest in the subject of privacy. All the while, most of us don’t have the knowledge, experience or time to really get into the technical details of our privacy. This guide goes over some of the more accessible measures and solutions that at least make you less vulnerable. Because privacy often works the same as locking your bike: just make sure you’re better locked than the bike next to you.

Beyond the software mentioned in this guide, always remember Rule No. 1:

Never put in writing, or say before witnesses, anything you don’t want published on the front page of the New York Times or read to a federal grand jury.

For journalists, however, the value of information lies in delivering it to others, so Rule No. 1 may not always be useful.

In those cases, consider the first rule of regicide:

Leave no walking wounded.

“Speaking truth to power” is a great aphorism but prophets in the Hebrew Bible found that to be a dangerous occupation.

Informing power of truth is ok, but I like seeing information used to alter balances of power.

March 9, 2016

Exoplanet Visualization

Filed under: Astroinformatics,Visualization — Patrick Durusau @ 9:34 pm

Exoplanet Visualization

You can consider this remarkable eye-candy and/or as a challenge to your visualization skills.

Either way, you owe it to yourself to see this display of exoplanet data.

Quite remarkable.

Pay close attention because there are more planets than the ones near the center that catch your eye.

I first saw this in a tweet by MapD.

Program Derivation for Functional Languages – Tuesday, March 29, 2016, Utrecht

Filed under: Functional Programming,Programming,Standards — Patrick Durusau @ 9:21 pm

Program Derivation for Functional Languages by Felienne Hermans.

From the webpage:

Program Derivation for Functional Languages

Program derivation of course was all the rage in the era of Dijkstra, but is it still relevant today in the age of TDD and model checking? Felienne thinks so!

In this session she will show you how to systematically and step-by-step derive a program from a specification. Functional languages especially are very suited to derive programs for, as they are close to the mathematical notation used for proofs.

You will be surprised to know that you already know and apply many techniques for derivation, like Introduce Parameter as supported by Resharper. Did you know that is actually program derivation technique called generalization?

I don’t normally post about local meetups but as it says in the original post, Felienne is an extraordinary speaker and the topic is an important one.

Personally I am hopeful that at least slides and/or perhaps even video will emerge from this presentation.

If you can attend, please do!

In the meantime, if you need something to tide you over, consider:

A Calculus of Functions for Program Derivation by Richard Bird (1987).

Lectures on Constructive Functional Programming by R.S. Bird (1988).

Richard Bird’s Publication page.

A brief introduction to the derivation of programs by Juris Reinfelds (1986).

Wandora – New Release – 2016-03-08

Filed under: Topic Map Software,Topic Maps,Wandora — Patrick Durusau @ 11:21 am

Wandora – New Release – 2016-03-08

The homepage reports:

New Wandora version has been released today (2016-03-08). The release adds Wandora support to MariaDB and PostgreSQL database topic maps. Wandora has now more stylish look, especially in Traditional topic map. The release fixes many known bugs.

I’m no style or UI expert but I’m not sure where I should be looking for the “…more stylish look….” 😉

From the main window:

wandora-main

If you select Tools and then Tools Manager (or Cntrl-t [lower case, contrary to the drop down menu]), you will see a list of all tools (300+) with the message:

All known tools are listed here. The list contains also unfinished, buggy and depracated tools. Running such tool may cause exceptions and unpredictable behavior. We suggest you don’t run the tools listed here unless you really know what you are doing.

It is a very impressive set of tools!

There is no lack of places to explore in Wandora and to explore with Wandora.

Enjoy!

OECD Gender Data Portal

Filed under: Census Data,Government — Patrick Durusau @ 9:59 am

OECD Gender Data Portal

From the webpage:

The OECD Gender Data Portal includes selected indicators shedding light on gender inequalities in education, employment, entrepreneurship, health and development, showing how far we are from achieving gender equality and where actions is most needed. The data cover OECD member countries, as well as Brazil, China, India, Indonesia, and South Africa. (emphasis in the original)

Data indicators are grouped into employment, education, entrepreneurship, health and development.

A major data source for those who feel the need to demonstrate gender discrimination, but it seems lacking for those of us who take gender discrimination as given.

That is, those of us who acknowledge that gender discrimination exists, that it should be changed, and who are looking for solutions, not more documentation that it exists.

Sadly, pointing out that gender discrimination exists has proven largely ineffectual in reducing its presence in American society, for example.

Gender and gender discrimination, being extremely complex human issues, can be summarized but not addressed by macro-level data.

Changing hearts and minds is more a matter of personal interaction at a micro-level. To that extent the macro-level data may motivate us to more personal interaction but isn’t the answer to what is a person-to-person issue.

March 8, 2016

Searching http://that1archive.neocities.org/ with Google?

Filed under: Government,Searching — Patrick Durusau @ 10:11 pm

Not critical but worth mentioning.

I saw:

(March 08, 2016) | 28,882 Hillary Clinton Emails

today and wanted to make sure it wasn’t a duplicate upload.

What better to do than invoke Google Advanced Search with:

all these words: Hillary emails

site or domain: http://that1archive.neocities.org/

Here are my results:

hillary-email-search

My first assumption was that Google simply had not updated for this “new” content. Happens. Unexpected for an important site like http://that1archive.neocities.org/, but mistakes do happen.

So I skipped back to search for:

(March 05, 2016) | FBI file Langston Hughes | via F.B. Eyes

Search request:

all these words: Langston Hughes

site or domain: http://that1archive.neocities.org/

My results:

langston-search

The indexed files end with March 05, 2016; files from March 06, 2016 onward are not indexed, as of 08 March 2016.

Here’s what the listing at That 1 Archive looked like 08 March 2016:

a1-march-08

Google is of course free to choose the frequency of its indexing of any site.

Just a word to the wise: if you have scripted advanced searches, check the frequency of Google indexing updates for sites of interest.

It may not be the indexing schedule you would expect. (I would have expected That 1 Archive to be indexed nearly simultaneously with uploading. Apparently not.)
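If you script such searches, it helps to compose the query URLs programmatically so you can re-run and spot-check them when an index looks stale. A minimal sketch using the public q= and site: query syntax (this builds a search URL by hand; it is not an official Google API):

```python
from urllib.parse import urlencode

def site_search_url(all_these_words, site):
    """Compose a Google search URL restricted to a single site,
    mirroring the advanced-search form's "all these words" + "site or domain"."""
    return "https://www.google.com/search?" + urlencode(
        {"q": f"{all_these_words} site:{site}"}
    )

print(site_search_url("Hillary emails", "that1archive.neocities.org"))
```

Keeping the generated URLs in a log makes it trivial to re-check the same query a day later and see whether new uploads have been indexed yet.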

Ethical Wednesdays:…Eyewitness Footage

Filed under: Ethics,Journalism,News,Reporting — Patrick Durusau @ 8:30 pm

Ethical Wednesdays: Additional Resources on Ethics and Eyewitness Footage by Madeleine Bair.

From the post:

For the past three months, WITNESS has been sharing work from our new Ethical Guidelines for Using Eyewitness Videos for Human Rights and Advocacy. We wrap up our blog series by sharing a few of the resources that provided us with valuable expertise and perspectives in our work to develop guidelines (the full series can be found here).

Not all of the resources below are aimed at human rights documentation, and not all specifically address eyewitness footage. But the challenge ensuring that new forms of information gathering and data management are implemented safely and ethically affects many industries, and the following guidance from the fields of crisis response, journalism, and advocacy is relevant to our own work using eyewitness footage for human rights. (For a full list of the resources we referred to in our Ethical Guidelines, download the guide for a complete list in the appendix.)

ICRC’s Professional Standards for Protection Work Carried out by Humanitarian and human rights actors in armed conflict and other situations of violence – The 2nd Edition of the International Committee of the Red Cross’s manual includes new chapters developed to address the ethics of new technologies used to collect information and manage data. While not specific to video footage, its chapter on Managing Sensitive Protection Information provides a relevant discussion on the assessing informed of information found online. “It is often very difficult or even impossible to identify the original source of the information found on the Internet and to ascertain whether the information obtained has been collected fairly/lawfully with the informed consent of the persons to whom this data relates. In other words, personal data accessible on the Internet is not always there as a result of a conscious choice of the individuals concerned to share information in the public domain.”

Quite a remarkable series of posts and additional resources.

There are a number of nuances to the ethics of eyewitness footage that caught me unawares.

My prior experience was shaped around having a client and other than my client, all else was acceptable collateral damage.

That isn’t the approach taken in these posts so you will have to decide which approach, or some mixture of the two works for you.

I agree it is unethical to cause needless suffering, but if you have a smoking gun, you should be prepared to use it to maximum effectiveness.

If You Are Going To Lie… [iPhone Case]

Filed under: Cybersecurity,FBI,Government — Patrick Durusau @ 8:03 pm

A recent post about the FBI claims in the iPhone case reminded me of this Dilbert cartoon, where Dogbert advises Dilbert:

Maybe for your first crime you shouldn’t put your name and address on it and distribute it to ten thousand strangers.

There is a similar rule for lying to judges. The first rule is never to lie to a judge; it’s just bad practice. But especially never lie when it can be proven that you are lying.

Representatives of law enforcement broke that second rule and compounded the error by lying in written pleadings.

In the iPhone case, Daniel Kahn Gillmor makes an air-tight case for the FBI having lied to the court both in its initial application and in subsequent pleadings about the “auto-erase” feature of the iPhone in question. One of the FBI’s Major Claims in the iPhone Case is Fraudulent.

In a nutshell, the FBI has attempted to mislead the court into thinking the “auto-erase” feature limits the FBI to ten tries at guessing the password. A layperson might think so, but with a little thought you realize you can make a backup of the NAND flash memory where the encrypted data is stored. Try to your heart’s content, restoring the memory after every ten attempts.
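Gillmor’s point is easy to see in simulation. This toy sketch is purely illustrative (real NAND mirroring is a chip-level hardware procedure, and the class below is invented for the example), but it shows why snapshot-and-restore defeats a ten-attempt limit:

```python
import itertools

class ToyPhone:
    """Simulates a phone that wipes itself after ten wrong passcode attempts."""
    def __init__(self, passcode):
        self._passcode = passcode
        self.tries = 0
        self.wiped = False

    def snapshot(self):
        # The attempt counter lives in the same mutable storage we can mirror.
        return self.tries

    def restore(self, state):
        # "Re-flashing" the mirrored memory resets the counter.
        self.tries = state

    def guess(self, code):
        if self.wiped:
            return False
        self.tries += 1
        if self.tries > 10:
            self.wiped = True
            return False
        return code == self._passcode

def brute_force(phone):
    state = phone.snapshot()
    for code in ("%04d" % n for n in itertools.count()):
        if phone.tries >= 10:
            phone.restore(state)  # restore before the wipe can trigger
        if phone.guess(code):
            return code

phone = ToyPhone("0042")
print(brute_force(phone), phone.wiped)
```

Ten guesses per restore cycle, restore, repeat: the auto-erase feature never fires, so the only real protection is the size of the passcode space.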

In addition to lying to the court, the FBI gambled on the court possibly lacking the technical background to realize the falseness of the FBI’s claims.

That’s more rank than outright lying to the court. All parties to litigation have an affirmative obligation to see that the court is fully and accurately informed of the facts before it.

After denying the government’s claims in the iPhone case, the court should refer the attorneys for the government to the California Bar Association for disciplinary hearings.

Patent Sickness Spreads [Open Source Projects on Prior Art?]

Filed under: Intellectual Property (IP),Natural Language Processing,Patents,Searching — Patrick Durusau @ 7:31 pm

James Cook reports a new occurrence of patent sickness in Facebook has an idea for software that detects cool new slang before it goes mainstream.

The most helpful part of James’ post is the graphic outline of the “process” patented by Facebook:

facebook-patent

I sure do hope James has not patented that presentation because it makes the Facebook patent, err, clear.

Quick show of hands on originality?

While researching this post, I ran across Open Source as Prior Art at the Linux Foundation. Are there other public projects that research and post prior art with regard to particular patents?

An armory of weapons for opposing ill-advised patents.

The Facebook patent is: 9,280,534 Hauser, et al. March 8, 2016, Generating a social glossary:

Its abstract:

Particular embodiments determine that a textual term is not associated with a known meaning. The textual term may be related to one or more users of the social-networking system. A determination is made as to whether the textual term should be added to a glossary. If so, then the textual term is added to the glossary. Information related to one or more textual terms in the glossary is provided to enhance auto-correction, provide predictive text input suggestions, or augment social graph data. Particular embodiments discover new textual terms by mining information, wherein the information was received from one or more users of the social-networking system, was generated for one or more users of the social-networking system, is marked as being associated with one or more users of the social-networking system, or includes an identifier for each of one or more users of the social-networking system. (emphasis in original)

FBI Has More Privacy Than Average US Citizen

Filed under: FBI,Government,Privacy — Patrick Durusau @ 4:20 pm

FBI quietly changes its privacy rules for accessing NSA data on Americans by Spencer Ackerman.

From the post:

The FBI has quietly revised its privacy rules for searching data involving Americans’ international communications that was collected by the National Security Agency, US officials have confirmed to the Guardian.

The classified revisions were accepted by the secret US court that governs surveillance, during its annual recertification of the agencies’ broad surveillance powers. The new rules affect a set of powers colloquially known as Section 702, the portion of the law that authorizes the NSA’s sweeping “Prism” program to collect internet data. Section 702 falls under the Foreign Intelligence Surveillance Act (Fisa), and is a provision set to expire later this year.

Spender’s report is marred by what it can’t state:


But the PCLOB’s new compliance report, released on Saturday, found that the administration has submitted “revised FBI minimization procedures” that address at least some of the group’s concerns about “many” FBI agents who use NSA-gathered data.

“Changes have been implemented based on PCLOB recommendations, but we cannot comment further due to classification,” said Christopher Allen, a spokesman for the FBI.

Sharon Bradford Franklin, a spokesperson for the PCLOB, said the classification prevented her from describing the rule changes in detail, but she said they move to enhance privacy. She could not say when the rules actually changed – that, too, is classified.

“They do apply additional limits” to the FBI, Franklin said.

Timothy Barrett, a spokesman for the office of the director of national intelligence, also confirmed the change to FBI minimization rules.

We know how trustworthy government has proven itself to be, Pentagon Papers, Watergate, Iran-Contra, the Afghan War Diaries, the Snowden leaks, just to hit the highlights.

Here is what Snowden said was being collected:

PRISM_Collection_Details

By National Security Agency, original image | source, Public Domain, https://commons.wikimedia.org/w/index.php?curid=26526602

So where is the danger in limiting the FBI (to pick one example at random) from monitoring all chats from New York state to overseas locations? Such a limit only means it must have some cause to invade the privacy of a given individual.

It doesn’t say what cause, and it doesn’t say which individual.

What privacy for the FBI does do is conceal incompetence and waste of resources, and perpetuate a lack of effective outside oversight of the FBI.

Otherwise the FBI would not have to recruit the mentally ill to carry out terrorist preparations at the behest of the FBI. It would have real, non-FBI sponsored terrorists to arrest.

Now there’s a category for terrorists: non-FBI sponsored terrorists.

Is anyone doing data mining on FBI “terrorist” arrests?

V Sign Biometrics [Building Privacy Zones a/k/a Unobserved Spaces]

Filed under: Biometrics,Identity,Privacy — Patrick Durusau @ 2:58 pm

Machine-Learning Algorithm Aims to Identify Terrorists Using the V Signs They Make

From the post:

Every age has its iconic images. One of the more terrifying ones of the 21st century is the image of a man in desert or army fatigues making a “V for victory” sign with raised arm while standing over the decapitated body of a Western victim. In most of these images, the perpetrator’s face and head are covered with a scarf or hood to hide his identity.

That has forced military and law enforcement agencies to identify these individuals in other ways, such as with voice identification. This is not always easy or straightforward, so there is significant interest in finding new ways.

Today, Ahmad Hassanat at Mu’tah University in Jordan and a few pals say they have found just such a method. These guys say they have worked out how to distinguish people from the unique way they make V signs; finger size and the angle between the fingers is a useful biometric measure like a fingerprint.

The idea of using hand geometry as a biometric indicator is far from new. Many anatomists have recognized that hand shape varies widely between individuals and provides a way to identify them, if the details can be measured accurately. (emphasis in original)

The review notes this alone won’t establish personal identity; it would have to be combined with other data.

Overview of: Victory Sign Biometric for Terrorists Identification by Ahmad B. A. Hassanata, Mahmoud B. Alhasanat, Mohammad Ali Abbadi, Eman Btoush, Mouhammd Al-Awadi.

Abstract:

Covering the face and all body parts, sometimes the only evidence to identify a person is their hand geometry, and not the whole hand- only two fingers (the index and the middle fingers) while showing the victory sign, as seen in many terrorists videos. This paper investigates for the first time a new way to identify persons, particularly (terrorists) from their victory sign. We have created a new database in this regard using a mobile phone camera, imaging the victory signs of 50 different persons over two sessions. Simple measurements for the fingers, in addition to the Hu Moments for the areas of the fingers were used to extract the geometric features of the shown part of the hand shown after segmentation. The experimental results using the KNN classifier were encouraging for most of the recorded persons; with about 40% to 93% total identification accuracy, depending on the features, distance metric and K used.
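The pipeline the abstract describes, geometric finger measurements fed to a KNN classifier, can be sketched roughly as follows. The feature names and values here are invented for illustration and are not drawn from the paper’s database.

```python
import math

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training examples (Euclidean distance): a basic KNN classifier."""
    dists = sorted((math.dist(feats, query), label) for feats, label in train)
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical features: (index length, middle length, inter-finger angle).
train = [
    ((6.9, 7.8, 32.0), "person_a"),
    ((7.0, 7.9, 33.5), "person_a"),
    ((6.2, 7.1, 25.0), "person_b"),
    ((6.3, 7.0, 26.5), "person_b"),
]
print(knn_predict(train, (6.95, 7.85, 32.5)))  # person_a
```

The paper’s wide accuracy range (40% to 93%) presumably comes from varying exactly these choices: which features, which distance metric, and which K.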

All of which makes me suspect that giving a surveillance camera the “finger,” along with your height, gait, and any other physical mannerism, is fodder for surveillance systems.

Hotels and businesses need to construct privacy zones for customers to arrive and depart free from surveillance.

So You Want To Visualize Data? [Nathan Yau’s Toolbox]

Filed under: Graphics,Visualization — Patrick Durusau @ 11:53 am

What I Use to Visualize Data by Nathan Yau.

From the post:

“What tool should I learn? What’s the best?” I hesitate to answer, because I use what works best for me, which isn’t necessarily the best for someone else or the “best” overall.

If you’re familiar with a software set already, it might be better to work off of what you know, because if you can draw shapes based on numbers, you can visualize data. After all, this guy uses Excel to paint scenery.

It’s much more important to just get started already. Work with as much data as you can.

Nevertheless, this is the set of tools I use in 2016, which converged to a handful of things over the years. It looks different from 2009, and will probably look different in 2020. I break it down by place in my workflow.

As Nathan says up front, these may not be the best tools for you but it is a great starting place. Add and subtract from this set as you develop your own workflow and habits.

Enjoy!

PS: Nathan Yau tweeted a few hours later: “Forgot to include this:”

yau-tablet

Avoiding Fake News [Fake Source, Not Content]

Filed under: Journalism,News,Reporting — Patrick Durusau @ 11:26 am

Lessons from The New York Times Super Tuesday hoax: Five ways to spot fake news by Josh Sterns.

From the post:

On the eve of Super Tuesday, a New York Times article made the rounds on social media reporting that Massachusetts Senator Elizabeth Warren had endorsed Sen. Bernie Sanders for president. The only problem: It was fake.

The New York Times released a statement and others debunked the fake on Tuesday, as people were headed to the polls, but by that point the fake article “had been viewed more than 50,000 times, with 15,000 shares on Facebook,” the Times reported.

This is just another in a long line of fake news reports which have swept through social media in recent years. Last year Twitter’s share price spiked after a fake Bloomberg article claimed that Google was considering buying the social media platform. In 2012, Wikileaks created a fake New York Times op-ed from then-Times-editor Bill Keller defending Wikileaks in what appeared to be a change of position from his earlier statements about the group. The fake was so convincing that even New York Times journalists were sharing it on Twitter.

Josh mixes “fake” news as in being factually false with “fake” news that originates from a fake source.

The “New York Times” (fake source) article about Elizabeth Warren endorsing Bernie Sanders (factually false) is an example of combining the two types of fakes.

Josh’s five steps will help you avoid fake sources, but they are not much help in avoiding factually false stories.

If At First You Don’t Deceive, Try, Try…

Filed under: Humor,Journalism,News,Reporting — Patrick Durusau @ 10:47 am

Kremlin Falls for Its Own Fake Satellite Imagery by Ian Robertson.

From the post:

The Turkish downing of the Russian SU-24 jet last November saw a predictable series of statements from each side claiming complete innocence and blaming the other entirely. Social media was a key battleground for both sides—the Turkish and Russian governments, along with their supporters—as each tried to establish a dominant narrative explanation for what had just happened.

In the midst of the online competition, a little-observed, funhouse mirror of an online hoax was brilliantly perpetrated, one with consequences likely exceeding the expectation of the hoaxster. The Russian Ministry of Defense was duped by a fake image that Russian state media itself had circulated more than a year earlier, as a way to deny Moscow’s involvement in the downing of Malaysia Airlines Flight 17.

A great read about a failed attempt at deception that, when reused by others, ends up deceiving its original source.

Another illustration why it is important to verify images. 😉

March 7, 2016

Preserving Unobserved Spaces (Privacy) For American Teenagers

Filed under: Free Speech,Government — Patrick Durusau @ 8:35 pm

The privacy of American teenagers is under full scale assault by the FBI.

From: Preventing Violent Extremism In Schools (2016):


Unaccountable or unobserved space provides a window of opportunity for students engaging in activities contrary to their family norms or desires, thus creating additional vulnerabilities and opportunities for exposure to violent extremists or violent rhetoric. Students in unobserved space may contact or be contacted by a known violent extremist, who assesses the youth for possible future recruitment. Students’ consumption of violent propaganda while in unobserved space may ignite the radicalization and mobilization process. Limiting idle times and unobserved space provides less time to engage in negative activities. Replacing idle times with positive social interactions may reduce activities in unobserved space.

Idle time or unobserved space wasn’t my only priority as a teenager but I certainly enjoyed both when the opportunities arose.

Without writing them down, think about the top ten things you remember from high school that occurred in unobserved spaces.

Physical intimacy, your first drink, cigarette, marijuana, pranks, general teenage mischief, all happened in unobserved spaces.

How many of those experiences would you want to give up now?

Same here.

The FBI has decided that teenagers should not have “unobserved spaces,” as reported in: The FBI Has a New Plan to Spy on High School Students Across the Country by Sarah Lazare.

From the post:

Under new guidelines, the FBI is instructing high schools across the country to report students who criticize government policies and “western corruption” as potential future terrorists, warning that “anarchist extremists” are in the same category as ISIS and young people who are poor, immigrants or travel to “suspicious” countries are more likely to commit horrific violence.

Based on the widely unpopular British “anti-terror” mass surveillance program, the FBI’s “Preventing Violent Extremism in Schools” guidelines, released in January, are almost certainly designed to single out and target Muslim-American communities. However, in its caution to avoid the appearance of discrimination, the agency identifies risk factors that are so broad and vague that virtually any young person could be deemed dangerous and worthy of surveillance, especially if she is socio-economically marginalized or politically outspoken.

This overwhelming threat is then used to justify a massive surveillance apparatus, wherein educators and pupils function as extensions of the FBI by watching and informing on each other.

The FBI’s justification for such surveillance is based on McCarthy-era theories of radicalization, in which authorities monitor thoughts and behaviors that they claim to lead to acts of violent subversion, even if those people being watched have not committed any wrongdoing. This model has been widely discredited as a violence prevention method, including by the U.S. government, but it is now being imported to schools nationwide as official federal policy.

Sarah’s post will leave you convinced the FBI has gone completely insane.

American teenagers, not to mention the rest of us, deserve unobserved spaces in which to grow, explore and be different from the thought police lemmings at the FBI.

If you see an FBI agent at any school, post their picture and school name, city, state, with #IspyFBI.

If nothing else, it will be a way to see how FBI agents like living in a fish bowl.

March 6, 2016

Dormant Cyber Pathogen – Warning Labels

Filed under: Cybersecurity,Government,Humor — Patrick Durusau @ 10:01 am

A well-known search engine this morning failed to find warning labels for “dormant cyber pathogen.”

To help you with labeling your phone, I am re-posting these images from Twitter:

Posted by Rob Graham.

cyber-pathogen

Posted by davi.

cyber-pathogen-davi

An insane member of you-have-no-rights community claims that the San Bernardino cellphone may contain “dormant cyber pathogen(s)” and so Apple must prove it does not in order to defeat the order to hack the phone.

Demanding proof of a negative is absurd enough, but adding an object that exists only in the mind of a madman captures the essence of the state’s position in this case.

Your rights, all of them, are subordinate to the whims, caprices and possibly diseased imaginations of local law enforcement officials.

Looking forward to these images or variants as stickers for cellphones in conference swag.

March 5, 2016

Network Measures of the United States Code

Filed under: Citation Analysis,Citation Indexing,Law,Visualization — Patrick Durusau @ 5:45 pm

Network Measures of the United States Code by Alexander Lyte, Dr. David Slater, Shaun Michel.

Abstract:

The U.S. Code represents the codification of the laws of the United States. While it is a well-organized and curated corpus of documents, the legal text remains nearly impenetrable for non-lawyers. In this paper, we treat the U.S. Code as a citation network and explore its complexity using traditional network metrics. We find interesting topical patterns emerge from the citation structure and begin to interpret network metrics in the context of the legal corpus. This approach has potential for determining policy dependency and robustness, as well as modeling of future policies.
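The degree metrics the abstract alludes to are easy to sketch on a toy citation network. The section labels and citation edges below are invented for illustration; the real analysis would run over the full corpus of U.S. Code cross-references.

```python
# Toy citation network: each edge points from the citing section to the
# cited section. These section labels and edges are hypothetical.
edges = [
    ("18:1001", "18:2"), ("18:1001", "28:2461"),
    ("26:61", "26:63"), ("26:63", "26:151"),
    ("26:151", "26:61"), ("26:7701", "26:61"),
    ("42:1983", "28:1343"),
]

def degree_counts(edges):
    """In-degree counts how often a section is cited; out-degree counts
    how many sections it cites. Heavily cited sections are structural hubs."""
    in_deg, out_deg = {}, {}
    for src, dst in edges:
        out_deg[src] = out_deg.get(src, 0) + 1
        in_deg[dst] = in_deg.get(dst, 0) + 1
    return in_deg, out_deg

in_deg, out_deg = degree_counts(edges)
print(max(in_deg, key=in_deg.get))  # most-cited section in the toy graph
```

From here, standard centrality measures (PageRank, betweenness, and so on) would tell you which sections other policies most depend on.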

The citation network is quite impressive:

uscode-network

I have inquired about an interactive version of the network but no response as of yet.

4 [5] ways to misinterpret your measurement

Filed under: Communication,Measurement — Patrick Durusau @ 4:07 pm

4 ways to misinterpret your measurement by Katie Paine.

I mention this primarily because of the great graphic and a fifth way to misinterpret data that Katie doesn’t mention.

messages-mom

The misinterpretations Katie mentions are important and see her post for those.

The graphic, on the other hand, illustrates misinterpretation by not understanding the data.

The use of English, integers, etc., provides no assurances you will “understand” the data.

Not “understanding” the data, you are almost certain to misinterpret it.

I first saw this in a tweet by Kirk Borne.

Data Mining Patterns in Crossword Puzzles [Patterns in Redaction?]

Filed under: Crossword Puzzle,Data Mining,Pattern Matching,Pattern Recognition,Security — Patrick Durusau @ 12:06 pm

A Plagiarism Scandal Is Unfolding In The Crossword World by Oliver Roeder.

From the post:

A group of eagle-eyed puzzlers, using digital tools, has uncovered a pattern of copying in the professional crossword-puzzle world that has led to accusations of plagiarism and false identity.

Since 1999, Timothy Parker, editor of one of the nation’s most widely syndicated crosswords, has edited more than 60 individual puzzles that copy elements from New York Times puzzles, often with pseudonyms for bylines, a new database has helped reveal. The puzzles in question repeated themes, answers, grids and clues from Times puzzles published years earlier. Hundreds more of the puzzles edited by Parker are nearly verbatim copies of previous puzzles that Parker also edited. Most of those have been republished under fake author names.

Nearly all this replication was found in two crosswords series edited by Parker: the USA Today Crossword and the syndicated Universal Crossword. (The copyright to both puzzles is held by Universal Uclick, which grew out of the former Universal Press Syndicate and calls itself “the leading distributor of daily puzzle and word games.”) USA Today is one of the country’s highest-circulation newspapers, and the Universal Crossword is syndicated to hundreds of newspapers and websites.

On Friday, a publicity coordinator for Universal Uclick, Julie Halper, said the company declined to comment on the allegations. FiveThirtyEight reached out to USA Today for comment several times but received no response.

Oliver does a great job setting up the background on crossword puzzles and exploring the data that underlies this story. A must read if you are interested in crossword puzzles or know someone who is.

I was more taken with “how” the patterns were mined, which Oliver also covers:


Tausig discovered this with the help of the newly assembled database of crossword puzzles created by Saul Pwanson (who changed his legal name from Paul Swanson), a software engineer. Pwanson wrote the code that identified the similar puzzles and published a list of them on his website, along with code for the project on GitHub. The puzzle database is the result of Pwanson’s own Web-scraping of about 30,000 puzzles and the addition of a separate digital collection of puzzles that has been maintained by solver Barry Haldiman since 1999. Pwanson’s database now holds nearly 52,000 crossword puzzles, and Pwanson’s website lists all the puzzle pairs that have a similarity score of at least 25 percent.
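The post doesn’t spell out Pwanson’s actual scoring function, but one plausible similarity measure, the fraction of shared (clue, answer) pairs between two puzzles, can be sketched like this. The clues and answers below are made up.

```python
def similarity(puzzle_a, puzzle_b):
    """Fraction of (clue, answer) pairs shared between two puzzles.
    A guess at a plausible measure, not Pwanson's actual metric."""
    a, b = set(puzzle_a), set(puzzle_b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

p1 = [("Feline pet", "CAT"), ("Opposite of day", "NIGHT"),
      ("Tree fluid", "SAP"), ("Large body of water", "OCEAN")]
p2 = [("Feline pet", "CAT"), ("Opposite of day", "NIGHT"),
      ("Fast plane", "JET"), ("Tree fluid", "SAP")]

print(f"{similarity(p1, p2):.0%}")  # 75% -- well above a 25% flagging threshold
```

Run pairwise over a 52,000-puzzle corpus, even a crude score like this would surface the repeated themes, grids, and clues the investigators found.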

The .xd futureproof crossword format page reads in part:

.xd is a corpus-oriented format, modeled after the simplicity and intuitiveness of the markdown format. It supports 99.99% of published crosswords, and is intended to be convenient for bulk analysis of crosswords by both humans and machines, from the present and into the future.

My first thought was of mining patterns in government redacted reports.

My second thought was that an ASCII format should fit the bill: one that specifies line length in characters (to allow for varying font sizes), plus line breaks, with each line composed of characters, whitespace, and markouts as single characters. Yes?

Surely such a format exists now, yes? Pointers please!
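As a sketch of what mining that format might look like: if each redacted character is encoded as a single markout character (here “#”), the lengths of markout runs become a simple feature for comparing redactions across documents. Everything below, the markout character and the sample line included, is hypothetical.

```python
def markout_runs(line, markout="#"):
    """Return the lengths of consecutive markout runs in one line of a
    redacted document, a simple feature for cross-document pattern mining."""
    runs, current = [], 0
    for ch in line:
        if ch == markout:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

# Hypothetical redacted line: names replaced by fixed-width '#' runs.
line = "Agent ##### met source ######### in Vienna."
print(markout_runs(line))  # [5, 9]
```

Recurring run lengths in the same positions across reports would hint at the same redacted name or phrase, much as repeated grids exposed the recycled crosswords.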

There are those who merit protection by redacted documents, but children are more often victimized by spy agencies than employed by them.

Overlay Journal – Discrete Analysis

Filed under: Discrete Structures,Mathematics,Publishing,Topic Maps,Visualization — Patrick Durusau @ 10:45 am

The arXiv overlay journal Discrete Analysis has launched by Christian Lawson-Perfect.

From the post:

Discrete Analysis, a new open-access journal for articles which are “analytical in flavour but that also have an impact on the study of discrete structures”, launched this week. What’s interesting about it is that it’s an arXiv overlay journal founded by, among others, Timothy Gowers.

What that means is that you don’t get articles from Discrete Analysis – it just arranges peer review of papers held on the arXiv, cutting out almost all of the expensive parts of traditional journal publishing. I wasn’t really prepared for how shallow that makes the journal’s website – there’s a front page, and when you click on an article you’re shown a brief editorial comment with a link to the corresponding arXiv page, and that’s it.

But that’s all it needs to do – the opinion of Gowers and co. is that the only real value that journals add to the papers they publish is the seal of approval gained by peer review, so that’s the only thing they’re doing. Maths papers tend not to benefit from the typesetting services traditional publishers provide (or, more often than you’d like, are actively hampered by it).

One way the journal is adding value beyond a “yes, this is worth adding to the list of papers we approve of” is by providing an “editorial introduction” to accompany each article. These are brief notes, written by members of the editorial board, which introduce the topics discussed in the paper and provide some context, to help you decide if you want to read the paper. That’s a good idea, and it makes browsing through the articles – and this is something unheard of on the internet – quite pleasurable.

It’s not difficult to imagine “editorial introductions” with underlying mini-topic maps that could be explored on their own, or that, as you reach the “edge” of a particular topic map, “unfold” to reveal more associations/topics.

Not unlike a traditional street map for New York which you can unfold to find general areas but can then fold it up to focus more tightly on a particular area.

I hesitate to say “zoom” because in the applications I have seen (an important qualification), “zoom” uniformly reduces your field of view.

A more nuanced notion of “zoom,” for a topic map and perhaps for other maps as well, would be to hold portions of the current view stationary, say a starting point on an interstate highway and to “zoom” only a portion of the current view to show a detailed street map. That would enable the user to see a particular location while maintaining its larger context.

Pointers to applications that “zoom” but also maintain different levels of “zoom” in the same view? Given the fascination with “hairy” presentations of graphs that would have to be real winner.

Making the most of The National Archives Library (webinar 29 March 2016)

Filed under: Archives,Library — Patrick Durusau @ 7:49 am

Making the most of The National Archives Library

From the webpage:

This webinar will help you to make the most of The National Archives’ Library, with published works dating from the 16th century onwards. Among other topics, it will cover what the Library contains, why it is useful to use published sources before accessing archive records and how to access the Library catalogue.

Webinars are online only events.

The Library at The National Archives is holding a series of events to mark National Libraries Day. The National Archives’ Library is a rich resource that is accessible to all researchers.

We run an exciting range of events and exhibitions on a wide variety of topics. For more details, visit nationalarchives.gov.uk/whatson.

Entrance to The National Archives is free and there is no need to book, see nationalarchives.gov.uk/visit for more information.

WHEN

Tuesday, 29 March 2016 from 16:00 to 17:00 (BST)

Assuming that 16:00 to 17:00 GMT was intended, that would be starting at 11 AM EST.

I have pinged the national archive on using BST, British Summer Time, in March. 😉

March 4, 2016

Announcing the Structr Knowledge Graph

Filed under: Government,Graphs,structr — Patrick Durusau @ 7:01 pm

Announcing the Structr Knowledge Graph by Alex Morgner.

From the post:

The Structr Knowledge Graph is the new one-stop resource base where all information about and around Structr are connected.

Besides the official manual, you will find Getting Started articles, FAQ, guides and tutorials, as well as links to external resources like StackOverflow questions, GitHub issues, or discussion threads.

The Knowledge Graph isn’t just another static content platform where information is stored once and then outdates over time. It is designed and built as a living structure, being updated not only by the Structr team but also semi-automatically by user activities in the support system.

By using a mixture of manual categorization and natural language processing, information is being extracted from the content origins to update and extend the graph. The SKG will replace our old documentation site docs.structr.org.

And of course, the SKG provides an interactive graph browser, full-text search and an article tree.

I was confused when I first saw this because I think of Structr as a knowledge graph so why the big splash? Then I saw a tweet saying 386 articles on support.structr.com/knowledge-graph and it suddenly made sense.

This is a knowledge graph about the knowledge graph software known as Structr.

OK, I’m straight now. I think. 😉

With a topic map it would be trivial to distinguish between “Structr Knowledge Graph” in the sense of using the Structr software versus a knowledge graph about Structr, which is also known as the Structr Knowledge Graph.

Momentary cognitive dissonance (well, not so momentary, but I wasn’t devoting a lot of effort to it), though not a serious problem.

More serious when the cognitive dissonance is confusing a child’s name in transliterated Arabic with that of a sanctioned target being sought by a U.S. drone.

« Newer PostsOlder Posts »

Powered by WordPress