Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

August 27, 2016

Linux debugging tools you’ll love: the zine

Filed under: Linux OS,Programming — Patrick Durusau @ 3:16 pm

Linux debugging tools you’ll love: the zine by Julia Evans.

From the website:

There are a ton of amazing debugging tools for Linux that I love. strace! perf! tcpdump! wireshark! opensnoop! I think a lot of them aren’t as well-known as they should be, so I’m making a friendly zine explaining them.

Donate, subscribe (PDF or paper)!

If you follow Julia’s blog (http://jvns.ca) or twitter (@b0rk), you know what a treat the zine will be!

If you don’t (correct that error now), consider the following sample:

[Image: sample explanation from the zine]

It’s possible there are better explanations than Julia’s, so if and when you see one, sing out!

Until then, get the zine!

“…without prior written permission…” On a Public Website? Calling BS!

Filed under: Government,Privacy — Patrick Durusau @ 3:00 pm

I mentioned in Your assignment, should you choose to accept it…. that BAE Systems has been selling surveillance technology to the United Arab Emirates, the nice people behind the attempted hack of Ahmed Mansoor, a prominent human rights activist.

Since then, Joseph Cox posted: British Companies Are Selling Advanced Spy Tech to Authoritarian Regimes.

From his post:

Since early 2015, over a dozen UK companies have been granted licenses to export powerful telecommunications interception technology to countries around the world, Motherboard has learned. Many of these exports include IMSI-catchers, devices which can monitor large numbers of mobile phones over broad areas.

Some of the UK companies were given permission to export their products to authoritarian states such as Saudi Arabia, the United Arab Emirates, Turkey, and Egypt; countries with poor human rights records that have been well-documented to abuse surveillance technology.

“At a time when the use of these surveillance tools is still highly controversial in the UK, it is completely unacceptable that companies are allowed to export the same equipment to countries with atrocious human rights records or which lack rule of law altogether. There is absolutely a clear risk that these products can be used for repression and abuses,” Edin Omanovic, research officer at Privacy International, told Motherboard in an email.

Joseph’s report explains the technology and gives examples of some of the sales to the worst offenders. He also includes a link to the dataset of export sales.

Joseph obtained a list of the exporters from the UK Department for International Trade, but that list is included only as an image, so I created an HTML list from it.

In an attempt to seem fierce, Cellxion Ltd has this unfriendly greeting at the bottom of their public homepage:

Your IP address, [**.**.**.**], has been recorded and all activity on this system is actively monitored. Under US Federal Law (18 U.S.C. 1030), United Kingdom Law (Computer Misuse Act 1990) and other international law it is a criminal offence to access or attempt to access this computer system without prior written authorisation from cellXion ltd. Any unauthorised attempt to access this system will be reported to the appropriate authorities and prosecuted to the full extent of the law. (emphasis added, I obscured my IP number)

What does Dogbert say? Oh, yeah,

Cellxion, kiss my wager!

As you already know, use TAILS, Tor and VPN as you pursue these leads.

Good hunting!

Shield laws and journalist’s privilege: … [And Beyond]

Filed under: Journalism,News,Reporting — Patrick Durusau @ 1:26 pm

Jonathan Peters‘s Shield laws and journalist’s privilege: The basics every reporter should know is a must read … before a subpoena arrives.

From his post:

COMPELLED DISCLOSURE is in the air.

A federal judge has ordered Glenn Beck to disclose the names of confidential sources he used in his reporting that a Saudi Arabian man was involved in the Boston Marathon bombing. The man sued Beck for defamation after he was cleared of any involvement.

Journalist and filmmaker Mark Boal, who wrote and produced The Hurt Locker and Zero Dark Thirty, has asked a judge to block a subpoena threatened by military prosecutors who want to obtain his confidential or unpublished interviews with US Army Sgt. Bowe Bergdahl, accused of being a deserter.

A state judge has ruled that a New York Times reporter must testify at a murder trial about her jailhouse interview with the man accused of killing Anjelica Castillo, the toddler once known as Baby Hope. The judge said the interview included the only statements the man made about the crime other than those in his police confession.

If my inbox is any indication, those cases have prompted a surge of interest in shield laws and the practice of compelled disclosure. What is a shield law, exactly? When can a government official require a reporter to disclose sources or information? Who counts as a journalist under a shield law? What types of sources or information are protected? Is there a big difference between a subpoena and a search warrant?

Those are the questions I’ve been asked most often in this area, as a First Amendment lawyer and scholar, and this post will try to answer them. (Please keep in mind that I’m a lawyer, not your lawyer, and these comments shouldn’t be construed as legal advice.)

As useful as Jonathan’s advice is, in conjunction with advice from your own lawyer, I would point out that by the time a subpoena arrives, you have already lost.

Because of circumstances (a jailhouse interview where you are the only possible source, say) or bad OpSec, you have been identified as possessing information state authorities want.

As Jonathan points out, there are governments with shield laws and notions of journalist privilege, but even those have fallen on hard times.

Outside of single source situations, consider anonymous posting of information needed for your story.

You can cite the public posting, as can others, which leaves the authorities without a target for their “name of the source” subpoena. It’s public information.

No one will be able to duplicate months of research and writing within a week or two, and public posting may keep you out of the cross-hairs of local government.

Posting unpublished information is anathema to some, who think hoarding is the only path to readers. They are the best judges of whether they are read because they hoard or because of their skills as storytellers and analysts.

As an additional precaution, I assume you have a documented story development trail that you can fight tooth and nail to keep, which when disclosed shows your reliance on the publicly posted data. Yes?

PS: WikiLeaks is one example of a public posting venue. Dark web sites for states (or other administrative divisions) or cities might be more appropriate. My suggestion is to choose one that doesn’t censor data dumps. Ever.

August 26, 2016

A Reproducible Workflow

Filed under: Science,Workflow — Patrick Durusau @ 7:07 pm

The video is 104 seconds and highly entertaining!

From the description:

Reproducible science not only reduces errors, but speeds up the process of re-running your analysis and auto-generating updated documents with the results. More info at: www.bit.ly/reprodu
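In that spirit, here is a minimal sketch of what a re-runnable analysis can look like in Python. The file names and the statistics are invented for illustration, not taken from the video:

```python
# reproducible_report.py - minimal sketch of a re-runnable analysis.
# Everything the report depends on (seed, input, output) is pinned down
# in code, so re-running the script regenerates the same document.
import csv
import random
import statistics
from pathlib import Path

SEED = 42                           # fixed seed: same "random" sample every run
DATA = Path("measurements.csv")     # hypothetical single-column input file
REPORT = Path("report.md")          # regenerated on every run

def load_values(path):
    # Expect one numeric measurement per row.
    with path.open() as f:
        return [float(row[0]) for row in csv.reader(f) if row]

def analyze(values):
    random.seed(SEED)
    sample = random.sample(values, k=min(100, len(values)))
    return {"n": len(values),
            "mean": statistics.mean(sample),
            "stdev": statistics.pstdev(sample)}

def write_report(stats):
    REPORT.write_text(
        "# Analysis report\n\n"
        f"- observations: {stats['n']}\n"
        f"- sample mean: {stats['mean']:.3f}\n"
        f"- sample stdev: {stats['stdev']:.3f}\n")

if __name__ == "__main__":
    write_report(analyze(load_values(DATA)))
```

Because the seed, the input path and the output path are all fixed in the script, anyone can re-run it and get the same report.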

How are you making your data analysis reproducible?

Enjoy!

Germany and France declare War on Encryption to Fight Terrorism

Filed under: Cryptography,Encryption,Government,Privacy — Patrick Durusau @ 4:11 pm

Germany and France declare War on Encryption to Fight Terrorism by Mohit Kumar.

From the post:

Yet another war on Encryption!

France and Germany are asking the European Union for new laws that would require mobile messaging services to decrypt secure communications on demand and make them available to law enforcement agencies.

French and German interior ministers this week said their governments should be able to access content on encrypted services in order to fight terrorism, the Wall Street Journal reported.
(emphasis in original)

On demand decryption? For what? Rot-13 encryption?
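For anyone who has not seen it, ROT-13 is the canonical “encryption” that decrypts itself on demand; a couple of lines of Python make the point:

```python
import codecs

ciphertext = codecs.encode("Attack at dawn", "rot_13")
print(ciphertext)                            # Nggnpx ng qnja
print(codecs.encode(ciphertext, "rot_13"))   # applying ROT-13 again recovers: Attack at dawn
```

Properly implemented end-to-end encryption offers no such shortcut, which is why “decrypt on demand” in practice means “weaken or backdoor the encryption.”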

The Franco-German text transmitted to the European Commission.

The proposal wants to extend current practices of Germany and France with regard to ISPs but doesn’t provide any details about those practices.

In case you have influence with the budget process at the EU, consider pointing out there is no, repeat no evidence that any restriction on encryption will result in better police work combating terrorism.

But then, what government has ever pushed for evidence-based policies?

New Virus Breaks The Rules Of Infection – Cyber Analogies?

Filed under: Biomedical,Cybersecurity,Security — Patrick Durusau @ 3:20 pm

New Virus Breaks The Rules Of Infection by Michaeleen Doucleff.

From the post:

Human viruses are like a fine chocolate truffle: It takes only one to get the full experience.

At least, that’s what scientists thought a few days ago. Now a new study published Thursday is making researchers rethink how some viruses could infect animals.

A team at the U.S. Army Medical Research Institute of Infectious Diseases has found a mosquito virus that’s broken up into pieces. And the mosquito needs to catch several of the pieces to get an infection.

“It’s the most bizarre thing,” says Edward Holmes, a virologist at the University of Sydney, who wasn’t involved in the study. It’s like the virus is dismembered, he says.

“If you compare it to the human body, it’s like a person would have their legs, trunk and arms all in different places,” Holmes says. “Then all the pieces come together in some way to work as one single virus. I don’t think anything else in nature moves this way.”

Also from the post:

These are insect cells infected with the Guaico Culex virus. The different colors denote cells infected with different pieces of the virus. Only the brown-colored cells are infectious, because they contain the complete virus. Michael Lindquist/Cell Press


The full scale image.

How very cool!

Any known analogies in computer viruses?

Your assignment, should you choose to accept it….

Filed under: Government,Privacy,TeX/LaTeX,Unicode — Patrick Durusau @ 2:59 pm

You may (or may not) remember the TV show Mission Impossible. It had a cast of regulars who formed a spy team to undertake “impossible” tasks that could not be traced back to the U.S. government.

Stories like: BAE Systems Sells Internet Surveillance Gear to United Arab Emirates make me wish for a non-nationalistic, modern equivalent of the Mission Impossible team.

You may recall the United Arab Emirates (UAE) were behind the attempted hack of Ahmed Mansoor, a prominent human rights activist.

So much for the UAE needing spyware for legitimate purposes.

From the article:


In a written statement, BAE Systems said, “It is against our policy to comment on contracts with specific countries or customers. BAE Systems works for a number of organizations around the world, within the regulatory frameworks of all relevant countries and within our own responsible trading principles.”

The Danish Business Authority told Andersen it found no issue approving the export license to the Ministry of the Interior of the United Arab Emirates after consulting with the Danish Ministry of Foreign Affairs, despite regulations put in place by the European Commission in October 2014 to control exports of spyware and internet surveillance equipment out of concern for human rights. The ministry told Andersen in an email it made a thorough assessment of all relevant concerns and saw no reason to deny the application.

It doesn’t sound like any sovereign government is going to restrain BAE Systems and/or the UAE.

Consequences for their mis-deeds will have to come from other quarters.

Like the TV show started every week:

Your assignment, should you choose to accept it….

Restricted U.S. Army Geospatial Intelligence Handbook

Restricted U.S. Army Geospatial Intelligence Handbook

From the webpage:

This training circular provides GEOINT guidance for commanders, staffs, trainers, engineers, and military intelligence personnel at all echelons. It forms the foundation for GEOINT doctrine development. It also serves as a reference for personnel who are developing doctrine; tactics, techniques, and procedures; materiel and force structure; and institutional and unit training for intelligence operations.

1-1. Geospatial intelligence is the exploitation and analysis of imagery and geospatial information to describe, assess, and visually depict physical features and geographically referenced activities on the Earth. Geospatial intelligence consists of imagery, imagery intelligence, and geospatial information (10 USC 467).

Note. TC 2-22.7 further implements that GEOINT consists of any one or any combination of the following components: imagery, IMINT, or GI&S.

1-2. Imagery is the likeness or presentation of any natural or manmade feature or related object or activity, and the positional data acquired at the same time the likeness or representation was acquired, including: products produced by space-based national intelligence reconnaissance systems; and likenesses and presentations produced by satellites, aircraft platforms, unmanned aircraft vehicles, or other similar means (except that such term does not include handheld or clandestine photography taken by or on behalf of human intelligence collection organizations) (10 USC 467).

1-3. Imagery intelligence is the technical, geographic, and intelligence information derived through the interpretation or analysis of imagery and collateral materials (10 USC 467).

1-4. Geospatial information and services refers to information that identifies the geographic location and characteristics of natural or constructed features and boundaries on the Earth, including: statistical data and information derived from, among other things, remote sensing, mapping, and surveying technologies; and mapping, charting, geodetic data, and related products (10 USC 467).


You may not have the large fixed-wing assets described in this handbook, but the “value-added layers” are within your reach with open data.


In localized environments, your value-added layers may be more current and useful than those produced on longer time scales.

Topic maps can support geospatial collations of information alongside other views of the same data.

A great opportunity to understand how a modern military force understands and uses geospatial intelligence.

Not to mention testing your ability to recreate that geospatial intelligence without dedicated tools.

Hair Ball Graphs

Filed under: Cybersecurity,Graphs,Visualization — Patrick Durusau @ 1:32 pm

An example of a non-useful “hair ball” graph visualization:

[Image: graph drawn with the “standard layout”]

That image is labeled as “standard layout” at a site that offers this cohesion adapted layout alternative:

[Image: the same graph in the cohesion adapted layout]

The full-size image is quite impressive.

If you were attempting to visualize vulnerabilities, which one would you pick?
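If you want to reproduce the effect for yourself, here is a minimal sketch using networkx and matplotlib (my choice of tools, not one made by the site hosting those images) that shows how quickly a force-directed “standard layout” turns into a hair ball:

```python
import networkx as nx
import matplotlib.pyplot as plt

# A moderately dense random graph is enough to produce the classic hair ball.
G = nx.gnp_random_graph(300, 0.05, seed=1)

# Force-directed ("spring") layout: the standard layout that tangles
# as soon as the edge count grows.
pos = nx.spring_layout(G, seed=1)
nx.draw(G, pos, node_size=10, width=0.2)
plt.savefig("hairball.png", dpi=150)
```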

The Hanselminutes Podcast

Filed under: Computer Science,Programming — Patrick Durusau @ 1:07 pm

The Hanselminutes Podcast: Fresh Air for Developers by Scott Hanselman.

I went looking for Felienne’s podcast on code smells and discovered along with it, The Hanselminutes Podcast: Fresh Air for Developers!

Felienne’s podcast is #542 so there is a lot of content to enjoy! (I checked the archive. Yes, there really are 542 episodes as of today.)

Exploring Code Smells in code written by Children

Filed under: Computer Science,Programming — Patrick Durusau @ 10:52 am

Exploring Code Smells in code written by Children (podcast) by Dr. Felienne

From the description:

Felienne is always learning. In exploring her PhD dissertation and her public speaking experience it’s clear that she has no intent on stopping! Most recently she’s been exploring a large corpus of Scratch programs looking for Code Smells. How do children learn how to code, and when they do, does their code “smell?” Is there something we can do when teaching to promote cleaner, more maintainable code?

Felienne discusses a paper due to appear in September on analysis of 250K Scratch programs for code smells.
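Scratch is block-based, but the same kind of check works on textual code. As a toy illustration (my own sketch, not Felienne’s method), here is a “long method” smell detector for Python source built on the standard ast module:

```python
import ast

LONG_METHOD_THRESHOLD = 25   # arbitrary cutoff for the illustration

def long_functions(source):
    """Yield (name, statement_count) for functions that 'smell' too long."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Count statements inside the function (excluding the def itself).
            count = sum(isinstance(n, ast.stmt) for n in ast.walk(node)) - 1
            if count > LONG_METHOD_THRESHOLD:
                yield node.name, count

if __name__ == "__main__":
    import sys
    for name, count in long_functions(open(sys.argv[1]).read()):
        print(f"{name}: {count} statements - consider splitting it up")
```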

Thoughts on teaching programmers to detect bug smells?

Apple/NSO Trident 0days – Emergency or Another Day of 0days?

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 10:31 am

For an emergency view of the Apple/NSO Trident 0days issues, you can read Apple tackles iPhone one-tap spyware flaws (BBC), Apple issues security update to prevent iPhone spyware (USATODAY), or IPhone Users Urged to Update Software After Security Flaws Are Found (NYT).

On the other hand, Robert Graham, @ErrataRob, says it’s just another day of 0days:


Press: it’s news to you, it’s not news to us

I’m seeing breathless news articles appear. I dread the next time that I talk to my mom that she’s going to ask about it (including “were you involved”). I suppose it is new to those outside the cybersec community, but for those of us insiders, it’s not particularly newsworthy. It’s just more government malware going after activists. It’s just one more set of 0days.

I point this out in case press wants to contact for some awesome sounding quote about how exciting/important this is. I’ll have the opposite quote.

Don’t panic: all patches fix 0days

We should pay attention to context: all patches (for iPhone, Windows, etc.) fix 0days that hackers can use to break into devices. Normally these 0days are discovered by the company itself or by outside researchers intending to fix (and not exploit) the problem. What’s different here is that where most 0days are just a theoretical danger, these 0days are an actual danger — currently being exploited by the NSO Group’s products. Thus, there’s maybe a bit more urgency in this patch compared to other patches.

Don’t panic: NSA/Chinese/Russians using secret 0days anyway

It’s almost certain the NSA, the Chinese, and the Russians have similar 0days. That means applying this patch makes you safe from the NSO Group (for a while, until they find new 0days), but it’s unlikely this patch makes you safe from the others.
… (Notes on the Apple/NSO Trident 0days)

Taking all communication systems as insecure, digital ones in particular, ErrataRob’s position has merit.

However, the consequences of a lapse of security for someone like Ahmed Mansoor are far from trivial.

Consider this passage from the executive summary in The Million Dollar Dissident: NSO Group’s iPhone Zero-Days used against a UAE Human Rights Defender:

Ahmed Mansoor is an internationally recognized human rights defender, based in the United Arab Emirates (UAE), and recipient of the Martin Ennals Award (sometimes referred to as a “Nobel Prize for human rights”). On August 10 and 11, 2016, Mansoor received SMS text messages on his iPhone promising “new secrets” about detainees tortured in UAE jails if he clicked on an included link. Instead of clicking, Mansoor sent the messages to Citizen Lab researchers. We recognized the links as belonging to an exploit infrastructure connected to NSO Group, an Israel-based “cyber war” company that sells Pegasus, a government-exclusive “lawful intercept” spyware product. NSO Group is reportedly owned by an American venture capital firm, Francisco Partners Management.

The ensuing investigation, a collaboration between researchers from Citizen Lab and from Lookout Security, determined that the links led to a chain of zero-day exploits (“zero-days”) that would have remotely jailbroken Mansoor’s stock iPhone 6 and installed sophisticated spyware. We are calling this exploit chain Trident. Once infected, Mansoor’s phone would have become a digital spy in his pocket, capable of employing his iPhone’s camera and microphone to snoop on activity in the vicinity of the device, recording his WhatsApp and Viber calls, logging messages sent in mobile chat apps, and tracking his movements.

We are not aware of any previous instance of an iPhone remote jailbreak used in the wild as part of a targeted attack campaign, making this a rare find.

ErrataRob’s point, that 0days are everywhere and all governments have them, doesn’t diminish the importance of the patch for iPhone users or provide a sense of direction for what comes next.

Here’s a 0day policy question:

Does disclosure of 0days to vendors disarm citizens while allowing governments to retain more esoteric 0days?

Governments are not going to disarm themselves of 0days, so I see no reason for “responsible disclosure” to continue to disarm the average citizen.

Technical analysis of the NSO Trident 0days: The Million Dollar Dissident: NSO Group’s iPhone Zero-Days used against a UAE Human Rights Defender, and Technical Analysis of Pegasus Spyware.

Both of those reports will give you insight into this attack and hopefully spur ideas for analysis and attack.

BTW, the Apple software update.

A Tiny Whiff Of Freedom – But Only A Tiny One

Filed under: Censorship,Government — Patrick Durusau @ 9:03 am

No guarantees that it will last but CNN reports: French court suspends burkini bans.

Just in case you haven’t defamed the French police recently, do use the image from that article or from my post: Defame the French Police Today!

I am sickened that anyone finds it acceptable for men to force women to disrobe.

It is even more disturbing that no one in the immediate area intervened on her behalf.

Police abuse will continue and escalate until average citizens step up and intervene.

August 25, 2016

Category Theory 1.1

Filed under: Category Theory,Mathematical Reasoning,Mathematics — Patrick Durusau @ 7:33 pm

Motivation and philosophy.

Bartosz Milewski is the author of the category theory series: Category Theory for Programmers.

Enjoy!

Terrorism “Lite?”

Filed under: Government,Security — Patrick Durusau @ 3:54 pm

A cricket and worm attack caused delay and confusion on the D train, Wednesday evening in New York.

Not as much disruption as a suicide bomber but the reaction reported by Danielle Furfaro and Melkorka Licea in Straphangers go berserk after woman tosses bugs in subway car was quite impressive.

From the post:


A group of teenagers pushed her, prompting her to freak out and toss the box of pests into the air, said witnesses. Straphangers then started screaming and crying, and all ran down to one end of the car.

“It was pandemonium,” said Chris Calabrese, 29, who was on the train with his girlfriend. “It was the craziest thing I’ve ever seen on a train.”

Someone then pulled the emergency brake and the train skidded to a stop on the Manhattan Bridge.

The air conditioning shut off and the screaming passengers were all stuck inside the sweltering car with the woman, who then treated them to antics for half an hour as the crickets jumped on passengers. The worms just wriggled on the floor.

The story doesn’t say if DHS has been notified of this new attack vector.

You laugh.

What if instead of crickets and worms the woman had a suitcase full of “killer bees” or angry hornets?

Laughing now?

Developer Liability For Egregiously Poor Software

Filed under: Cybersecurity,NIST,Security — Patrick Durusau @ 3:37 pm

Earlier today, Cryptome tweeted:

[Image: Cryptome’s tweet, a highlighted excerpt from the NIST newsletter]

I’m assuming that Cryptome added the highlighting to:

Software developers should be liable for egregiously poor software…

I don’t consider that suggestion to be, as Cryptome puts it:

NIST BS

Presently, there is no liability for software developers.

How’s that working out for you?

One indication of the “success” of the no liability model is Hackmageddon, which relies on reported hacks and has hack timelines back to 2011.

A summary of the “success” of the no liability model is the Internet Security Threat Report, April 2016, by Symantec.

Both of those reviews rely on “reported” hacks, which omit those yet to be discovered (think the NSA or Sony hacks).

By any reasonable measure of “success,” the no liability model is an absolute disaster.

We can debate how “egregious” software has to be for liability, but consider SQL injection attacks.

Here are five SQL injection “cheat sheets” and a listing of SQL injection scanners:

SQL Injection Cheat Sheet

MySQL SQL Injection Cheat Sheet

SQL Injection Prevention Cheat Sheet

Full SQL Injections Cheatsheet

SQL Injection Cheat Sheet & Tutorial: Vulnerabilities & How to Prevent SQL Injection Attacks

SQL Injection Scanner List

How difficult was that?

You have to be able to type “sql injection cheatsheet” and “sql injection scanner” into an internet search engine. (Rating: Easy)

Curious, is there a show of hands by developers who don’t think they can avoid SQL injection attacks?
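For anyone who has never seen the difference side by side, here is a minimal sketch using Python’s built-in sqlite3; any database driver with parameter placeholders works the same way:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "bob' OR '1'='1"    # classic injection payload

# Vulnerable: the input is pasted into the SQL text, so the payload
# rewrites the query and returns every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'").fetchall()

# Safe: the driver passes the input as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(vulnerable)   # [('alice',), ('bob',)] - the injection succeeded
print(safe)         # [] - no user is literally named "bob' OR '1'='1"
```

The entire “fix” is passing user input as a parameter instead of pasting it into the SQL string.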

FYI, if all developers avoided SQL injection attacks, it would kill the #1 cybersecurity hack on the Top 10 list maintained by the Open Web Application Security Project.

We aren’t talking about obscure 0-day bugs that no one has ever seen. SQL injection was first noticed in 1998.

Liability for an 18 year old vulnerability isn’t too much to ask.

Yes?

PS: The NIST quote is from the Information Technology Laboratory Newsletter, September—October 2016, page 1.

From Preaching to Meddling – Censors, I-94s, Usage of Social Media

Filed under: Censorship,Free Speech,Government — Patrick Durusau @ 10:31 am

Tony Romm has a highly amusing account of how internet censors, Google, Facebook and Twitter, despite their own censorship efforts, object to social media screening on I-94 (think international arrival and departure) forms.

Tony writes Tech slams Homeland Security on social media screening:

Internet giants including Google, Facebook and Twitter slammed the Obama administration on Monday for a proposal that would seek to weed out security threats by asking foreign visitors about their social media accounts.

The Department of Homeland Security for months has weighed whether to prompt foreign travelers arriving on visa waivers to disclose the social media websites they use — and their usernames for those accounts — as it seeks new ways to spot potential terrorist sympathizers. The government unveiled its draft plan this summer amid widespread criticism that authorities aren’t doing enough to monitor suspicious individuals for signs of radicalization, including the married couple who killed 14 people in December’s mass shooting in San Bernardino, Calif.

But leading tech companies said Monday that the proposal could “have a chilling effect on use of social media networks, online sharing and, ultimately, free speech online.”
….

Google, Facebook and Twitter casually censor hundreds of thousands of users every year so their sudden concern for free speech is puzzling.

Until you catch the line:

have a chilling effect on use of social media networks, online sharing

Translation: chilling effect on market share of social media, diminished advertising revenues and online sales.

The reaction of Google, Facebook and Twitter reminds me of the elderly woman in church who would shout “Amen!” when the preacher talked about the dangers of alcohol, “Amen!” when he spoke against smoking, “Amen!” when he spoke of the shame of gambling, but was curiously silent when the preacher said that dipping snuff was also sinful.

After the service, as the parishioners left the church, the preacher stopped the woman to ask about her change in demeanor. The woman said, “Well, but you went from preaching to meddling.”

😉

Speaking out against terrorism, even silencing users by the hundred thousand, is no threat to the revenue streams of Google, Facebook and Twitter. It is easy enough, and they benefit from whatever credibility it buys with governments.

With disclosure of social media use, which could have some adverse impact on revenue, the government has gone from preaching to meddling.

The revenue impacts imagined by Google, Facebook and Twitter are just that, imagined. The actual impact is unknown. But fear of an adverse impact is so great that all three have swung into frantic action.

That’s a good measure of their commitment to free speech versus their revenue streams.

Having said all that, the Agency Information Collection Activities: Arrival and Departure Record (Forms I-94 and I-94W) and Electronic System for Travel Authorization is about as lame and ineffectual an anti-terrorist proposal as I have seen since 9/11.

You can see the comments on the I-94 farce, which I started to collect but then didn’t.

I shouldn’t say this for free but here’s one insight into “radicalization:”

Use of social media to exchange “radical” messages is a symptom of “radicalization,” not its cause.

You can convince yourself of that fact.

Despite expensive efforts to stamp out child pornography (radical messages), sexual abuse of children (radicalization) continues. The consumption of child pornography doesn’t cause sexual abuse of children, rather it is consumed by sexual abusers of children. The market is driving the production of the pornography. No market, no pornography.

So why the focus on child pornography?

It’s visible (like social media), it’s easy to find (like tweets), it’s abhorrent (ditto for beheadings), and cheap (unlike uncovering real sexual abuse of children and/or actual terrorist activity).

The same factors explain the misguided and wholly ineffectual focus on terrorism and social media.

August 24, 2016

Secret Cameras Recording Baltimore’s… [Watching the Watchers?]

Filed under: Government,Privacy,Video — Patrick Durusau @ 4:29 pm

Secret Cameras Recording Baltimore’s Every Move From Above by Monte Reel.

Unknown to the citizens of Baltimore, they have been under privately funded, plane-based video surveillance since the beginning of 2016.

The pitch to the city:

“Imagine Google Earth with TiVo capability.”

You need to read Monte’s article in full and there are names you will recognize if you watch PBS:

Last year the public radio program Radiolab featured Persistent Surveillance in a segment about the tricky balance between security and privacy. Shortly after that, McNutt got an e-mail on behalf of Texas-based philanthropists Laura and John Arnold. John is a former Enron trader whose hedge fund, Centaurus Advisors, made billions before he retired in 2012. Since then, the Arnolds have funded a variety of hot-button causes, including advocating for public pension rollbacks and charter schools. The Arnolds told McNutt that if he could find a city that would allow the company to fly for several months, they would donate the money to keep the plane in the air. McNutt had met the lieutenant in charge of Baltimore’s ground-based camera system on the trade-show circuit, and they’d become friendly. “We settled in on Baltimore because it was ready, it was willing, and it was just post-Freddie Gray,” McNutt says. The Arnolds donated the money to the Baltimore Community Foundation, a nonprofit that administers donations to a wide range of local civic causes.

I find the mention of Freddie Gray ironic, considering how truthful and forthcoming the city and its police officers were in that case.

If footage exists for some future Freddie Gray-like case, you can rest assured the relevant camera failed, the daily data output failed, a Rose Mary Woods erasure accident happened, etc.

From Monte’s report, we aren’t at facial recognition, yet, assuming his sources were being truthful. But we all know that’s coming, if not already present.

Many will call for regulation of this latest intrusion into your privacy, but regulation depends upon truthful data upon which to judge compliance. The routine absence of truthful data about police activities, both digital and non-digital, makes regulation difficult to say the least.

In the absence of truthful police data, it is incumbent upon citizens to fill that gap, both for effective regulation of police surveillance and for the regulation of police conduct.

The need for an ad-hoc citizen-based surveillance system is clear.

What isn’t clear is how such a system would evolve.

Perhaps a server that stitches together cellphone video based on GPS coordinates and orientation? From multiple cellphones? Everyone can contribute X seconds of video from any given location?

It would not be seamless, but if we all target known police officers and public officials, who knows how complete a record could be developed?
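As a back-of-the-envelope sketch of the stitching idea (entirely my own invention, not a description of any existing service), contributed clips could be bucketed by rounded GPS coordinates and time windows before anyone attempts to stitch them:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Clip:
    contributor: str
    lat: float
    lon: float
    start: float      # seconds since the epoch
    duration: float

def bucket(clip, grid=0.001, window=60):
    """Coarse spatial/temporal key: roughly 100 m grid cells and 60 s windows."""
    return (round(clip.lat / grid), round(clip.lon / grid),
            int(clip.start // window))

def group_clips(clips):
    groups = defaultdict(list)
    for clip in clips:
        groups[bucket(clip)].append(clip)
    # Within a bucket, order clips by start time for playback/stitching.
    return {key: sorted(group, key=lambda c: c.start)
            for key, group in groups.items()}

clips = [Clip("a", 39.2904, -76.6122, 1471900000, 20),
         Clip("b", 39.2904, -76.6121, 1471900015, 30)]
print(group_clips(clips))
```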

Crowdsourced-Citizen-Surveillance anyone?

Tor 0.2.8.7 is released, with important fixes

Filed under: Privacy,Tor — Patrick Durusau @ 3:31 pm

Tor 0.2.8.7 is released, with important fixes

From the post:

Tor 0.2.8.7 fixes an important bug related to the ReachableAddresses option in 0.2.8.6, and replaces a retiring bridge authority. Everyone who sets the ReachableAddresses option, and all bridges, are strongly encouraged to upgrade.

You can download the source from the Tor website. Packages should be available over the next week or so.

For some reason, a link to the Tor website was omitted.

Upgrade and surf somewhat more securely. (Security never being absolute.)

Defame the French Police Today!

Filed under: Government,Privacy — Patrick Durusau @ 3:20 pm

Nice Officials Say They’ll Sue Internet Users Who Share Photos Of French Fashion Police Fining Women In Burkinis by Mike Masnick.

From the post:

This seems pretty ridiculous on all sorts of levels, but never think things are so ridiculous that some politicians can’t make them worse. Guillaume Champeau from the excellent French site Numerama alerts me to the news that the deputy mayor of Nice, Christian Estrosi is threatening to sue those who share these images over social media. Yup, France, a country that claims to pride itself on freedom is not just telling women that they can’t cover themselves up too much on the beach, but that it’s also illegal to report on the police following through on that. Here is the awkward Google translation of the French report:

Christian Estrosi … has published a press release by the city of Nice, to announce that he would file a complaint against those who would broadcast pictures of municipal police verbalize women guilty of exercising what they believed to be their freedom to dress from head to feet on the beaches.

” Photos showing municipal police of Nice in the exercise of their functions have been circulating this morning on social networks and raise defamation and threats against these agents ,” the statement said.

Wait. Showing accurate photos creates defamation against the police? How’s that work? Estrosi apparently says that legal actions have already been filed, though Numerama was unable to confirm any legal actions as yet. The article also notes that despite Estrosi implying otherwise, police do not have any sort of special protections that say they cannot be photographed while in public.

It’s not clear if you have to take the picture or merely share the picture.

Just in case sharing is enough, here is the picture from Mike’s post:

[Image: Nice municipal police enforcing the burkini ban on the beach]

There are a number of variations on this image. I suppose all of them count as far as “defamation” of the police.

If reposting isn’t sufficient to defame the French police enforcing the burkini ban, please consider this post an active request for images of French police enforcing that ban.

DATNAV: …Navigate and Integrate Digital Data in Human Rights Research [Ethics]

Filed under: Ethics,Human Rights,Humanities — Patrick Durusau @ 2:54 pm

DATNAV: New Guide to Navigate and Integrate Digital Data in Human Rights Research by Zara Rahman.

From the introduction in the Guide:

From online videos of rights violations, to satellite images of environmental degradation, to eyewitness accounts disseminated on social media, we have access to more relevant data today than ever before. When used responsibly, this data can help human rights professionals in the courtroom, when working with governments and journalists, and in documenting historical record.

Acquiring, disseminating and storing digital data is also becoming increasingly affordable. As costs continue to decrease and new platforms are developed, opportunities for harnessing these data sources for human rights work increase.

But integrating data collection and management into the day to day work of human rights research and documentation can be challenging, even overwhelming, for individuals and organisations. This guide is designed to help you navigate and integrate new data forms into your human rights work.

It is the result of a collaboration between Amnesty International, Benetech, and The Engine Room that began in late 2015. We conducted a series of interviews, community consultations, and surveys to understand whether digital data was being integrated into human rights work. In the vast majority of cases, we found that it wasn’t. Why?

Mainly, human rights researchers appeared to be overwhelmed by the possibilities. In the face of limited resources, not knowing how to get started or whether it would be worthwhile, most people we spoke to refrained from even attempting to strengthen their work with digital data.

To support everyone in the human rights field in navigating this complex environment, we convened a group of 16 researchers and technical experts in a castle outside Berlin, Germany in May 2016 to draft this guide over four days of intense reflection and writing.

There are additional reading resources at: https://engn.it/datnav.

The issue of ethics comes up quickly in human rights research and here the authors write:

Seven things to consider before using digital data for human rights

  1. Would digital data genuinely help answer your research questions? What are the pros and cons of the particular source or medium? What might you learn from past uses of similar technology?
  2. What sources are likely to be collecting or capturing the kinds of information you need? What is the context in which it is being produced and used? Will the people or organisations on which your work is focused be receptive to these types of data?
  3. How easily will new forms of data integrate into your existing workflow? Do you realistically have the time and money to collect, store, analyze and especially to verify this data? Can anyone on your team comfortably support the technology?
  4. Who owns or controls the data you will be using? Companies, government, or adversaries? How difficult is it to get? Is it a fair or legal collection method? What is the internal stance on this? Do you have true informed consent from individuals?
  5. How will digital divides and differences in local access to online platforms, computers or phones, affect representation of different populations? Would conclusions based on the data reinforce inequalities, stereotypes or blind spots?
  6. Are organisational protocols for confidentiality and security in digital communication and data handling sufficiently robust to deal with risks to you, your partners and sources? Are security tools and processes updated frequently enough?
  7. Do you have safeguards in place to prevent and deal with any secondary trauma from viewing digital content that you or your partners may experience at personal and organisational levels?

(Page 15)

Before I reveal my #0 consideration, consider the following story as setting the background.

At a seminar on the death penalty (itself certainly a human rights violation), a practitioner reported a case where the prosecuting attorney said a particular murder case was a question of “good versus evil.” In the course of preparing for that case, it was discovered that, while teaching a course for paralegals, the prosecuting attorney had a sexual affair with one of his students. Affidavits were obtained, etc., and a motion was filed in the pending criminal case entitled: Motion To Define Good and Evil.

There was a mix of opinions on whether blindsiding the prosecuting attorney with his personal failings, with the attendant fallout for his family, was a legitimate approach.

My question was: Did they consider asking the prosecuting attorney to take the death penalty off the table, in exchange for not filing the Motion To Define Good and Evil? A question of effective use of the information and not about the legitimacy of using it.

For human rights violations, my #0 Question would be:

0. Can the information be used to stop and/or redress human rights violations without harming known human rights victims?

The other seven questions, like “…all deliberate speed…,” are a game played by non-victims.

How to Navigate Wikileak Torrents (wlstorage.net)?

Filed under: Wikileaks — Patrick Durusau @ 2:10 pm

How to Navigate Wikileak Torrents (wlstorage.net)?. A query I posted earlier today at Open Data on Stack Exchange.

From the query:

I can download Wikileak files from either wlstorage.net or file.wikileaks.org but I’m having difficulty identifying the files of interest.

For example, at http://wikileaks.org, you see “DNC Email Archive,” and “AKP Email Archive,” but I have been unable to match those with any entry for the Wikileaks archives. Dates don’t help because the archives all list as 01-Jan-1984.

Am I missing a well known mapping file to the archives? Thanks!

A mapping from common names for collections to the archives would be a very useful thing.
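Until someone points to an official index, even a hand-maintained mapping would do. A minimal sketch of what I have in mind, with placeholder file names standing in for the real archive names (which are exactly the missing information):

```python
# Hypothetical mapping from the names used on wikileaks.org to archive
# file names on the mirror. The file names below are placeholders, NOT
# the real ones - identifying those is the open question above.
COLLECTIONS = {
    "DNC Email Archive": "dnc-emails.torrent",   # placeholder
    "AKP Email Archive": "akp-emails.torrent",   # placeholder
}

def archive_for(common_name):
    try:
        return COLLECTIONS[common_name]
    except KeyError:
        raise KeyError(f"No known archive for {common_name!r}; "
                       "add it to COLLECTIONS once identified") from None

print(archive_for("DNC Email Archive"))
```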

Pointers? Suggestions?

Best Onion Links – Deep Web [188 Links]

Filed under: Deep Web,Tor — Patrick Durusau @ 10:41 am

Best Onion Links – Deep Web

One hundred and eighty-eight (188) Deep Web links arranged in these categories:

  • Audio – Music / Streams
  • Blogs / Essays
  • Books
  • Commercial Services
  • Digital Goods / Commercial Links
  • Domain Services
  • Drugs
  • Email / Messaging
  • Financial Services
  • Forums / Boards / Chans
  • Hosting / Web / File / Image
  • Introduction Points
  • Other
  • Physical Goods
  • Political Advocacy
  • Social Networks
  • WikiLeaks

The list is updated every 24 hours.

Words of caution: Your safety is always your responsibility but even more so on the Deep Web.

For example, there are “hit man” service links. Most contract killing reports begin: “X approached an undercover police officer in a bar, seeking to hire a contract killer.”

Use caution appropriate to the goods/services you are requesting.

August 23, 2016

Debugging

Filed under: Linux OS,Profiling,Programming — Patrick Durusau @ 7:19 pm

Julia Evans tweeted:

[Image: tweet from Julia Evans]

It’s been two days without another suggestion.

Considering Brendan D. Gregg’s homepage, do you have another suggestion?

Too rich a resource not to write down.

Besides, for some subjects and their relationships, you need specialized tooling to see them.

Not to mention that if you can spot patterns in subjects, detecting an unknown 0-day may be easier.

Of course, you can leave USB sticks at popular eateries near Fort Meade, MD 20755-6248, but some people prefer to work for their 0-day exploits.

😉

Eloquent JavaScript

Filed under: Javascript,Programming — Patrick Durusau @ 7:03 pm

Eloquent JavaScript by Marijn Haverbeke.

From the webpage:

This is a book about JavaScript, programming, and the wonders of the digital. You can read it online here, or get your own paperback copy of the book.


Embarrassing that individual authors post free content for the betterment of others while wealthy governments play access games.

This book is also available in Български (Bulgarian), Português (Portuguese), and Русский (Russian).

Enjoy!

“Why Should I Trust You?”…

Filed under: Artificial Intelligence,Machine Learning — Patrick Durusau @ 6:35 pm

“Why Should I Trust You?”: Explaining the Predictions of Any Classifier by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin.

Abstract:

Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one.

In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.

LIME software at Github.

For a quick overview consider: Introduction to Local Interpretable Model-Agnostic Explanations (LIME) (blog post).

Or what originally sent me in this direction: Trusting Machine Learning Models with LIME at Data Skeptic, a podcast described as:

Machine learning models are often criticized for being black boxes. If a human cannot determine why the model arrives at the decision it made, there’s good cause for skepticism. Classic inspection approaches to model interpretability are only useful for simple models, which are likely to only cover simple problems.

The LIME project seeks to help us trust machine learning models. At a high level, it takes advantage of local fidelity. For a given example, a separate model trained on neighbors of the example are likely to reveal the relevant features in the local input space to reveal details about why the model arrives at it’s conclusion.

Data Science Renee finds deeply interesting material such as this on a regular basis; you should follow her account on Twitter.
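If you want to try LIME yourself, here is a minimal sketch of the text-classification case from the paper, assuming the lime and scikit-learn packages are installed; the newsgroup categories and the random forest are just convenient stand-ins for your own model:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

categories = ["alt.atheism", "soc.religion.christian"]
train = fetch_20newsgroups(subset="train", categories=categories)

# Black-box model: TF-IDF features feeding a random forest.
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100))
model.fit(train.data, train.target)

# LIME perturbs the document, watches the predictions move, and fits a
# small local model whose weights serve as the explanation.
explainer = LimeTextExplainer(class_names=train.target_names)
explanation = explainer.explain_instance(
    train.data[0], model.predict_proba, num_features=6)
print(explanation.as_list())   # [(word, weight), ...] for this one prediction
```

The as_list() output is the local explanation: the handful of words that pushed this one prediction toward its class.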

I do have one caveat on a quick read of these materials. The authors say in the paper, under 4. Submodular Pick For Explaining Models:


Even though explanations of multiple instances can be insightful, these instances need to be selected judiciously, since users may not have the time to examine a large number of explanations. We represent the time/patience that humans have by a budget B that denotes the number of explanations they are willing to look at in order to understand a model. Given a set of instances X, we define the pick step as the task of selecting B instances for the user to inspect.

The pick step is not dependent on the existence of explanations – one of the main purpose of tools like Modeltracker [1] and others [11] is to assist users in selecting instances themselves, and examining the raw data and predictions. However, since looking at raw data is not enough to understand predictions and get insights, the pick step should take into account the explanations that accompany each prediction. Moreover, this method should pick a diverse, representative set of explanations to show the user – i.e. non-redundant explanations that represent how the model behaves globally.

The “judicious” selection of instances, in models of any degree of sophistication, based upon large data sets seems problematic.

The focus on the “non-redundant coverage intuition” is interesting but based on the assumption that changes in factors don’t lead to “redundant explanations.” In the cases presented that’s true, but I lack confidence that will be true in every case.

Still, a very important area of research and an effort that is worth tracking.

[Free] Cyber Security Courses for Officials and Veterans [And Contractors, But Not Citizens]

Filed under: Cybersecurity,Security — Patrick Durusau @ 3:56 pm

Cryptome posted Cyber Security Courses for Officials and Veterans

When you visit the Federal Virtual Training Environment (FedVTE) homepage, the FAQ for Spring 2016 (PDF) advises:

Who can take FedVTE training?
FedVTE courses are offered at no cost to government personnel, including contractors, and to U.S. veterans.

Can the general public register on this site and take courses?
No, these courses are not available to the general public.

Cybersecurity is in the news on a daily basis, citizens being victimized right and left, yet the National Initiative for Cybersecurity Careers and Studies denies those same citizens the ability to develop the skills necessary to protect themselves.

While at the same time offering free training to government personnel and contractors, who operated the Office of Personnel Management like a sieve (21.5 million victims). Not to mention the NSA, which seems to have a recurrent case of USB-disease.

For reasons known only to the U.S. government, it lacks either the ability or the interest to protect its citizens from repeated cyber-attacks.

The least it can do is open up the Federal Virtual Training Environment (FedVTE) to all citizens.

Or as Randy Newman almost said:

…if you won’t take care of us
Won’t you please, please let us do [it ourselves]?”

From “God’s Song (That’s Why I Love Mankind)”

Enough freebies for contractors at the federal teat. How about a benefit or two for ordinary citizens?

Spatial Module in OrientDB 2.2

Filed under: Geographic Data,Geography,Geospatial Data,GIS,Mapping,Maps,OrientDB — Patrick Durusau @ 2:51 pm

Spatial Module in OrientDB 2.2

From the post:

In versions prior to 2.2, OrientDB had minimal support for storing and retrieving GeoSpatial data. The support was limited to a pair of coordinates (latitude, longitude) stored as double in an OrientDB class, with the possibility to create a spatial index against those 2 coordinates in order to speed up a geo spatial query. So the support was limited to Point.
In OrientDB v.2.2 we created a brand new Spatial Module with support for different types of Geometry objects stored as embedded objects in a user defined class

  • Point (OPoint)
  • Line (OLine)
  • Polygon (OPolygon)
  • MultiPoint (OMultiPoint)
  • MultiLine (OMultiline)
  • MultiPolygon (OMultiPlygon)
  • Geometry Collections

Along with those data types, the module extends OrientDB SQL with a subset of SQL-MM functions in order to support spatial data. The module only supports EPSG:4326 as Spatial Reference System. This blog post is an introduction to the OrientDB spatial Module, with some examples of its new capabilities. You can find the installation guide here.

Let’s start by loading some data into OrientDB. The dataset is about points of interest in Italy taken from here. Since the format is ShapeFile we used QGis to export the dataset in CSV format (geometry format in WKT) and import the CSV into OrientDB with the ETL in the class Points and the type geometry field is OPoint.
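As a rough sketch of what querying that Points class might look like from Python, assuming the pyorient driver and OrientDB’s SQL-MM style ST_WITHIN / ST_GeomFromText functions; the database name, credentials and polygon are all placeholders:

```python
import pyorient

client = pyorient.OrientDB("localhost", 2424)
client.connect("root", "root_password")      # placeholder credentials
client.db_open("poi", "admin", "admin")      # database holding the Points class

# Points of interest inside a rough bounding polygon (WKT, EPSG:4326).
query = ("SELECT name FROM Points "
         "WHERE ST_WITHIN(geometry, ST_GeomFromText("
         "'POLYGON((12.3 41.7, 12.7 41.7, 12.7 42.0, 12.3 42.0, 12.3 41.7))')) = true")

for record in client.query(query):
    print(record.oRecordData)    # each record carries its fields in oRecordData
```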

The enhanced spatial functions for OrientDB 2.2 reminded me of this passage in “Silences and Secrecy: The Hidden Agenda of Cartography in Early Modern Europe:”

Some of the most clear-cut cases of an increasing state concern with control and restriction of map knowledge are associated with military or strategic considerations. In Europe in the sixteenth and seventeenth centuries hardly a year passed without some war being fought. Maps were an object of military intelligence; statesmen and princes collected maps to plan, or, later, to commemorate battles; military textbooks advocated the use of maps. Strategic reasons for keeping map knowledge a secret included the need for confidentiality about the offensive and defensive operations of state armies, the wish to disguise the thrust of external colonization, and the need to stifle opposition within domestic populations when developing administrative and judicial systems as well as the more obvious need to conceal detailed knowledge about fortifications. (reprinted in: The New Nature of Maps: Essays in the History of Cartography, by J.B. Harley: Paul Laxton, John Hopkins, 2001. page 89)

I say “reminded me,” better to say increased my puzzling over the widespread access to geographic data that once upon a time had military value.

Is it the case that “ordinary maps,” maps of streets, restaurants, hotels, etc., aren’t normally imbued (merged?) with enough other information to make them “dangerous?”

If that’s true, the lack of commonly available “dangerous maps” is a disadvantage to emergency and security planners.

You can’t plan for the unknown.

Or to paraphrase Dilbert: “Ignorance is not a reliable planning guide.”

How would you cure the ignorance of “ordinary” maps?

PS: While hunting for the quote, I ran across The Power of Maps by Denis Wood, with John Fels, which has been updated as Rethinking the Power of Maps by Denis Wood, with John Fels and John Krygier. I am now re-reading the first edition and awaiting the arrival of the updated version.

Neither book is a guide to making “dangerous” maps but may awaken in you a sense of the power of maps and map making.

A Whirlwind Tour of Python (Excellent!)

Filed under: Programming,Python — Patrick Durusau @ 12:35 pm

A Whirlwind Tour of Python by Jake VanderPlas.

From the webpage:

To tap into the power of Python’s open data science stack—including NumPy, Pandas, Matplotlib, Scikit-learn, and other tools—you first need to understand the syntax, semantics, and patterns of the Python language. This report provides a brief yet comprehensive introduction to Python for engineers, researchers, and data scientists who are already familiar with another programming language.

Author Jake VanderPlas, an interdisciplinary research director at the University of Washington, explains Python’s essential syntax and semantics, built-in data types and structures, function definitions, control flow statements, and more, using Python 3 syntax.

You’ll explore:

  • Python syntax basics and running Python code
  • Basic semantics of Python variables, objects, and operators
  • Built-in simple types and data structures
  • Control flow statements for executing code blocks conditionally
  • Methods for creating and using reusable functions
  • Iterators, list comprehensions, and generators
  • String manipulation and regular expressions
  • Python’s standard library and third-party modules
  • Python’s core data science tools
  • Recommended resources to help you learn more

Jake VanderPlas is a long-time user and developer of the Python scientific stack. He currently works as an interdisciplinary research director at the University of Washington, conducts his own astronomy research, and spends time advising and consulting with local scientists from a wide range of fields.
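To give a flavor of two items on that list, comprehensions and generators, here are a few lines you can paste into any Python 3 interpreter:

```python
# List comprehension: build a list of squares in one expression.
squares = [n * n for n in range(10) if n % 2 == 0]
print(squares)                        # [0, 4, 16, 36, 64]

# Generator: same idea, but values are produced lazily on demand.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

print(list(countdown(5)))             # [5, 4, 3, 2, 1]
print(sum(x * x for x in range(10)))  # generator expression: 285
```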

A Whirlwind Tour of Python can be recommended without reservation.

In addition to the book, the Jupyter notebooks behind the book have been posted.

Enjoy!

Add Tor Nodes For 2 White Chocolate Mochas (Venti) Per Month

Filed under: Cybersecurity,Tor — Patrick Durusau @ 9:39 am

I don’t have enough local, reliable bandwidth to run a Tor relay node so I cast about for a remote solution.

David Huerta details in How You Can Help Make Tor Faster for $10 a Month, how you can add a Tor relay node for the cost of 2 White Chocolate Mochas (Venti) per month.

Chris Morran puts annual coffee spending by American workers at close to $1,100 per year.

How much privacy does your $1,100 coffee habit buy? None.

Would you spend $1,000/year to sponsor a Tor relay node? Serious question.

Do you have a serious answer?

