Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

September 21, 2017

99% of UK Law Firms Ripe For Email Fraud

Filed under: Cybersecurity,Email,Phishing for Leaks,Security — Patrick Durusau @ 8:50 pm

The actual title of the report is: Addressing Cyber Risks Identified in the SRA Risk Outlook Report 2016/17. Yawn. Not exactly an attention grabber.

The report does have this nifty graphic:

The Panama Papers originated from a law firm.

Have you ever wondered what the top 100 law firms in the UK must be hiding?

Or any of the other 10,325 law firms operating in the UK? (Total number of law firms: 10,425.)

If hackers feasting on financial fraud develop a sense of public duty, radical transparency will not be far behind.

September 20, 2017

Testing Next-Gen Onions!

Filed under: Cybersecurity,Government,Security,Tor — Patrick Durusau @ 9:53 pm

Please help us test next-gen onions! by George Kadianakis.

From the webpage:

this is an email for technical people who want to help us test next-gen onion services.

The current status of next-gen onion services (aka prop224) is that they have been fully merged into upstream tor and have also been released as part of tor-0.3.2.1-alpha: https://blog.torproject.org/tor-0321-alpha-released-support-next-gen-onion-services-and-kist-scheduler

Unfortunately, there is still no tor browser with tor-0.3.2.1-alpha so these instructions are for technical users who have no trouble building tor on their own.

We are still in a alpha testing phase and when we get more confident about the code we plan to release a blog post (probs during October).

Until then we hope that people can help us test them. To do so, we have setup a *testing hub* in a prop224 IRC server that you can and should join (ideally using a VPS so that you stick around).

Too late for me to test the instructions today but will tomorrow!

The security you help preserve may be your own!

Enjoy!

W3C’s EME/DRM: Standardizing Abuse and Evasion

Filed under: Cybersecurity,DRM,Security — Patrick Durusau @ 9:44 pm

Among the bizarre arguments in favor of Encrypted Media Extensions (EME), this one stuck with me:

Standardizing an API for Abuse of Users.

The argument runs something like this:

DRM is already present on the Web using plugins for browsers, each with a different API. EME, standardizing a public API, enables smaller browsers to compete in offering DRM. Not to mention avoiding security nightmares like Flash.

As a standards geek, I often argue the advantages of standardization. Claiming that standardizing an API for the abuse of users is beneficial strikes me as odd.

Conceptually, DRM systems don’t have to infringe on users’ rights to fair use, first sale, or modification for accessibility, but I don’t have an example of a commercial content provider’s system that doesn’t. Do you?

Moreover, confessed corporate misbehavior, such as false bank accounts (Wells Fargo) and forged mortgage documents (Ally (formerly known as GMAC), Bank of America, Citi, JPMorgan Chase, Wells Fargo), leaves all but the most naive certain that user rights will be abused via the EME API.

A use of the EME API that does not violate user rights would be a man bites dog story. Sing out in the unlikely event you encounter such a case.

(I got to this point and my post ran away from me.)

Is there an upside to ending the crazy quilt of DRM plugins and putting encrypted media delivery directly into browsers for users?

With EME as the single interface for delivery of encrypted web content, what else must be true?

Ah, there is a single point of failure for encrypted web content, meaning if the security of EME is broken, then it is broken for all encrypted web content.

There’s a pleasant thought. Over-reaching to gut users’ rights, the DRM crowd created a standardized, single point of failure. A single breach spells disaster on a large scale.

Looking forward to the back-biting and blame allocation sure to follow the failure of this plan to rain greed over the world. (Wasn’t some company named ContentGuard (sp?) involved in an earlier one?)

I’m not happy with a standardized API for abusing users, but a single API is like the Windows market share: breach one and you have breached them all. I take some consolation from that fact.

September 19, 2017

An Honest Soul At The W3C? EME/DRM Secret Ballot

Filed under: Cybersecurity,DRM,Electronic Frontier Foundation,Leaks,Security,W3C — Patrick Durusau @ 9:49 am

Billions of current and future web users have been assaulted and robbed in what Jeff Jaffe (W3C CEO) calls a “respectful debate.” Reflections on the EME Debate.

Odd sense of “respectful debate.”

A robber demands all of your money and clothes, promises to rent you clothes to get home, but won’t tell you how to make your own clothes. You are now and forever a captive of the robber. (That’s a lay person’s summary, but an accurate account of what the EME crowd wanted and got.)

Representatives for potential victims, the EFF and others, pointed out the problems with EME at length, over years of debate. The response of the robbers: “We want what we want.”

Consistently, for years, the simple-minded response of EME advocates continued to be: “We want what we want.”

If you think I’m being unkind to the EME advocates, consider the language of the Disposition of Comments for Encrypted Media Extensions and Director’s decision itself:


Given that there was strong support to initially charter this work (without any mention of a covenant) and continued support to successfully provide a specification that meets the technical requirements that were presented, the Director did not feel it appropriate that the request for a covenant from a minority of Members should block the work the Working Group did to develop the specification that they were chartered to develop. Accordingly the Director overruled these objections.

EME lacks a covenant protecting researchers and others from anti-circumvention laws, a covenant that would have enabled continued research on security and other aspects of EME implementations.

That covenant was not in the original charter, hence the Director’s “(without any mention of a covenant),” aka “We want what we want.”

There wasn’t ever any “respectful debate,” but rather EME supporters repeating over and over again, “We want what we want.”

A position which prevailed, which brings me to the subject of this post. A vote, a secret vote, was conducted by the W3C seeking support for the Director’s cowardly and self-interested support for EME, the result of which has been reported as:


Though some have disagreed with W3C’s decision to take EME to recommendation, the W3C determined that the hundreds of millions of users who want to watch videos on the Web, some of which have copyright protection requirements from their creators, should be able to do so safely and in a Web-friendly way. In a vote by Members of the W3C ending mid September, 108 supported the Director’s decision to advance EME to W3C Recommendation that was appealed mid-July through the appeal process, while 57 opposed it and 20 abstained. Read about reflections on the EME debate, in a Blog post by W3C CEO Jeff Jaffe.

(W3C Publishes Encrypted Media Extensions (EME) as a W3C Recommendation)

One hundred and eight members took up the cry of “We want what we want” to rob billions of current and future web users. The only open question is: who?

To answer that question, the identity of these robbers, I posted this note to Jeff Jaffe:

Jeff,

I read:

***

In a vote by Members of the W3C ending mid September, 108 supported the Director’s decision to advance EME to W3C Recommendation that was appealed mid-July through the appeal process, while 57 opposed it and 20 abstained.

***

at: https://www.w3.org/2017/09/pressrelease-eme-recommendation.html.en

But I can’t seem to find a link to the vote details, that is a list of members and their vote/abstention.

Can you point me to that link?

Thanks!

Hope you are having a great week!

Patrick

It didn’t take long for Jeff to respond:

On 9/19/2017 9:38 AM, Patrick Durusau wrote:
> Jeff,
>
> I read:
>
> ***
>
> In a vote by Members of the W3C ending mid September, 108 supported the
> Director’s decision to advance EME to W3C Recommendation that was
> appealed mid-July through the appeal process, while 57 opposed it and 20
> abstained.
>
> ***
>
> at: https://www.w3.org/2017/09/pressrelease-eme-recommendation.html.en
>
> But I can’t seem to find a link to the vote details, that is a list of
> members and their vote/abstention.
>
> Can you point me to that link?

It is long-standing process not to release individual vote details publicly.

I wonder about a “long-standing process” for the only vote on an appeal in W3C history, but there you have it: the list of robbers isn’t public. No need to search the W3C website for it.

If there is an honest person at the W3C, a person who stands with the billions of victims of this blatant robbery, then we will see a leak of the EME vote.

If there is no leak of the EME vote, that says something about the staff of the W3C all by itself.

Yes?

PS: Kudos to the EFF and others for delaying EME this long but the outcome was never seriously in question. Especially in organizations where continued membership and funding are more important than the rights of individuals.

EME can only be defeated by action in the trenches as it were, depriving its advocates of any perceived benefit and imposing ever higher costs upon them.

You do have your marker pens and sticky tape ready. Yes?

September 18, 2017

Darkening the Dark Web

Filed under: Cybersecurity,Privacy,Security,Tor — Patrick Durusau @ 8:47 pm

I encountered Andy Greenberg‘s post, It’s About to Get Even Easier to Hide on the Dark Web (20 January 2017), and was happy to read:


The next generation of hidden services will use a clever method to protect the secrecy of those addresses. Instead of declaring their .onion address to hidden service directories, they’ll instead derive a unique cryptographic key from that address, and give that key to Tor’s hidden service directories. Any Tor user looking for a certain hidden service can perform that same derivation to check the key and route themselves to the correct darknet site. But the hidden service directory can’t derive the .onion address from the key, preventing snoops from discovering any secret darknet address. “The Tor network isn’t going to give you any way to learn about an onion address you don’t already know,” says Mathewson.

The result, Mathewson says, will be darknet sites with new, stealthier applications. A small group of collaborators could, for instance, host files on a computer known only to them. No one else could ever even find that machine, much less access it. You could host a hidden service on your own computer, creating a way to untraceably connect to it from anywhere in the world, while keeping its existence secret from snoops. Mathewson himself hosts a password-protected family wiki and calendar on a Tor hidden service, and now says he’ll be able to do away with the site’s password protection without fear of anyone learning his family’s weekend plans. (Tor does already offer a method to make hidden services inaccessible to all but certain Tor browsers, but it involves finicky changes to the browser’s configuration files. The new system, Mathewson says, makes that level of secrecy far more accessible to the average user.)

The next generation of hidden services will also switch from using 1024-bit RSA encryption keys to shorter but tougher-to-crack ED-25519 elliptic curve keys. And the hidden service directory changes mean that hidden service urls will change, too, from 16 characters to 50. But Mathewson argues that change doesn’t affect the dark web addresses’ usability since they’re already too long to memorize.
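
Mathewson’s one-way derivation is easy to sketch. To be clear, this is my illustration using a plain hash, not the actual prop224 construction (which blinds an Ed25519 key per time period), but it shows why a directory can match lookups without ever learning the onion address:

```python
import hashlib

def directory_index(onion_address: str, period: str) -> str:
    """Derive the value handed to a hidden service directory.

    A cryptographic hash is one-way: the directory can match client
    lookups against this value but cannot recover the onion address.
    """
    return hashlib.sha3_256(f"{onion_address}|{period}".encode()).hexdigest()

# The service publishes the derived value; a client who already knows
# the address performs the same derivation to locate the descriptor.
service_side = directory_index("examplev3addressxxxxxxxx.onion", "2017-09-20")
client_side = directory_index("examplev3addressxxxxxxxx.onion", "2017-09-20")
assert service_side == client_side    # a knowing client finds the service
assert "example" not in service_side  # the address is not visible in the index
```

The per-period input matters: the real protocol rotates the derived value over time, so a directory cannot even track one service across periods.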

Your wait to test these new features for darkening the dark web is over!

Tor 0.3.2.1-alpha is released, with support for next-gen onion services and KIST scheduler

From the post:

And as if all those other releases today were not enough, this is also the time for a new alpha release series!

Tor 0.3.2.1-alpha is the first release in the 0.3.2.x series. It includes support for our next-generation (“v3”) onion service protocol, and adds a new circuit scheduler for more responsive forwarding decisions from relays. There are also numerous other small features and bugfixes here.

You can download the source from the usual place on the website. Binary packages should be available soon, with an alpha Tor Browser likely by the end of the month.

Remember: This is an alpha release, and it’s likely to have more bugs than usual. We hope that people will try it out to find and report bugs, though.

The Vietnam War series by Ken Burns and Lynn Novick makes it clear the United States government lies and undertakes criminal acts for reasons hidden from the public. To trust any assurance by that government of your privacy, freedom of speech, etc., is an act of madness.

Will you volunteer to help with the Tor project or place your confidence in government?

It really is that simple.

Upsides of W3C’s Embrace of DRM

Filed under: Cybersecurity,DRM,Intellectual Property (IP),Security — Patrick Durusau @ 4:23 pm

World Wide Web Consortium abandons consensus, standardizes DRM with 58.4% support, EFF resigns by Cory Doctorow.

From the post:

In July, the Director of the World Wide Web Consortium overruled dozens of members’ objections to publishing a DRM standard without a compromise to protect accessibility, security research, archiving, and competition.

EFF appealed the decision, the first-ever appeal in W3C history, which concluded last week with a deeply divided membership. 58.4% of the group voted to go on with publication, and the W3C did so today, an unprecedented move in a body that has always operated on consensus and compromise. In their public statements about the standard, the W3C executive repeatedly said that they didn’t think the DRM advocates would be willing to compromise, and in the absence of such willingness, the exec have given them everything they demanded.

This is a bad day for the W3C: it’s the day it publishes a standard designed to control, rather than empower, web users. That standard that was explicitly published without any protections — even the most minimal compromise was rejected without discussion, an intransigence that the W3C leadership tacitly approved. It’s the day that the W3C changed its process to reward stonewalling over compromise, provided those doing the stonewalling are the biggest corporations in the consortium.

EFF no longer believes that the W3C process is suited to defending the open web. We have resigned from the Consortium, effective today. Below is our resignation letter:

In his haste to outline all the negatives about the W3C DRM decision, all of which are true, Cory forgets to mention there are several upsides to this decision.

1. W3C Chooses IP Owners Over Web Consumers

The DRM decision reveals the W3C as a shill for corporate IP owners. Rumor has it that commercial interests were ready to leave the W3C for the DRM work, rumors made credible by Tim Berners-Lee’s race to the head of the DRM parade.

We are fortunate the Stasi faded from history before the W3C arrived, lest we have Tim Berners-Lee leading a march for worldwide surveillance on the web.

The only value being advanced by the Director (Tim Berners-Lee) is the relevance of the W3C for the web. Consumers aren’t just expendable, but irrelevant. Best you know that now rather than later.

2. DRM Creates “unauditable attack-surface” (for vendors too)

Cory lists the “unauditable attack surface” for browsers like it was a bad thing. That’s true for consumers, but who else is that true for?

Oh, yes, IP owners who plan on profiting from DRM. Their DRM efforts will be easy to circumvent, the digital equivalent of an erasable marker no doubt, while offering attackers access to their systems.

Take the recent Equifax breach as an example. What is the one mission critical requirement for Equifax customers?

Easy and reliable access. You could have any number of enhanced authentication schemes for access to Equifax, but that conflicts with the mission-critical need for customers to have ready access to its data.

Content vendors dumb enough to invest in W3C DRM, which will be easy to circumvent, have a similar mission critical requirement. Easy and reliable approval. Quite often as the result of a purchase at any number of web locations.

So we have N vendor sites, selling N products, for N IP owners, to N users, using N browsers, from N countries, err, can you say: “DRM opens truck-sized security holes?”

I feel sorry for web consumers but not for any vendor that enriches DRM vendors (the only people who make money off of DRM).

3. DRM Promotes Piracy and Disrespect for IP

Without copyright and DRM, there would be few opportunities for digital piracy and little disrespect for intellectual property (IP). People can and do photocopy individual journal articles, violating the author’s and possibly the journal’s IP, but who cares? Fewer than twenty (20) people are likely ever to read it.

Widespread and browser-based DRM will be found on the most popular content, creating incentives for large numbers of users to engage in digital piracy. The more often they use pirated content, the less respect they will have for the laws that create the crime.

To paraphrase Princess Leia speaking to Governor Tarkin:

The more the DRM crowd tightens its grip, the more content that will slip through their fingers.

The W3C/Tim Berners-Lee handed IP owners the death star, but the similarity for DRM doesn’t stop there. No indeed.

Conclusion

Flying its true colors, the W3C/Tim Berners-Lee should be abandoned en masse by corporate sponsors and individuals alike. The scales have dropped from web users’ eyes and it’s clear they are commodities in the eyes of the W3C. Victims, if you prefer that term.

The laughable thought of effective DRM will create cybersecurity consequences for both web users and the cretins behind DRM. I don’t see any difficulty in choosing who should suffer the consequences of DRM-based cybersecurity breaches. Do you?

I am untroubled by the loss of respect for IP. That’s not surprising since I advocate only attribution and sale for commercial gain as IP rights. There’s no point in pursuing people who are spending their money to distribute your product for free. It’s cost free advertising.

As Cory points out, the DRM crowd was offered several unmerited compromises and rejected those.

Having made their choice, let’s make sure none of them escape the W3C/DRM death star.

September 17, 2017

Tax Phishing

Filed under: Cybersecurity,Government,Phishing for Leaks,Security — Patrick Durusau @ 7:57 pm

The standard security mantra is to avoid phishing emails.

That assumes your employer’s security interests coincide with your own. Yes?

If you are being sexually harassed at work, were passed over for a promotion, your boss has found a younger “friend” to mentor, etc., there are unlimited reasons for taking a differing view of your employer’s cybersecurity.

The cybersecurity training that enables you to recognize and avoid a phishing email, also enables you to recognize and accept a phishing email from “digital Somali pirates” (HT, Dilbert).

Acceptance of phishing emails in tax practices could result in recovery of tax returns for public officials (Trump?), financial documents similar to those in the Panama Papers, and other data (Google’s salary data?).

If you don’t know how to recognize phishing emails in the tax business, Jeff Simpson has adapted tips from the IRS in: 10 tips for tax pros to avoid phishing scams.

Just quickly (see Simpson’s post for the details):

  1. Spear itself.
  2. Hostile takeovers.
  3. Day at the breach.
  4. Ransom devil.
  5. Remote control.
  6. BEC to the wall.
  7. EFIN headache.
  8. Protect clients.
  9. Priority No. 1. (Are you the “…least informed employee…”?)
  10. Speak up.
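
To make tip #1 concrete, the recognition skills Simpson describes boil down to simple checks on sender and content. A toy sketch, with made-up lures and domains (real filters and real training cover far more):

```python
import re

# Hypothetical indicators; real tax-practice phishing uses more varied lures.
URGENT_LURES = ("verify your efin", "irs e-services",
                "account suspended", "wire transfer")

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Count simple red flags in an email. Higher = more suspicious."""
    score = 0
    # Display name claims IRS but the address domain does not match.
    if "irs" in sender.lower() and not sender.lower().endswith("@irs.gov"):
        score += 2
    # Urgency / credential lures in the subject or body.
    text = f"{subject} {body}".lower()
    score += sum(1 for lure in URGENT_LURES if lure in text)
    # URLs abusing the userinfo trick, e.g. http://irs.gov@evil.example
    if re.search(r"https?://[^\s]*@", body):
        score += 2
    return score

print(phishing_score("IRS Support <help@irs-secure.example>",
                     "Account suspended",
                     "Click http://irs.gov@evil.example to verify your EFIN"))
# prints 6
```

The same checklist, of course, tells a disgruntled employee exactly which emails to “fail” to recognize.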

Popular terminology for phishing attacks varies by industry so the terminology for your area may differ from Simpson’s.

Acceptance of phishing emails may be the industrial action tool of the 21st century.

Thoughts?

September 16, 2017

Red Scare II (2016 – …) – Hacker Opportunities

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:47 pm

I’m not old enough to remember the Red Scare of the 1950s, but it was a time when accusations, rumors actually, were enough to destroy careers and lives. Guilt was assumed and irrefutable.

The same tactics are being used against Kaspersky Lab today. I won’t dignify those reports with citations, but they share one trait: none of them cite facts or evidence, only the desired conclusion, that Kaspersky Lab is suspect.

Neil J. Rubenking routs Kaspersky Lab critics with expert opinions and facts in: Should You Believe the Rumors About Kaspersky Lab?.

From the post:

If you accuse me of stealing your new car, I have a lot of options to prove my innocence. I was out of the country at the time of the alleged theft. I don’t have the car. Security cameras show it’s sitting in a garage. And so on.

But if you accuse me of hacking in and stealing the design documents for your new car, things get dicey, especially if you start a whispering campaign. Neil sometimes consorts with known hackers (true). Neil regularly meets with representatives of foreign companies (true). Neil maintains a collection of all kinds of malware, including ransomware and data-stealing Trojans (true). Neil has the programming skills to pull off this hack (I wish!).

After a while the original accusation doesn’t even matter; you’ve successfully damaged my reputation. And that’s exactly what seems to be happening with antivirus maker Kaspersky Lab.

You can find any number of news articles suggesting improper activities by Kaspersky Lab. The US government removed Kaspersky from its list of approved programs and, more recently, added it to a list of banned programs. Best Buy dropped Kaspersky products from its stores. Kaspersky has hired security experts who previously worked for the Russian government. Kaspersky is a Russian company, darn it!

The list goes on, but what’s impressively absent is any factual evidence of security-related misbehavior. To get a handle on this situation, I asked for thoughts from security experts I know, both in the US and around the world.

A moment of disclosure, first. While I wouldn’t say I know him well, I have certainly met Eugene Kaspersky and been impressed by his knowledge. I follow him on Twitter, and he follows me. I’ve even ridden a tour boat with Eugene (and others) into McCovey Cove during a Giants game. Go Giants!

It’s a great post and one you should forward to Kaspersky critics, repeatedly.

As Rubenking mentions in his post, the Department of Homeland Security (sic) has already acted: US government bans agencies from using Kaspersky software over spying fears:


On Wednesday, the Department of Homeland Security (DHS) issued a directive, first reported by the Washington Post, calling on departments and agencies to identify any use of Kaspersky antivirus software and develop plans to remove them and replace them with alternatives within the next three months.

Which sets a deadline of December 12, 2017 for federal agencies to abandon Kaspersky software.

That’s not a serious/realistic date, but moving from known and poorly used software (Kaspersky) to unknown and poorly used software (its replacement) can’t help but create opportunities for hackers.

The United States federal government may be the first government to become completely transparent in fact, if not by intent.

Enjoy!

September 10, 2017

“Should We Talk About Security Holes? An Old View”

Filed under: Cybersecurity,Malware,Security — Patrick Durusau @ 7:15 pm

Michael Sikorski, @mikesiko, tweeted a quote forwarded by @SteveBellovin in a discussion about open sharing and discussion of malware.

The quote was an image and didn’t reduce well for display. I located the source of the quote and quote the text below.

Rudimentary Treatise on the Construction of Door Locks: For Commercial and Domestic Purposes : with Mr. Smyth’s Letter on the Bramah Locks by J. Weale (by the book’s pagination, starting on page 2 and ending on page 4).


A commercial, and in some respects a social, doubt has been started within the last year or two, whether or not it is right to discuss so openly the security or insecurity of locks. Many well-meaning persons suppose that discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by shewing others how to be dishonest. This is a fallacy. Rogues are very keen in their profession, and know already much more than we can teach them respecting their several kinds of roguery. Rogues knew a good deal about lock-picking long before locksmiths discussed it among themselves, as they have lately done. If a lock—let it have been made in whatever country, or by whatever maker—is not so inviolate as it has hitherto been deemed to be, surely it is in the interest of honest persons to know this fact, because the dishonest are tolerably certain to be the first to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance. It cannot be too earnestly urged, that an acquaintance with real facts will, in the end, be better for all parties. Some time ago, when the reading public was alarmed at being told how London milk is adulterated, timid persons deprecated the exposure, on the plea that it would give instructions in the art of adulterating milk; a vain fear—milkmen knew all about it before, whether they practiced it or not; and the exposure only taught purchasers the necessity of a little scrutiny and caution, leaving them to obey this necessity or not, as they pleased. So likewise in respect to bread, sugar, coffee, tea, wine, beer, spirits, vinegar, cheap silks, cheap woollens—all such articles are susceptible of debasement by admixture with cheaper substances—much more good than harm is effected by stating candidly and scientifically the various methods by which debasement has been, or can be produced.
The unscrupulous have the command of much of this kind of knowledge without our aid; and there is moral and commercial justice in placing on their guard those who might possibly suffer therefrom. We employ these stray expressions concerning adulteration, debasement, roguery, and so forth, simply as a mode of illustrating a principle—the advantage of publicity. In respect to lock-making there can scarcely be such a thing as dishonesty of intention: the inventor produces a lock which he honestly thinks will possess such and such qualities; and he declares his belief to the world. If others differ from him in opinion concerning those qualities, it is open for them to say so; and the discussion, truthfully conducted, must lead to public advantage: the discussion stimulates curiosity, and the curiosity stimulates invention. Nothing but a partial and limited view of the question could lead to the opinion that harm can result: if there be harm, it will be much more than counterbalanced by good.

More to follow but here’s a question to ponder:

Can you name one benefit that white hats gain by not sharing vulnerability information?

September 8, 2017

Unpatched Windows Vulnerability – Cost of Closed Source Software

Filed under: Cybersecurity,Microsoft,Open Source,Security — Patrick Durusau @ 3:40 pm

Bug in Windows Kernel Could Prevent Security Software From Identifying Malware by Catalin Cimpanu.

From the post:

Malware developers can abuse a programming error in the Windows kernel to prevent security software from identifying if, and when, malicious modules have been loaded at runtime.

Continue on with Cimpanu for a good overview or catch Windows’ PsSetLoadImageNotifyRoutine Callbacks: the Good, the Bad and the Unclear (Part 1).
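
The bug Cimpanu describes is that the image name handed to PsSetLoadImageNotifyRoutine callbacks can be wrong, so security software keying off that string mis-identifies modules. The kernel-side fix is out of reach for closed source, but the defensive pattern, verify file contents rather than trust a reported name, can be sketched in Python (the names and allowlist here are hypothetical; the real callback lives in a kernel driver):

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Hash the actual bytes on disk; a reported name can lie, bytes cannot."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def module_is_trusted(reported_path: str, known_good: dict) -> bool:
    """Trust a loaded module only if its on-disk hash matches an allowlist
    entry, never on the strength of the reported path alone."""
    path = Path(reported_path)
    if not path.is_file():
        return False  # the reported name points at nothing verifiable
    return sha256_file(path) == known_good.get(path.name)
```

Verifying bytes instead of names is exactly what the broken callback prevents security products from doing reliably at load time.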

Symantec says proactive security includes:

  • Inventory of Authorized and Unauthorized Devices
  • Inventory of Authorized and Unauthorized Software
  • Secure Configurations for Hardware & Software
  • Constant Vulnerability Assessment and Remediation
  • Malware Defense

But since Windows is closed source software, you can’t remedy the vulnerability. Whatever your cyberdefenses, closed source MS Windows leaves you vulnerable.

Eternal (possibly) vulnerability – the cost of closed source software.

It’s hard to think of a better argument for open source software.

Open source software need not be free, just open source so you can fix it if broken.

PS: Open source enables detection of government malware.

September 5, 2017

Chess Captcha (always legal moves?)

Filed under: Games,Security — Patrick Durusau @ 7:00 pm

I saw this on Twitter. What other games would you use for a captcha?

Graham Cluley says chess captchas aren’t hard to defeat in: Chess CAPTCHA – a serious defence against spammers?

But Cluley, like most users, is assuming a chess captcha has a chess legal solution.

What if the solution is an illegal move? Or more than one illegal move?

An illegal move would put the captcha beyond any standard chess program.

Yes?

That would reserve access to those who have been told the solution.
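
To make the point concrete: a captcha-solving bot that validates its candidates against the rules of chess can never even propose an illegal answer, while a human told the answer just types it in. A minimal sketch using knight geometry only (the position and the answer are invented):

```python
def is_legal_knight_move(frm: str, to: str) -> bool:
    """True if a knight could move between two squares, e.g. 'g1' -> 'f3'."""
    d_file = abs(ord(frm[0]) - ord(to[0]))
    d_rank = abs(int(frm[1]) - int(to[1]))
    return sorted((d_file, d_rank)) == [1, 2]

# Hypothetical captcha: "move the knight from g1 to g3" -- an illegal move.
captcha_answer = ("g1", "g3")

def solver_bot(candidates):
    """A chess-aware bot only ever proposes legal moves."""
    return [m for m in candidates if is_legal_knight_move(*m)]

print(solver_bot([captcha_answer]))      # [] -- the bot filters out the answer
print(is_legal_knight_move("g1", "f3"))  # True -- a normal opening move
```

The bot’s chess knowledge is precisely what locks it out.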

DACA: 180 Days to Save 800,000 : Whose Begging Bowl to Choose? (Alternative)

Filed under: Cybersecurity,Government,Politics,Security — Patrick Durusau @ 3:47 pm

Trump administration ending DACA program, which protected 800,000 children of immigrants by Jacob Pramuk | @jacobpramuk.

From the post:

  • President Trump is ending DACA, the Obama-era program that protects hundreds of thousands of “dreamers.”
  • Attorney General Jeff Sessions says there will be a six-month delay in terminating it to give Congress time to act.
  • Sessions says the immigration program was an unlawful overreach by Obama that cannot be defended.

Check out Pramuk’s post if you are interested in Attorney General Sessions’ “reasoning” on this issue. I refuse to repeat it for fear of making anyone who reads it dumber.

Numerous groups have whipped out their begging bowls and more are on the way. All promising opposition, not success, but opposition to ending Deferred Action for Childhood Arrivals (DACA).

Every group has its own expenses, lobbyists, etc., before any of your money goes to persuading Congress to save all 800,000 children of immigrants protected by the DACA.

Why not create:

  • a low-overhead fund
  • separate funds for the House and Senate
  • divided among and contributed to the campaigns* of all representatives and senators who vote for a replacement for DACA within 180 days
  • where the replacement for DACA protects everyone now protected
  • and where the replacement for DACA becomes law (may have to override a veto)

*The contribution to a campaign, as opposed to the senator or representative themselves, is important as it avoids the contributions being a “gratuity” for passage of the legislation, which is illegal. 2041. Bribery Of Public Officials.

Such funds would avoid the overhead of ongoing organizations and enable donors to see the results of their donations more directly.

I’m not qualified to setup such funds but would contribute to both.

You?

PS: You do the math. If some wealthy donor contributed $6 million to the Senate fund, then sixty (60) senatorial campaigns would each get $100,000 in cash. Nothing to sneeze at.
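
Checking the arithmetic in code (the donor figure and campaign count are the post’s hypothetical):

```python
def per_campaign_share(fund_dollars: int, campaigns: int) -> int:
    """Split a fund evenly across the campaigns that earned it."""
    return fund_dollars // campaigns

print(per_campaign_share(6_000_000, 60))  # 100000: $100,000 per campaign
```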

September 3, 2017

Charity Based CyberSecurity For Mercenaries?

Filed under: Cybersecurity,Government,Protests,Security — Patrick Durusau @ 4:54 pm

That was my question when I read: Insecure: How A Private Military Contractor’s Hiring Files Leaked by Dan O’Sullivan.

The UpGuard Cyber Risk Team can now disclose that a publicly accessible cloud-based data repository of resumes and applications for employment submitted for positions with TigerSwan, a North Carolina-based private security firm, were exposed to the public internet, revealing the sensitive personal details of thousands of job applicants, including hundreds claiming “Top Secret” US government security clearances. TigerSwan has recently told UpGuard that the resumes were left unsecured by a recruiting vendor that TigerSwan terminated in February 2017. If that vendor was responsible for storing the resumes on an unsecured cloud repository, the incident again underscores the importance of qualifying the security practices of vendors who are handling sensitive information.

The exposed documents belong almost exclusively to US military veterans, providing a high level of detail about their past duties, including elite or sensitive defense and intelligence roles. They include information typically found on resumes, such as applicants’ home addresses, phone numbers, work history, and email addresses. Many, however, also list more sensitive information, such as security clearances, driver’s license numbers, passport numbers and at least partial Social Security numbers. Most troubling is the presence of resumes from Iraqi and Afghan nationals who cooperated with US forces, contractors, and government agencies in their home countries, and who may be endangered by the disclosure of their personal details.

While the process errors and vendor practices that result in such cloud exposures are all too common in the digital landscape of 2017, the month-long period during which the files remained unsecured after UpGuard’s Cyber Risk Team notified TigerSwan is troubling.

Amazing story isn’t it? Even more amazing is that UpGuard sat on the data for a month, waiting for TigerSwan to secure it. Not to mention UpGuard not publicly posting the data upon discovery.

In case you don’t recognize “TigerSwan,” let me refresh your memory:

UpGuard found 9,402 resumes from applicants seeking employment with TigerSwan/Blackwater-type employers.

Did they expose these resumes to the public?

Did they expose these resumes to the press?

Did they expose these resumes to prosecutors?

None of the above.

UpGuard spends a month trying to keep the data hidden from the public, the press and potential prosecutors!

Unpaid charity work so far as I know.

Thousands of mercenaries benefit from this charity work by UpGuard. Their kind can continue to violate the rights of protesters, murder civilians, etc., all the while being watched over by UpGuard. For free.

Would you shield torturers and murderers from their past or future victims?

Don’t be UpGuard, choose no.

September 2, 2017

Sharing Mis-leading Protest Data – Raspberry Pi PirateBox

Filed under: Cybersecurity,Protests,Security — Patrick Durusau @ 3:51 pm

Police surveillance of cellphone and Wi-Fi access points is standard procedure at all protests.

The Raspberry Pi PirateBox enables protesters to re-purpose that surveillance to share mis-leading data with police officers, anonymously.

Using prior protests as a model, even re-using audio/video footage, create “fake” reports and imagery for posting to your “My Little Protest News Site.” (Pick a less obvious name.)

With any luck, news media reps will be picking up stories from your news site, which will increase the legitimacy of your “fake” reports. Not to mention adding to the general confusion.

Mix in true but too-late-to-be-useful news and even some truthful, before-the-fact calls for movement so your reports are deemed mostly credible.

Predicting flare gun attacks on reserve formations, only moments before they happen, will go a long way toward earning your site credibility for its next prediction of an uptick in action.

The legality of distributing fake reports and use of flare guns at protests varies from jurisdiction to jurisdiction. Always consult with legal counsel about such conduct.

September 1, 2017

US Labor Day (sic) Security Reading

Filed under: Cybersecurity,Government,Privacy,Security — Patrick Durusau @ 9:16 pm

I know, for the US to have a “labor day” holiday is a jest too cruel for laughter.

But, many people will have a long weekend, starting tomorrow, so suggested reading is in order.

Surveillance Self-Defense, a project of the EFF, has security “playlists” for:

Academic researcher? Learn the best ways to minimize harm in the conduct of your research.

Activist or protester? How to keep you and your communications safe wherever your campaigning takes you.

Human rights defender? Recipes for organizations who need to keep safe from government eavesdroppers.

Journalism student? Lessons in security they might not teach at your j-school.

Journalist on the move? How to stay safe online anywhere without sacrificing access to information.

LGBTQ Youth? Tips and tools to help you more safely access LGBTQ resources, navigate social networks, and avoid snoopers.

Mac user? Tips and tools to help you protect your data and communications.

Online security veteran? Advanced guides to enhance your surveillance self-defense skill set.

Want a security starter pack? Start from the beginning with a selection of simple steps.

Have a great weekend!

August 31, 2017

Secure Data Deletion on Windows (Or Not)

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:03 pm

How to: Delete Your Data Securely on Windows

From the post:

Most of us think that a file on our computer is deleted once we put the file in our computer’s trash folder and empty the trash; in reality, deleting the file does not completely erase it. When one does this, the computer just makes the file invisible to the user and marks the part of the disk that the file was stored on as “available”—meaning that your operating system can now write over the file with new data. Therefore, it may be weeks, months, or even years before that file is overwritten with a new one. Until this happens, that “deleted” file is still on your disk; it’s just invisible to normal operations. And with a little work and the right tools (such as “undelete” software or forensic methods), you can even still retrieve the “deleted” file. The bottom line is that computers normally don’t “delete” files; they just allow the space those files take up to be overwritten by something else some time in the future.

The best way to delete a file forever, then, is to make sure it gets overwritten immediately, in a way that makes it difficult to retrieve what used to be written there. Your operating system probably already has software that can do this for you—software that can overwrite all of the “empty” space on your disk with gibberish and thereby protect the confidentiality of deleted data.

Note that securely deleting data from solid state drives (SSDs), USB flash drives, and SD cards is very hard! The instructions below apply only to traditional disk drives, and not to SSDs, which are becoming standard in modern laptops, USB keys/USB thumb drives, or SD cards/flash memory cards.

This is because these types of drives use a technique called wear leveling. (You can read more about why this causes problems for secure deletion here.)

If you’re using an SSD or a USB flash drive, you can jump to the section below.

On Windows, we currently suggest using BleachBit. BleachBit is a free/open source secure deletion tool for Windows and Linux, and is much more sophisticated than the built-in Cipher.exe.

BleachBit can be used to quickly and easily target individual files for secure deletion, or to implement periodic secure deletion policies. It is also possible to write custom file deletion instructions. Please check the documentation for further information.

The EFF’s reminder:


Time required: 10 minutes to several hours (depending on size of files/disks to be securely deleted)

is reassurance that most drives retired from government and industry may be loaded with goodies.

If in doubt, share this EFF resource with office level decision makers. It’s almost certain they will not tax their users with secure data deletion duties.
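The overwrite idea is simple enough to sketch. Below is a toy single-pass version, assuming a traditional spinning disk; it is no substitute for BleachBit, and journaling filesystems or copies elsewhere can still preserve the data:

```python
import os
import secrets

CHUNK = 4096

def overwrite_and_delete(path, passes=1):
    """Overwrite a file in place with random bytes, then remove it.

    Only meaningful on traditional disk drives: wear leveling on SSDs,
    USB flash drives, and SD cards can leave the original blocks
    untouched no matter how many passes you make.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                step = min(CHUNK, remaining)
                f.write(secrets.token_bytes(step))
                remaining -= step
            f.flush()
            os.fsync(f.fileno())  # force the overwrite out of the OS cache
    os.remove(path)
```

Purpose-built tools do considerably more (multiple passes, free-space wiping, filename scrubbing), which is why the EFF points you at BleachBit rather than a script.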

Monitoring Malware Sinkhole Traffic

Filed under: Cybersecurity,Malware,Security — Patrick Durusau @ 5:01 pm

Consolidated Malware Sinkhole List by Lesley Carhart, Full Spectrum Cyber-Warrior Princess.

From the post:

A common practice of researchers studying a piece of malware is to seize control of its malicious command and control domains, then redirect traffic to them to benign research servers for analysis and victim notification. I always highly recommend monitoring for traffic to these sinkholes – it is frequently indicative of infection.

I’ve found no comprehensive public list of these sinkholes. There have been some previous efforts to compile a list, for instance by reverse engineering Emerging Threats Signatures (mikesxrs – I hope this answers your questions, a little late!). Some sinkholes are documented on the vendors’ sites, while others are clearly labeled in whois data, but undocumented. Still others are only detectable through behavior and hearsay.

Below, I share my personal list of publicly-noted sinkholes only. Please understand that with few exceptions I have not received any of this information from the vendors or organizations mentioned. It is possible there is some misattribution, and addresses in use do change over time. This is merely intended as a helpful aid for threat hunting, and there are no guarantees whatsoever.

An incomplete malware sinkhole list by her own admission but an interesting starting point for data collection/analysis.

When I read Carhart’s:

I always highly recommend monitoring for traffic to these sinkholes – it is frequently indicative of infection.

I had to wonder, at what level will you be monitoring traffic “…to these sinkholes?”

Sysadmins monitor their own networks, but traffic monitoring at higher levels is possible as well.

Above-network-level traffic monitoring for sinkholes would give a broader picture of possible “infections.”

Upon discovery, a system already infected by one type of malware, may be found to be vulnerable to other malware with a similar attack vector.

It certainly narrows the hunt for vulnerable systems.
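Checking your own flow logs against such a list takes only a few lines. A hedged sketch; the sinkhole addresses (RFC 5737 documentation ranges) and the “src_ip dst_ip” log format here are placeholders, not entries from Carhart’s list:

```python
# Placeholder sinkhole addresses; substitute a real list such as
# the one Carhart maintains.
SINKHOLE_IPS = {"192.0.2.10", "198.51.100.7"}

def flag_sinkhole_traffic(log_lines):
    """Yield (src, dst) pairs where the destination is a known sinkhole.

    Assumes one flow per line as 'src_ip dst_ip'; adapt the parsing
    to your own NetFlow/Zeek/firewall export format.
    """
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in SINKHOLE_IPS:
            yield parts[0], parts[1]
```

Any internal host that shows up as a source here is worth a closer look, since researchers redirected that command-and-control traffic precisely because infected machines phone home to it.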

If you don’t already, follow Lesley Carhart, @hacks4pancakes, or visit her blog, tisiphone.net.

FCC Supports Malware Distribution!

Filed under: Cybersecurity,Journalism,News,Security — Patrick Durusau @ 9:50 am

Well, not intentionally.

FCC “apology” shows anything can be posted to agency site using insecure API by Sean Gallagher

Gallagher reports that with an API key (use gmail account) you can post malicious Word documents to the FCC site.

Not formal support for malware distribution, but the next best thing.

The FCC has been given notice so this is probably a time limited opportunity.

Don’t despair!

Knowing what to look for, you can begin scanning other government websites for a similar weakness.

Journalist tip: As APIs with this weakness are uncovered, trace them back to the contractors who built them. Then run forward to see who the contractors are afflicting now.

August 29, 2017

Inspiring Female Hackers – Kronos Malware

Filed under: Cybersecurity,Malware,Security — Patrick Durusau @ 7:00 pm

Hasherezade authored a two part series:

Inside the Kronos malware – part 1

Inside the Kronos malware – part 2,

an in depth examination of the Kronos Malware.

It’s heavy sledding but is one example of current work being done by a female hacker. If it seems alien now, return to it after you learn some hacking skills to be properly impressed.

BTW, Hasherezade has a blog at: hasherezade’s 1001 nights

PS: There’s a lot of talk about white-hats and black-hats in the cybersecurity community.

My question would be: “What color hat are you paying me to wear? Otherwise, it’s really none of your concern.”

August 28, 2017

Drop-n-Retrieve Honeypots, Portals, Deception

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:14 pm

A low-cost drop-n-retrieve WiFi device, suitable for use in public, private, commercial and governmental locations.

YouTube has a series of videos on WiNX under the playlist Hacker Arsenal.

You don’t want to search using “WiNX” at YouTube. The most popular results are for Winx Club. Not related.

What Being a Female Hacker Is Really Like

Filed under: Cybersecurity,Malware,Security — Patrick Durusau @ 3:55 pm

What Being a Female Hacker Is Really Like by Amanda Rousseau.

I never imagined citing a TeenVogue post on my blog but this one is a must read!

Amanda Rousseau is a white-hat malware expert and co-founder of the blog, VanitySec.

I won’t attempt to summarize her four (4) reasons why women should consider careers as hackers, thinking you need to read the post in full, not my highlights.

Looking forward to more hacker oriented posts in TeenVogue and off now to see what’s up at VanitySec. (Today’s top post: Fall Bags to Conceal Your RFID Reader. Try finding that at your tech feed.)

Hacking For Government Transparency

Filed under: Cybersecurity,Government,Journalism,News,Reporting,Security — Patrick Durusau @ 3:37 pm

The 2017 U.S. State and Federal Government Cybersecurity Report by SecurityScorecard lacks details of specific vulnerabilities for identified government units, but paints an encouraging picture for hackers seeking government transparency.

Coverage of the report:


In August 2017, SecurityScorecard leveraged its proprietary platform to analyze and grade the current security postures of 552 local, state, and federal government organizations, each with more than 100 public-facing IP addresses, to determine the strongest and weakest security standards based on security hygiene and security reaction time compared to their peers.

Security Rankings by Industry

Out of eighteen (18) ranked industries, best to worst security, government comes in at a tempting number sixteen (16).

Financial services, with the fifth (5th) best security, is routinely breached, making it curious that the government (#16) has any secrets at all.

Why Any Government Has Secrets

Possible reasons any government has secrets:

  • 1. Lack of interest?
  • 2. Lack of effort by the news media?
  • 3. Habituation to press conferences?
  • 4. Habituation to “leaks?”
  • N. Cybersecurity?

You can wait for governments to embarrass themselves (FOIA and its equivalents), wait for leakers to take a risk for your benefit, or, you could take the initiative in obtaining government secrets.

The SecurityScorecard report makes it clear the odds are in your favor. Your call.

August 25, 2017

Good News For Transparency Phishers

Filed under: Cybersecurity,Government,Phishing for Leaks,Security,Transparency — Patrick Durusau @ 4:45 pm

If you are a transparency phisher, Shaun Waterman has encouraging news for you in: Most large companies don’t use standard email security to combat spoofing.

From the post:

Only a third of Fortune 500 companies deploy DMARC, a widely-backed best-practice security measure to defeat spoofing — forged emails sent by hackers — and fewer than one-in-10 switch it on, according to a new survey.

The survey, carried out by email security company Agari via an exhaustive search of public Internet records, measured the use of Domain-based Message Authentication, Reporting and Conformance, or DMARC.

“It is unconscionable that only eight percent of the Fortune 500, and even fewer [U.S.] government organizations, are protecting the public against email domain spoofing,” said Patrick Peterson, founder and executive chairman, Agari. A similar survey of federal government agencies earlier this month, by the Global Cyber Alliance, found fewer than five percent of federal domains were protected by switched-on DMARC.

The Agari survey found adoption rates similarly low among companies in the United Kingdom’s FTSE and Australia’s ASX 100.

DMARC is the industry standard measure to prevent hackers from spoofing emails — making their messages appear as if they’re sent by someone else. Spoofing is the basis of phishing, a major form of both cybercrime and cyber-espionage, in which an email appearing to come from a trusted company like a bank or government agency contains malicious links, directing readers to a fake site which will steal their login and password when they sign on.

Only eight (8) percent of the Fortune 500 and less than five (5) percent of federal (US) domains have DMARC protection.

I expect DMARC protection rates fall rapidly outside the Fortune 500 and federal (US) government domains.

If you are interested in transparency, for private companies or government agencies, the lack of DMARC adoption and use presents a golden opportunity to obtain otherwise hidden information.
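For context, a DMARC record is just a TXT record at _dmarc.&lt;domain&gt; made of key=value tags, and “switched on” means the p= tag is quarantine or reject rather than none. A small parser sketch (the record shown is a made-up example, not any real domain’s policy):

```python
def parse_dmarc(txt_record):
    """Parse a DMARC TXT record into a dict of its tags.

    A policy of 'none' (or no record at all) means spoofed mail claiming
    to be from the domain is neither quarantined nor rejected -- the
    'switched off' state the Agari survey describes.
    """
    tags = {}
    for part in txt_record.split(";"):
        if "=" in part:
            key, _, value = part.strip().partition("=")
            tags[key] = value
    return tags

# Published but effectively switched off:
record = parse_dmarc("v=DMARC1; p=none; rua=mailto:reports@example.com")
# record == {'v': 'DMARC1', 'p': 'none', 'rua': 'mailto:reports@example.com'}
```

The actual lookup step (querying the _dmarc TXT record over DNS) needs a resolver library or a shell call to dig, omitted here.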

As always, who you are and who you are working for, determines the legality of any phishing effort. Consult with an attorney concerning your legal rights and obligations.

FBI As Unpaid Cybersecurity Ad Agency

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 3:51 pm

Despite its spotty record on cybersecurity expertise, the FBI is promoting competitors of Kaspersky Lab.

Patrick O’Neill‘s account of the FBI’s efforts, FBI pushes private sector to cut ties with Kaspersky:


In the briefings, FBI officials give companies a high-level overview of the threat assessment, including what the U.S. intelligence community says are Kaspersky’s deep and active relationships with Russian intelligence. FBI officials point to multiple specific accusations of wrongdoing by Kaspersky, such as a well-known instance of allegedly faking malware.

In a statement to CyberScoop, a Kaspersky spokesperson blamed those particular accusations on “disgruntled, former company employees, whose accusations are meritless” while FBI officials say, in private and away from public scrutiny, they know the incident took place and was blessed by the company’s leadership.

The FBI’s briefings have seen mixed results. Companies that utilize ICS and SCADA systems have been relatively cooperative, one government official told CyberScoop, due in large part to what’s described as exceptional sense of urgency that dwarfs most other industries. Several of these companies have quietly moved forward on the FBI’s recommendations against Kaspersky by, for example, signing deals with Kaspersky competitors.

The firms the FBI have briefed include those that deal with nuclear power, a predictable target given the way the electric grid is increasingly at the center of catastrophic cybersecurity concerns.

The traditional tech giants have been less receptive and cooperative to the FBI’s pitch.

leaves the impression Kaspersky competitors are not compensating the FBI for the additional business.

That’s just wrong! If the FBI drives business to vendors, the public merits a cut of those contracts for services rendered. Members of Congress pushing for the exclusion of Kaspersky are no doubt being compensated but that doesn’t benefit the general public.

The only known validation of the FBI’s nationalistic fantasy is the relationship between the US government and US software vendors (Microsoft says it’s already patched flaws exposed in leak of NSA hacks). What motive does the NSA have to withhold flaws from US vendors other than to use them against other nations?

Expecting other governments to act like the US government, and other software vendors to be as spineless as US vendors, makes the FBI’s Kaspersky fantasy consistent with its paranoia. Consistency, however, isn’t the same as a factual basis.

Free tip for Kaspersky Lab: Starting with your competitors and likely competitors, track their campaign contributions, contacts with the U.S. government, news placements, etc. No small task as acceptance of the FBI’s paranoid delusions didn’t happen overnight. Convictions of incautious individuals for suborning the government for commercial gain would go a long way to countering that tale.

Air Gapping USB Sticks For Journalists (Or Not! For Others)

Filed under: Cybersecurity,Ethics,Malware,Security — Patrick Durusau @ 12:34 pm

CIRCLean – USB key sanitizer

Journalists are likely to get USB sticks from unknown and/or untrustworthy sources. CIRCLean copies potentially dangerous files from an untrustworthy USB stick, converts those files to a safe format and saves them to your trusted USB stick. (Think of it as not sticking a potentially infected USB into your computer.)

Visual instructions on using CIRCLean:

Written instructions based on those for CIRCLean, without illustrations:

  1. Unplug the device.
  2. Plug the untrusted USB stick into the top usb slot.
  3. Plug your own, trusted USB stick into the bottom usb slot.
  4. Note: Make sure your USB stick is bigger than the untrusted one. The extracted documents are sometimes bigger than the original ones.

  5. Connect the power to the device.
  6. If your device has a diode, wait until the blinking stops.
  7. Otherwise, plug in a headset and listen to the music that is played during the conversion. When the music stops, the conversion is finished.

  8. Unplug the device and remove the USB keys

Label all untrusted USB sticks. “Untrusted” means it has an origin other than you. Unicode U+2620, “skull and crossbones,” works: ☠.

It’s really that easy!
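The conversion idea, copy nothing that can run when double-clicked, is easy to sketch. This toy version only renames active-content files, where real CIRCLean converts office documents to flattened formats; the extension list is illustrative, not CIRCLean’s:

```python
import shutil
from pathlib import Path

# Illustrative active-content extensions; not CIRCLean's actual list.
ACTIVE = {".exe", ".scr", ".js", ".vbs", ".docm"}

def sanitize(untrusted_dir, trusted_dir):
    """Copy files from an untrusted mount to a trusted one, renaming
    anything with an active-content extension so it cannot run by
    double-click. A toy model of CIRCLean's approach.
    """
    src, dst = Path(untrusted_dir), Path(trusted_dir)
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            if f.suffix.lower() in ACTIVE:
                target = target.with_name(target.name + ".DANGEROUS")
            shutil.copyfile(f, target)
```

The point of the Raspberry Pi build is that even this copying never happens on your own machine; the untrusted stick only ever touches the disposable device.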

On The Flip Side

Modifying the CIRCLean source to maintain its present capabilities but adding your malware to the “trusted” USB stick offers a number of exciting possibilities.

Security is all the rage in the banking industry, making a Raspberry Pi (with diode), an attractive case, and your USB malware great banking convention swag.

Listing of banking conferences are maintained by the American Bankers Association, the European Banking Association, and Asian Banking & Finance, to name just a few.

A low-cost alternative to a USB cleaning/malware installing Raspberry Pi would be to use infected USB sticks as swag. “Front Office Staff: After Hours” or some similar title. If that sounds sexist, it is, but traps use bait based on their target’s proclivities, not yours.

PS: Ethics/legality:

The ethics of spreading malware to infrastructures based on a “white, cisheteropatriarchal*” point of view, I leave for others to discuss.

The legality of spreading malware depends on who’s doing the spreading and who’s being harmed. Check with legal counsel.

* A phrase I stole from: Women’s Suffrage Leaders Left Out Black Women. A great read.

August 10, 2017

DNA Injection Attack (Shellcode in Data)

Filed under: Bioinformatics,DNA,Security — Patrick Durusau @ 8:36 pm

BioHackers Encoded Malware in a String of DNA by Andy Greenberg.

From the post:

WHEN BIOLOGISTS SYNTHESIZE DNA, they take pains not to create or spread a dangerous stretch of genetic code that could be used to create a toxin or, worse, an infectious disease. But one group of biohackers has demonstrated how DNA can carry a less expected threat—one designed to infect not humans nor animals but computers.

In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer. While that attack is far from practical for any real spy or criminal, it’s one the researchers argue could become more likely over time, as DNA sequencing becomes more commonplace, powerful, and performed by third-party services on sensitive computer systems. And, perhaps more to the point for the cybersecurity community, it also represents an impressive, sci-fi feat of sheer hacker ingenuity.

“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment. “That means when you’re looking at the security of computational biology systems, you’re not only thinking about the network connectivity and the USB drive and the user at the keyboard but also the information stored in the DNA they’re sequencing. It’s about considering a different class of threat.”

Very high marks for imaginative delivery but at its core, this is shellcode in data.

Shellcode in an environment the authors describe as follows:


Our results, and particularly our discovery that bioinformatics software packages do not seem to be written with adversaries in mind, suggest that the bioinformatics pipeline has to date not received significant adversarial pressure.

(Computer Security, Privacy, and DNA Sequencing: Compromising Computers with Synthesized DNA, Privacy Leaks, and More)

Question: Can you name any data pipelines that have been subjected to adversarial pressure?

The reading of DNA and transposition into machine format reminds me that a data pipeline could ingest apparently non-hostile data and as a result of transformations/processing, produce hostile data at some point in the data stream.

Transformation into shellcode, now that’s a very interesting concept.
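The encoding side is straightforward: four bases, two bits each, so any byte string, shellcode included, can be written out as a synthesizable sequence. A sketch, using an arbitrary base-to-bits mapping rather than whatever encoding the UW team actually used:

```python
# Arbitrary illustrative mapping of two-bit values to bases.
BASE = {0: "A", 1: "C", 2: "G", 3: "T"}
INV = {v: k for k, v in BASE.items()}

def bytes_to_dna(data):
    """Encode arbitrary bytes (e.g., shellcode) as a DNA sequence."""
    return "".join(
        BASE[(byte >> shift) & 0b11]
        for byte in data
        for shift in (6, 4, 2, 0)
    )

def dna_to_bytes(seq):
    """What a sequencing pipeline effectively does: bases back to bytes."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for ch in seq[i:i + 4]:
            byte = (byte << 2) | INV[ch]
        out.append(byte)
    return bytes(out)
```

Once the decoded bytes land in a buffer that downstream software reads without bounds checking, the usual shellcode-in-data story takes over; the novelty is only the physical carrier.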

August 9, 2017

Open Source Safe Cracking Robots

Filed under: Government,Security — Patrick Durusau @ 9:11 am

Live, robotic, safe cracking demo. No pressure, no pressure!

One of the most entertaining and informative presentations you are likely to see this year! It includes an opening tip for those common digital safes found in hotel rooms.

From the description:

We’ve built a $200 open source robot that cracks combination safes using a mixture of measuring techniques and set testing to reduce crack times to under an hour. By using a motor with a high count encoder we can take measurements of the internal bits of a combination safe while it remains closed. These measurements expose one of the digits of the combination needed to open a standard fire safe. Additionally, ‘set testing’ is a new method we created to decrease the time between combination attempts. With some 3D printing, Arduino, and some strong magnets we can crack almost any fire safe. Come checkout the live cracking demo during the talk!

Don’t miss their highly informative website, SparkFun Electronics.

Open source, part of the Maker community!

This won’t work against quality safes in highly secure environments, but most government safes are low-bidder/low-quality and sit outside highly secure environments. Use a tool appropriate to the security environment.
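The arithmetic behind the attack is worth spelling out. All figures below are illustrative assumptions (dial positions, tolerance, seconds per attempt), not measurements from the talk:

```python
# A common mechanical safe lock: three wheels on a 100-position dial.
positions = 100
naive = positions ** 3  # 1,000,000 combinations by brute force

# Mechanical tolerance typically lets each wheel be off by about one
# position, so only every third number needs testing (assumed figure).
per_wheel = positions // 3  # 33

# Measuring the closed lock exposes one of the three digits outright,
# leaving only two wheels to search.
remaining = per_wheel ** 2  # 1,089 combinations

seconds_per_try = 3  # assumed average once set testing cuts reset time
hours = remaining * seconds_per_try / 3600
print(f"{naive:,} -> {remaining:,} combinations, ~{hours:.1f} h")
```

Under these assumptions the search collapses from a million combinations to about a thousand, which is how a $200 robot gets into the under-an-hour range.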

August 6, 2017

New spearphishing technique – Phishing for Leaks

Filed under: Cybersecurity,Journalism,News,Phishing for Leaks,Reporting,Security — Patrick Durusau @ 8:30 pm

Timo Steffens tweeted:

New spearphishing technique: Targeted mail contains no links or exploits, but mentions report title. Googling title leads to exploit site.

Good news for wannabe government/industry leakers.

This spearphishing technique avoids questions about your cybersecurity competence in evaluating links in a phishing email.

You did a search relevant to your position/task and Google delivered an exploit site.

Hard to fault you for that!

The success of phishing for leaks depends on journalists who wait to be spoon-fed rather than cultivating leaks.

August 5, 2017

No Fault Leaking (Public Wi-Fi, File Sharing)

Filed under: Cybersecurity,Journalism,News,Reporting,Security — Patrick Durusau @ 10:51 am

Attorney General Sessions and his League of Small Minds (LSM) seek to intimidate potential leakers into silence. Leakers who are responsible for what transparency exists for unfavorable information about current government policies and actions.

FOIA requests can and do uncover unfavorable information about government policies and actions, but far too often after the principals have sought the safety of the grave.

It’s far better to expose and stop ill-considered, even criminal activities in real time, before government adds more blighted lives and deaths to its record.

Traditional leaking involves a leaker, perhaps you, delivering physical or digital copies of data/documents to a reporter. That is, it requires some act on your part (copying, email, snail mail, etc.), which offers the potential to trace the leak back to you.

Have you considered No Fault Leaking? (NFL)

No Fault Leaking requires only a public Wi-Fi and appropriate file sharing permissions on your phone, laptop, tablet.

Public Wi-Fi: Potential Washington, DC based leakers can consult Free Wi-Fi Hotspot Locations in Washington, DC by Rachel Cooper, updated 7/28/2017. Similar listings exist for other locations.

File Sharing Permissions: Even non-techies should be able to follow the screen shots in One mistake people make using public Wi-Fi that lets everyone see their files by Francis Navarro. (Pro tip: Don’t view this article on your device or save a copy there. Memorize the process of turning file sharing on and off.)

After arriving at a Public Wi-Fi location, turn file sharing on. It’s as simple as that. You don’t know who if anyone has copied any files. Before you leave the location, turn file sharing off. (This works best if you have legitimate reasons to have the files in question on your laptop, etc.)

No Fault Leaking changes the role of the media from spoon-fed recipients of data/documents into more active participants in the leaking process.

To that end, ask yourself: Am I a fair weather (no risk) advocate of press freedom or something more?

August 4, 2017

“This culture of leaking must stop.” Taking up Sessions’ Gage

Filed under: Cybersecurity,Government,Government Data,Security — Patrick Durusau @ 4:12 pm

Jeff Sessions, the current (4 August 2017) Attorney General of the United States, wants to improve on Barack Obama‘s legacy as the most secretive presidency of the modern era.

Sessions has announced a tripling of Justice Department probes into leaks and a review of guidelines for subpoenas of members of the news media. Attorney General says Justice Dept. has tripled the number of leak probes. (Media subpoenas are an effort to discover media sources and hence to plug the “leaks.”)

Sessions has thrown down his gage, declaring war on occasional transparency from government leakers. Indirectly, that war will include members of the media as casualties.

Shakespeare penned the best response for taking up Sessions’ gage:

Cry ‘Havoc,’ and let slip the dogs of war;

In case you don’t know the original sense of “Havoc:”

The military order Havoc! was a signal given to the English military forces in the Middle Ages to direct the soldiery (in Shakespeare’s parlance ‘the dogs of war’) to pillage and chaos. Cry havoc and let slip the dogs of war

It’s on all of us to create enough chaos to protect leakers and members of the media who publish their leaks.

Observations – Not Instructions

Data access: Phishing emails succeed 33% of the time. Do they punish would-be leakers who fall for phishing emails?

Exfiltration: Tracing select documents to a leaker is commonplace. How do you trace an entire server disk? The larger and more systematic the data haul, the greater the difficulty in pinning the leak on particular documents. (Back-to-school specials often include multi-terabyte drives.)

Protect the Media: Full drive leaks posted to a torrent or Dark Web server mean media can answer subpoenas with: go to https://some-location. 😉

BTW, full drive leaks provide transparency for the relationship between the leaked data and media reports. Accountability is as important for the media as the government.

One or more of my observations may constitute crimes depending upon your jurisdiction.

Which I guess is why Nathan Hale is recorded as saying:

Gee, that sounds like a crime. You know, I could get arrested, even executed. None for me please!

Not!

Nathan Hale volunteered to be a spy, was caught and executed, having said:

I only regret, that I have but one life to lose for my country.

Question for you:

Are you a ‘dog of war’ making the government bleed data?

PS: As a security measure, don’t write that answer down or tell anyone. When you read about leaks, you can inwardly smile and know you played your part.
