Archive for the ‘Encryption’ Category

Comic Book Security

Wednesday, November 23rd, 2016

The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book Narratives by Mohit Iyyer, et al.


Visual narrative is often a combination of explicit information and judicious omissions, relying on the viewer to supply missing details. In comics, most movements in time and space are hidden in the “gutters” between panels. To follow the story, readers logically connect panels together by inferring unseen actions through a process called “closure”. While computers can now describe the content of natural images, in this paper we examine whether they can understand the closure-driven narratives conveyed by stylized artwork and dialogue in comic book panels. We collect a dataset, COMICS, that consists of over 1.2 million panels (120 GB) paired with automatic textbox transcriptions. An in-depth analysis of COMICS demonstrates that neither text nor image alone can tell a comic book story, so a computer must understand both modalities to keep up with the plot. We introduce three cloze-style tasks that ask models to predict narrative and character-centric aspects of a panel given n preceding panels as context. Various deep neural architectures underperform human baselines on these tasks, suggesting that COMICS contains fundamental challenges for both vision and language.

From the introduction:


Comics are fragmented scenes forged into full-fledged stories by the imagination of their readers. A comics creator can condense anything from a centuries-long intergalactic war to an ordinary family dinner into a single panel. But it is what the creator hides from their pages that makes comics truly interesting, the unspoken conversations and unseen actions that lurk in the spaces (or gutters) between adjacent panels. For example, the dialogue in Figure 1 suggests that between the second and third panels, Gilda commands her snakes to chase after a frightened Michael in some sort of strange cult initiation. Through a process called closure [40], which involves (1) understanding individual panels and (2) making connective inferences across panels, readers form coherent storylines from seemingly disparate panels such as these. In this paper, we study whether computers can do the same by collecting a dataset of comic books (COMICS) and designing several tasks that require closure to solve.

(emphasis in original)

Comic book security: A method for defeating worldwide data slurping and automated analysis.

The authors find that humans easily outperform automated analysis, raising the question of whether a mixture of text and images could serve as a means to evade widespread data sweeps.

Security that relies on a lack of human eyes to review content is chancy, but depending upon your security needs, it may be sufficient.

For example, a cartoon in a local newspaper that designates a mission target and time needs to be secure only from the time of its publication until the mission has finished. That it is discovered days, weeks or even months later doesn’t impact the operational security of the mission.

The data set of cartoons is available at:

Guaranteed algorithmic security is great, but hiding in the gaps of computational ability may be just as effective.


ISIS Turns To Telegram App After Twitter Crackdown [Farce Alert + My Telegram Handle]

Monday, August 29th, 2016

ISIS Turns To Telegram App After Twitter Crackdown

From the post:

With the micro-blogging site Twitter coming down heavily on ISIS-sponsored accounts, the terrorist organisation and its followers are fast joining the heavily-encrypted messaging app Telegram built by a Russian developer.

On Telegram, the ISIS followers are laying out detailed plans to conduct bombing attacks in the west, reported on Monday.

France and Germany have issued statements that they now want a crackdown against them on Telegram.

“Encrypted communications among terrorists constitute a challenge during investigations. Solutions must be found to enable effective investigation… while at the same time protecting the digital privacy of citizens by ensuring the availability of strong encryption,” the statement said.


Oh, did you notice the source? “ reported on Monday.”

If you skip over to that post: IS Followers Flock to Telegram After being Driven from Twitter (I don’t want to shame the author, so I’m omitting their name), it reads in part:

With millions of IS loyalists communicating with one another on Telegram and spreading their message of radical Islam and extremism, France and Germany last week said that they want a continent wide effort to allow for a crackdown on Telegram.

“Encrypted communications among terrorists constitute a challenge during investigations,” France and Germany said in a statement. “Solutions must be found to enable effective investigation… while at the same time protecting the digital privacy of citizens by ensuring the availability of strong encryption.”

On private Telegram channels, IS followers have laid out detailed plans to poison Westerners and conduct bombing attacks, reports say.

What? “…millions of IS loyalists…?” IS has maybe 30,000 active fighters in total. Millions of loyalists? Documentation? A citation of some sort? This being the Voice of America, I’d say they pulled that number out of a dark place.

Meanwhile, while complaining about the strong encryption, they are party to:

detailed plans to poison Westerners and conduct bombing attacks, reports say.

You do know wishing Westerners would choke on their Fritos doesn’t constitute a plan. Yes?

Nor does wishing for an unspecified bomb, to be exploded at some unspecified location at no particular time, constitute planning.

Not to mention that “reports say” is a euphemism for: “…we just made it up.”

Get yourself to Telegram!



They left out my favorite:

Annoy governments seeking to invade a person’s privacy.

Reclaim your privacy today! Telegram!

Caveat: I tried using one device for the SMS to set up my smartphone. Nada, nyet, no joy. Had to use my cellphone number to set up the account on the cellphone. OK, but annoying.

BTW, on Telegram, my handle is @PatrickDurusau.

Yes, my real name. Which excludes this account from anything requiring OpSec. 😉

Germany and France declare War on Encryption to Fight Terrorism

Friday, August 26th, 2016

Germany and France declare War on Encryption to Fight Terrorism by Mohit Kumar.

From the post:

Yet another war on Encryption!

France and Germany are asking the European Union for new laws that would require mobile messaging services to decrypt secure communications on demand and make them available to law enforcement agencies.

French and German interior ministers this week said their governments should be able to access content on encrypted services in order to fight terrorism, the Wall Street Journal reported.
(emphasis in original)

On demand decryption? For what? Rot-13 encryption?
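For reference, ROT-13 just rotates each letter 13 places; Python even ships it as a standard-library codec, which is about the level of "encryption" that decrypts on demand:

```python
import codecs

# ROT-13: rotate each letter 13 places. Applying it twice
# returns the original text, so "decryption on demand" is trivial.
def rot13(text):
    return codecs.encode(text, "rot13")
```

Here `rot13("Attack at dawn")` yields `"Nggnpx ng qnja"`, and running it a second time restores the plaintext. Any scheme where that is possible for outsiders is no scheme at all.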

The Franco-German text transmitted to the European Commission.

The proposal wants to extend current practices of Germany and France with regard to ISPs but doesn’t provide any details about those practices.

In case you have influence with the budget process at the EU, consider pointing out there is no, repeat no evidence that any restriction on encryption will result in better police work combating terrorism.

But then, what government has ever pushed for evidence-based policies?

Law Enforcement Shouldn’t Be Omniscient

Monday, August 1st, 2016

Andy Greenberg’s introduction to the genius behind Signal, Meet Moxie Marlinspike, The Anarchist Bringing Encryption To All Of Us, is a great read.

Just a sample to get you going:

For any cypherpunk with an FBI file, it’s already an interesting morning. At the very moment the Cryptographers’ Panel takes the stage, Apple and the FBI are at the height of a six-week battle, arguing in front of the House Judiciary Commit­tee over the FBI’s demand that Apple help it access an encrypted ­iPhone 5c owned by San Bernardino killer Syed Rizwan Farook. Before that hearing ends, Apple’s general counsel will argue that doing so would set a dangerous legal precedent, inviting foreign govern­ments to make similar demands, and that the crypto-cracking software could be co-opted by criminals or spies.

The standoff quickly becomes the topic of the RSA panel, and Marlinspike waits politely for his turn to speak. Then he makes a far simpler and more radical argument than any advanced by Apple: Perhaps law enforcement shouldn’t be omniscient. “They already have a tremendous amount of information,” he tells the packed ballroom. He points out that the FBI had accessed Farook’s call logs as well as an older phone backup. “What the FBI seems to be saying is that we need this because we might be missing something. Obliquely, they’re asking us to take steps toward a world where that isn’t possible. And I don’t know if that’s the world we want to live in.”

Marlinspike follows this remark with a statement that practically no one else in the privacy community is willing to make in public: that yes, people will use encryption to do illegal things. And that may just be the whole point. “I actually think that law enforcement should be difficult,” Marlinspike says, looking calmly out at the crowd. “And I think it should actually be possible to break the law.”

I don’t find Marlinspike’s:

I think it should actually be possible to break the law.

surprising or shocking.

Nearly everyone in law enforcement and government agrees with Marlinspike; it all depends on whose laws are being broken and for what purpose.

Murder is against the law in North Korea but several governments would applaud anyone who used encryption to arrange slipping a knife between the ribs of Kim Jong-un.

Those same governments and their citizens use encryption to carry on industrial espionage, spying on military research, trade or government negotiations, etc.

I’m happy with non-omniscient law enforcement.

How about you?

Entropy Explained, With Sheep

Thursday, July 28th, 2016

Entropy Explained, With Sheep by Aatish Bhatia.

Entropy is relevant to information theory, encryption, and Shannon’s work, but I mention it here because of the cleverness of the explanation.

Aatish sets a very high bar for taking a difficult concept and creating a compelling explanation that does not involve hand-waving and/or leaps of faith on the part of the reader.

Highly recommended as a model for explanation!
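For readers who want the formula behind the sheep, the quantity Aatish builds intuition for is Shannon entropy, which takes only a few lines to compute (a sketch using the standard definition, in bits per symbol):

```python
import math
from collections import Counter

# Shannon entropy of a string, in bits per symbol:
# H = -sum(p(x) * log2(p(x))) over the symbol frequencies.
def entropy(text):
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in Counter(text).values())
```

A string of one repeated symbol ("aaaa") has entropy 0; a string of two equally likely symbols ("abab") has entropy 1 bit per symbol, which matches the intuition the post develops.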


Safe Sex and Safe Chat

Saturday, July 16th, 2016

Matthew Haeck repeats the old dodge for not bothering with encrypted communications:

If I’m doing nothing wrong, it doesn’t matter

in Secure Messaging Apps for Encrypted Chat.

Most of us, outside of subscribers to the Linux Journal, never imagine that we are under surveillance by government agencies. And we may not be.

But, that doesn’t mean our friends and acquaintances aren’t under surveillance by domestic and foreign governments, corporations and others.

You should think of encrypted communications, chat in this case, just like you do safe sex.

It protects not only you, but also your present partner and all future partners either of you may have.

The same is true for encrypted chat. The immediate benefit is for you and your partner, but secure chat also denies the government and others the use of your chats against unknown future chat partners.

If you practice safe sex, practice safe chat.

Secure Messaging Apps for Encrypted Chat is a great start towards practicing safe chat.

How Secure Are Emoji Ciphers?

Wednesday, June 29th, 2016

You Can Now Turn Messages Into Secret Code Using Emoji by Joon Ian Wong.

From the post:

Emoji are developing into their own language, albeit a sometimes impenetrable one. But they are about to become truly impenetrable. A new app from the Mozilla Foundation lets you use them for encryption.

The free web app, called Codemoji, lets users write a message in plain-text, then select an emoji “key” to mask the letters in that message with a series of emoji. To decrypt a message, the correct key must be entered in the app, turning emoji back into the alphabet.

Caesar ciphers (think letter substitution) are said to be “easy” to solve with modern computers.

Which is true, but the security of an Emoji cipher depends on how long the information must remain secret.
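To make the substitution concrete, here is a toy Codemoji-style cipher of my own devising — the emoji block offset and numeric key are my assumptions, not Mozilla’s actual mapping:

```python
# Hypothetical sketch of a Codemoji-style substitution cipher:
# each plaintext letter maps to an emoji chosen by a numeric "key"
# offset into the emoji codepoint range. NOT Mozilla's actual scheme.
EMOJI_BASE = 0x1F600  # start of the Unicode "emoticons" block

def encode(text, key):
    out = []
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            out.append(chr(EMOJI_BASE + key + ord(ch) - ord('a')))
        else:
            out.append(ch)  # spaces, digits, punctuation pass through
    return ''.join(out)

def decode(emoji_text, key):
    out = []
    for ch in emoji_text:
        cp = ord(ch)
        if EMOJI_BASE + key <= cp < EMOJI_BASE + key + 26:
            out.append(chr(ord('a') + cp - EMOJI_BASE - key))
        else:
            out.append(ch)
    return ''.join(out)
```

Anyone who recovers the key (here, a single small integer) reads the message instantly, which is why the only security such a cipher offers is the time it takes an adversary to bother.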

For example, you discover a smart phone at 11:00 AM (your local) and it has the following message:

Detonate at 12:15 P.M. (your local)

but that message is written in Emoji using the angry face as the key:


That Emoji coded message is as secure as a message encoded with the best the NSA can provide.


Even if you knew what the message said, the detonation time, assuming it is today, is only 75 minutes away. Explosions are public events, and knowing in hindsight that you had captured the timing message but broke the code too late isn’t all that useful.

The “value” of that message being kept secret expires at the same time as the explosion.

In addition to learning more about encryption, use Codemoji as a tool for thinking about your encryption requirements.

Some (conflicting) requirements: Ease of use, resistance to attack (how to keep the secret), volume of use, hardware/software requirements, etc.

Everyone would like a system that is brain-dead easy to use, impervious even to alien-origin quantum computers, scales linearly, and runs on an Apple Watch.

Not even the NSA is rumored to have such a system. Become informed so you can make informed compromises.

Anonymous Chat Service

Tuesday, April 12th, 2016

From the description:

The continued effort of governments around the globe to censor our seven sovereign seas has not gone unnoticed. This is why we, once again, raise our Anonymous battle flags to expose their corruption and disrupt their surveillance operations. We are proud to present our new chat service residing within the remote island coves of the deep dark web. The OnionIRC network is designed to allow for full anonymity and we welcome any and all to use it as a hub for anonymous operations, general free speech use, or any project or group concerned about privacy and security looking to build a strong community. We also intend to strengthen our ranks and arm the current and coming generations of internet activists with education. Our plan is to provide virtual classrooms where, on a scheduled basis, ‘teachers’ can give lessons on any number of subjects. This includes, but is not limited to: security culture, various hacking/technical tutorials, history lessons, and promoting how to properly utilize encryption and anonymity software. As always, we do not wish for anyone to rely on our signal alone. As such, we will also be generating comprehensible documentation and instructions on how to create your own Tor hidden-service chat network in order to keep the movement decentralized. Hackers, activists, artists and internet citizens, join us in a collective effort to defend the internet and our privacy.

Come aboard or walk the plank.

We are Anonymous,
we’ve been expecting you.

Protip: This is not a website, it’s an IRC chat server. You must use an IRC chat client to connect. You cannot connect simply through a browser.

Some popular IRC clients are: irssi, weechat, hexchat, mIRC, & many more…

Here is an example guide for connecting with Hexchat:

To access our IRC network you must be connecting through the Tor network!

Either download the Tor browser or install the Tor daemon, then configure your IRC client’s proxy settings to pass through Tor or ‘torify’ your client depending on your setup.

If you are connecting to Tor with the Tor browser, keep in mind that the Tor browser must be open & running for you to pass your IRC client through Tor.

How you configure your client to pass through Tor will vary depending on the client.
Hostname: onionirchubx5363.onion

Port: 6667 No SSL, but don’t worry! Tor connections to hidden-services are end-to-end encrypted already! Thank you based hidden-service gods!

In the near future we will be releasing some more extensive client-specific guides and how-to properly setup Tor for transparent proxying (…) & best use cases.

This is excellent news!

With more good news promised in the near future (watch the video).

Go dark, go very dark!

When back doors backfire [Uncorrected Tweet From Economist Hits 1.1K Retweets]

Sunday, January 3rd, 2016

When back doors backfire

From the post:


Push back against back doors

Calls for the mandatory inclusion of back doors should therefore be resisted. Their potential use by criminals weakens overall internet security, on which billions of people rely for banking and payments. Their existence also undermines confidence in technology companies and makes it hard for Western governments to criticise authoritarian regimes for interfering with the internet. And their imposition would be futile in any case: high-powered encryption software, with no back doors, is available free online to anyone who wants it.

Rather than weakening everyone’s encryption by exploiting back doors, spies should use other means. The attacks in Paris in November succeeded not because terrorists used computer wizardry, but because information about their activities was not shared. When necessary, the NSA and other agencies can usually worm their way into suspects’ computers or phones. That is harder and slower than using a universal back door—but it is safer for everyone else.

By my count, based on two (2) tweets from The Economist, they are running at 50% correspondence between tweets and actual content.

You may remember my checking their tweet about immigrants yesterday, that got 304 retweets (and was wrong) in Fail at The Economist Gets 304 Retweets!.

Today I saw the When back doors backfire tweet and I followed the link to the post to see if it corresponded to the tweet.

Has anyone else been checking on tweet/story correspondence at The Economist (zine)? The twitter account is: @TheEconomist.

I ask because no correcting tweet has appeared in @TheEconomist tweet feed. I know because I just looked at all of its tweets in chronological order.

Here is the uncorrected tweet:


As of today, the uncorrected tweet on immigrants has 1.1K retweets and 707 likes.

From the Economist article on immigrants:

Refugee resettlement is the least likely route for potential terrorists, says Kathleen Newland at the Migration Policy Institute, a think-tank. Of the 745,000 refugees resettled since September 11th, only two Iraqis in Kentucky have been arrested on terrorist charges, for aiding al-Qaeda in Iraq.

Do retweets and likes matter more than factual accuracy, even as reported in the tweeted article?

Is this a journalism ethics question?

What’s the standard journalism position on retweet-bait tweets?

Is It End-To-End Encrypted?

Tuesday, December 8th, 2015

ZeroDB has kicked off the new question for all networked software:

Is It End-To-End Encrypted?, with a resounding YES!

From ZeroDB, an end-to-end encrypted database, is open source!:

We’re excited to release ZeroDB, an end-to-end encrypted database, to the world. ZeroDB makes it easy to develop applications with strong security and privacy guarantees by enabling applications to query encrypted data.

zerodb repo:
zerodb-server repo:

Now that it’s open source, we want your help to make it better. Try it, build awesome things with it, break it. Then tell us about it.

Today, we’re releasing a Python implementation. A JavaScript client will be following soon.

Questions? Ask us on Slack or Google Groups.

The post was authored by MacLane & Michael and you can find more information at

PS: The question Is It End-To-End Encrypted? is a yes or no question. If anyone gives you an answer other than an unqualified yes, it’s time to move along to the next vendor. Sometimes, under some circumstances, maybe, added feature, can be, etc., are all unacceptable answers.

Just like the question: Does it have any backdoors at all? What purpose the backdoor serves isn’t relevant. That a backdoor exists is also the time at which to move to another vendor.

The answers to both of those questions should be captured in contractual language with stipulated liability in the event of breach and minimal stipulated damages.

I first saw this in Four short links: 8 December 2015 by Nat Torkington.

How is NSA breaking so much crypto?

Thursday, October 15th, 2015

How is NSA breaking so much crypto? by Alex Halderman and Nadia Heninger.

From the post:

There have been rumors for years that the NSA can decrypt a significant fraction of encrypted Internet traffic. In 2012, James Bamford published an article quoting anonymous former NSA officials stating that the agency had achieved a “computing breakthrough” that gave them “the ability to crack current public encryption.” The Snowden documents also hint at some extraordinary capabilities: they show that NSA has built extensive infrastructure to intercept and decrypt VPN traffic and suggest that the agency can decrypt at least some HTTPS and SSH connections on demand.

However, the documents do not explain how these breakthroughs work, and speculation about possible backdoors or broken algorithms has been rampant in the technical community. Yesterday at ACM CCS, one of the leading security research venues, we and twelve coauthors presented a paper that we think solves this technical mystery.

The key is, somewhat ironically, Diffie-Hellman key exchange, an algorithm that we and many others have advocated as a defense against mass surveillance. Diffie-Hellman is a cornerstone of modern cryptography used for VPNs, HTTPS websites, email, and many other protocols. Our paper shows that, through a confluence of number theory and bad implementation choices, many real-world users of Diffie-Hellman are likely vulnerable to state-level attackers.

For the nerds in the audience, here’s what’s wrong: If a client and server are speaking Diffie-Hellman, they first need to agree on a large prime number with a particular form. There seemed to be no reason why everyone couldn’t just use the same prime, and, in fact, many applications tend to use standardized or hard-coded primes. But there was a very important detail that got lost in translation between the mathematicians and the practitioners: an adversary can perform a single enormous computation to “crack” a particular prime, then easily break any individual connection that uses that prime.

How enormous a computation, you ask? Possibly a technical feat on a scale (relative to the state of computing at the time) not seen since the Enigma cryptanalysis during World War II. Even estimating the difficulty is tricky, due to the complexity of the algorithm involved, but our paper gives some conservative estimates. For the most common strength of Diffie-Hellman (1024 bits), it would cost a few hundred million dollars to build a machine, based on special purpose hardware, that would be able to crack one Diffie-Hellman prime every year.
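The exchange, and the shared prime at the heart of the attack, can be sketched with a deliberately tiny prime (my own toy illustration, not the paper’s code):

```python
import secrets

# Toy Diffie-Hellman exchange over a deliberately tiny prime.
# Real deployments use 1024+ bit primes; the paper's point is that
# when everyone shares the SAME standardized prime p, one enormous
# precomputation against p breaks every connection that uses it.
p = 0xFFFFFFFB  # 2**32 - 5, a small prime stand-in (NOT secure)
g = 5           # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends g^a mod p over the wire
B = pow(g, b, p)   # Bob sends g^b mod p over the wire

shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob = pow(A, b, p)     # (g^a)^b mod p
assert shared_alice == shared_bob   # both derive the same secret
```

Only `A` and `B` travel over the network; the shared secret never does. The catch the paper exposes is that the hard precomputation depends only on `p`, so an attacker who invests in one popular prime amortizes the cost across every connection that reuses it.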

Whether you prefer the blog summary or the heavier sledding of Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice, this is a must read.

This paper should provide a significant push towards better encryption techniques but also serve as a warning that no encryption method is absolute.

Implementations, users, advances in technology and techniques, resources, all play roles in determining the security of any particular encryption technique.

Information theory and Coding

Saturday, October 10th, 2015

Information theory and Coding by Mathematicalmonk.

From the introduction video:

Overview of central topics in Information theory and Coding.

Compression (source coding) theory: Source coding theorem, Kraft-McMillan inequality, Rate-distortion theorem

Error-correction (channel coding) theory: Channel coding theorem, Channel capacity, Typicality and the AEP

Compression algorithms: Huffman codes, Arithmetic coding, Lempel-Ziv

Error-correction algorithms: Hamming codes, Reed-Solomon codes, Turbo codes, Gallager (LDPC) codes

There is a great deal of cross-over between information theory and coding, cryptography, statistics, machine learning and other topics. A grounding in information theory and coding will enable you to spot and capitalize on those commonalities.
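As a small taste of the source-coding side of that syllabus, a Huffman code assigns shorter codewords to more frequent symbols. A minimal sketch (real coders add canonical ordering and bit-level I/O):

```python
import heapq
from collections import Counter

# Build a Huffman code for the symbols in `text`: repeatedly merge
# the two lowest-weight subtrees, prefixing "0"/"1" to their codewords.
def huffman_codes(text):
    heap = [[weight, [sym, ""]] for sym, weight in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heapq.heappop(heap)[1:])
```

For `"aaaabbc"`, the frequent symbol `a` ends up with a one-bit codeword while the rare `c` gets a longer one, which is exactly the compression idea the lectures formalize.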

Secure Cloud Computing – Very Secure

Friday, December 27th, 2013

Daunting Mathematical Puzzle Solved, Enables Unlimited Analysis of Encrypted Data

From the post:

IBM inventors have received a patent for a breakthrough data encryption technique that is expected to further data privacy and strengthen cloud computing security.

The patented breakthrough, called “fully homomorphic encryption,” could enable deep and unrestricted analysis of encrypted information — intentionally scrambled data — without surrendering confidentiality. IBM’s solution has the potential to advance cloud computing privacy and security by enabling vendors to perform computations on client data, such as analyzing sales patterns, without exposing or revealing the original data.

IBM’s homomorphic encryption technique solves a daunting mathematical puzzle that confounded scientists since the invention of public-key encryption over 30 years ago.

Invented by IBM cryptography Researcher Craig Gentry, fully homomorphic encryption uses a mathematical object known as an “ideal lattice” that allows people to interact with encrypted data in ways previously considered impossible. The breakthrough facilitates analysis of confidential encrypted data without allowing the user to see the private data, yet it will reveal the same detailed results as if the original data was completely visible.

IBM received U.S. Patent #8,565,435: Efficient implementation of fully homomorphic encryption for the invention, which is expected to help cloud computing clients to make more informed business decisions, without compromising privacy and security.

If that sounds a bit dull, consider this prose from the IBM Homomorphic Encryption page:

What if you want to query a search engine, but don’t want to tell the search engine what you are looking for? You might consider encrypting your query, but if you use an ordinary encryption scheme, the search engine will not be able to manipulate your ciphertexts to construct a meaningful response. What you would like is a cryptographic equivalent of a photograph developer’s “dark room”, where the search engine can process your query intelligently without ever seeing it.

Or, what if you want to store your data on the internet, so that you can access it at your convenience? You want your data to remain private, even from the server that is storing them; so, you store your data in encrypt form. But you would also like to be able to access your data intelligently — e.g., you would like the server to be able to return exactly those files containing the word `homomorphic’ within five words of `encryption’. Again, you would like the server to be able to “process” your data while it remains encrypted.

A “fully homomorphic” encryption scheme creates exactly this cryptographic dark room. Using it, anyone can manipulate ciphertexts that encrypt data under some public key pk to construct a ciphertext that encrypts *any desired function* of that data under pk. Such a scheme is useful in the settings above (and many others).

The key sentence is:

“Using it, anyone can manipulate ciphertexts that encrypt data under some public key pk to construct a ciphertext that encrypts *any desired function* of that data under pk.”
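Textbook RSA already exhibits a *partial* homomorphism — multiplying ciphertexts multiplies the underlying plaintexts — which gives a feel for what “manipulating ciphertexts” means; Gentry’s lattice-based scheme extends the idea to arbitrary functions. A toy (completely insecure) illustration:

```python
# Textbook RSA is *multiplicatively* homomorphic: the product of two
# ciphertexts decrypts to the product of the two plaintexts.
# Toy parameters for illustration only -- never use key sizes like this.
p, q = 61, 53
n = p * q        # 3233
e = 17           # public exponent
d = 2753         # private exponent (e * d = 1 mod phi(n))

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

c1, c2 = enc(6), enc(7)
product_ct = (c1 * c2) % n   # computed WITHOUT ever decrypting
assert dec(product_ct) == 6 * 7   # decrypts to 42
```

Note the server holding `c1` and `c2` never learns 6 or 7, yet produces a valid encryption of their product (as long as the product stays below `n`). A *fully* homomorphic scheme supports addition and multiplication together, and hence any computable function.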

Wikipedia has a number of references under: Homomorphic encryption.

You may also be interested in: A fully homomorphic encryption scheme (Craig Gentry’s PhD thesis).

One of the more obvious use cases of homomorphic encryption with topic maps is the encryption of topic maps as deliverables.

Purchasers could have access to the results of merging but not the grist that was ground to produce the merging.

The antics of the NSA, 2013’s poster boy for better digital security, such as subversion of security standards and software vendors, out-right theft, and perversion of governments, will bring other use cases to mind.

Take The Money And Run (RSA)

Tuesday, December 24th, 2013

I think David Meyer’s headline captures the essence of the RSA story: Security firm denies knowingly including NSA backdoor — but not taking NSA cash.

RSA posts in its defense:

We made the decision to use Dual EC DRBG as the default in BSAFE toolkits in 2004, in the context of an industry-wide effort to develop newer, stronger methods of encryption. At that time, the NSA had a trusted role in the community-wide effort to strengthen, not weaken, encryption.

When concern surfaced around the algorithm in 2007, we continued to rely upon NIST as the arbiter of that discussion.

RSA, as a security company, never divulges details of customer engagements, but we also categorically state that we have never entered into any contract or engaged in any project with the intention of weakening RSA’s products, or introducing potential ‘backdoors’ into our products for anyone’s use.

So, if I had given the RSA $10 million on a contract, would that give me “a trusted role in the community-wide effort to strengthen, not weaken, encryption?”

Given the NSA mission to break encryption used by others, it isn’t clear how the NSA could ever have a “trusted role” in public encryption efforts.

To be sure, the NSA also has an interest in robust encryption for the U.S. government, but it has no interest in making those methods publicly available.

Quite the contrary, the only sensible goal of the NSA is to have breakable encryption used by everyone but the NSA and its clients. Yes?

The NSA was pursuing a rational strategy for a government spy agency and RSA was simply naive to believe otherwise.

As usual, cui bono (“to whose benefit?”), is the relevant question.

PS: If you need help asking that question, I was professionally trained in a hermeneutic of suspicion tradition that was centuries old when the feminists “discovered” it.

…all the people all the time.

Wednesday, September 11th, 2013

NIST has proven Lincoln’s adage:

You can fool some of the people all of the time, and all of the people some of the time, but you can not fool all of the people all of the time. (emphasis added)

Frank Konkel writes in: NIST reopens NSA-altered standards that:

The National Institute of Standards and Technology reopened the public comment period for already-adopted encryption standards that, according to leaked top-secret documents, were deliberately weakened by the National Security Agency.

Reopening the standards in question – Special Publication 800-90A and draft Special Publications 800-90B and 800-90C – gives the public a chance to weigh in again on encryption standards that were approved by NIST in 2006 for federal and worldwide use.

The move came Sept. 10, a swift response from NIST after several media outlets, including FCW, published articles that questioned the agency’s cryptographic standards development process after the leaks surfaced.

For your convenience:

Special Publication 800-90A

Draft SP 800-90 A Rev. 1

Draft SP 800-90 B

Draft SP 800-90 C

Disclaimer: I am reporting these links as they appear on the website. The content they return may or may not be true and correct copies of the documents listed.

On the topic of reopened public comments, the following was posted at:

In light of recent reports, NIST is reopening the public comment period for Special Publication 800-90A and draft Special Publications 800-90B and 800-90C.

NIST is interested in public review and comment to ensure that the recommendations are accurate and provide the strongest cryptographic recommendations possible.

The public comments will close on November 6, 2013. Comments should be sent to

In addition, the Computer Security Division has released a supplemental ITL Security Bulletin titled “NIST Opens Draft Special Publication 800-90A, Recommendation for Random Number Generation Using Deterministic Random Bit Generators, For Review and Comment (Supplemental ITL Bulletin for September 2013)” to support the draft revision effort.

If NIST got fooled (a pretty big if), rather than hide that possibility, NIST wants more public examination and comment to uncover it.

If you have the time and expertise, please contribute to this reexamination of these important encryption standards.

The NSA can corrupt the standards process if and only if enough of us stay home. Let’s disappoint them.

Subject Identity Obfuscation?

Tuesday, July 30th, 2013

Computer Scientists Develop ‘Mathematical Jigsaw Puzzles’ to Encrypt Software

From the post:

UCLA computer science professor Amit Sahai and a team of researchers have designed a system to encrypt software so that it only allows someone to use a program as intended while preventing any deciphering of the code behind it. This is known in computer science as “software obfuscation,” and it is the first time it has been accomplished.

It was the line “…and it is the first time it has been accomplished” that caught my attention.

I could name several popular scripting languages, at the expense of starting a flame war, that would qualify as “software obfuscation.” 😉

Further from the post:

According to Sahai, previously developed techniques for obfuscation presented only a “speed bump,” forcing an attacker to spend some effort, perhaps a few days, trying to reverse-engineer the software. The new system, he said, puts up an “iron wall,” making it impossible for an adversary to reverse-engineer the software without solving mathematical problems that take hundreds of years to work out on today’s computers — a game-changer in the field of cryptography.

The researchers said their mathematical obfuscation mechanism can be used to protect intellectual property by preventing the theft of new algorithms and by hiding the vulnerability a software patch is designed to repair when the patch is distributed.

“You write your software in a nice, reasonable, human-understandable way and then feed that software to our system,” Sahai said. “It will output this mathematically transformed piece of software that would be equivalent in functionality, but when you look at it, you would have no idea what it’s doing.”

The key to this successful obfuscation mechanism is a new type of “multilinear jigsaw puzzle.” Through this mechanism, attempts to find out why and how the software works will be thwarted with only a nonsensical jumble of numbers.

The paper has this title: Candidate Indistinguishability Obfuscation and Functional Encryption for all circuits, by Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, Amit Sahai, and Brent Waters.


In this work, we study indistinguishability obfuscation and functional encryption for general circuits:

Indistinguishability obfuscation requires that given any two equivalent circuits C_0 and C_1 of similar size, the obfuscations of C_0 and C_1 should be computationally indistinguishable.

In functional encryption, ciphertexts encrypt inputs x and keys are issued for circuits C. Using the key SK_C to decrypt a ciphertext CT_x = Enc(x) yields the value C(x) but does not reveal anything else about x. Furthermore, no collusion of secret key holders should be able to learn anything more than the union of what they can each learn individually.

We give constructions for indistinguishability obfuscation and functional encryption that support all polynomial-size circuits. We accomplish this goal in three steps:

  • We describe a candidate construction for indistinguishability obfuscation for NC1 circuits. The security of this construction is based on a new algebraic hardness assumption. The candidate and assumption use a simplified variant of multilinear maps, which we call Multilinear Jigsaw Puzzles.
  • We show how to use indistinguishability obfuscation for NC1 together with Fully Homomorphic Encryption (with decryption in NC1) to achieve indistinguishability obfuscation for all circuits.
  • Finally, we show how to use indistinguishability obfuscation for circuits, public-key encryption, and non-interactive zero knowledge to achieve functional encryption for all circuits. The functional encryption scheme we construct also enjoys succinct ciphertexts, which enables several other applications.
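The functional-encryption interface in the abstract can be made concrete with a toy mock. This is a sketch of the API shape only — it has no security whatsoever, and all the names in it are mine, not the paper’s:

```python
# Toy mock of the functional-encryption interface: Enc(x) hides x,
# KeyGen(C) issues a key SK_C for circuit C, and decrypting CT_x with
# SK_C reveals C(x) and nothing else about x. NO actual security here.
import os

class ToyFunctionalEncryption:
    def __init__(self):
        self._pad = os.urandom(16)  # stand-in for a master secret key

    def enc(self, x):
        # A real scheme produces a proper ciphertext; here we just
        # XOR-mask the bytes of x with the master secret.
        data = x.to_bytes(16, "big")
        return bytes(a ^ b for a, b in zip(data, self._pad))

    def keygen(self, circuit):
        # SK_C bundles circuit C with the ability to recover x,
        # but exposes only the evaluation C(x) to its holder.
        pad = self._pad
        def sk(ct):
            data = bytes(a ^ b for a, b in zip(ct, pad))
            x = int.from_bytes(data, "big")
            return circuit(x)   # reveal C(x), not x itself
        return sk

fe = ToyFunctionalEncryption()
ct = fe.enc(21)                          # CT_x = Enc(21)
sk_double = fe.keygen(lambda x: 2 * x)   # key for the circuit C(x) = 2x
print(sk_double(ct))                     # prints 42 = C(21)
```

The paper’s actual construction is, of course, nothing like this; the mock only shows why the interface is interesting: the key holder computes C(x) without ever seeing x in the clear.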

When a paper has a table of contents following the abstract, you know it isn’t a short paper. Forty-three (43) pages counting the supplemental materials. Most of it very heavy sledding.

I think this paper has important implications for sharing topic map based data: for data in general, but especially for subject identity and merging rules.

It may well be the case that a subject of interest to you exists in a topic map, but if you can't access its subject identity sufficiently to trigger merging, it will not exist for you.

One can even imagine that a subject may be accessible for screen display but not for copying to a “Snowden drive.” 😉

BTW, I have downloaded a copy of the paper. Suggest you do the same.

Just in case it goes missing several years from now when government security agencies realize its potential.

Searching an Encrypted Document Collection with Solr4, MongoDB and JCE

Sunday, December 16th, 2012

Searching an Encrypted Document Collection with Solr4, MongoDB and JCE by Sujit Pal.

From the post:

A while back, someone asked me if it was possible to make an encrypted document collection searchable through Solr. The use case was patient records – the patient is the owner of the records, and the only person who can search through them, unless he temporarily grants permission to someone else (for example his doctor) for diagnostic purposes. I couldn’t come up with a good way of doing it off the bat, but after some thought, came up with a design that roughly looked like the picture below:

With privacy being all the rage, a very timely post.

Not to mention an opportunity to try out Solr4.

Leaky Topic Maps?

Wednesday, August 10th, 2011

A Cloud that Can’t Leak

From the post:

Imagine getting a friend’s advice on a personal problem and being safe in the knowledge that it would be impossible for your friend to divulge the question, or even his own reply.

Researchers at Microsoft have taken a step toward making something similar possible for cloud computing, so that data sent to an Internet server can be used without ever being revealed. Their prototype can perform statistical analyses on encrypted data despite never decrypting it. The results worked out by the software emerge fully encrypted, too, and can only be interpreted using the key in the possession of the data’s owner.

Uses a technique called homomorphic encryption.
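The flavor of “computing on data you can’t read” can be seen in a much older building block: Paillier encryption, which is additively homomorphic — multiplying two ciphertexts yields an encryption of the sum of the plaintexts. A minimal textbook sketch, with toy-sized primes (real deployments use 2048+ bit moduli and hardened implementations):

```python
# Textbook Paillier encryption: multiplying ciphertexts mod n^2 adds
# the underlying plaintexts. Toy parameters, illustration only.
import math
import secrets

p, q = 293, 433                 # toy primes; never this small in practice
n = p * q
n2 = n * n
g = n + 1                       # standard choice of generator
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # mu = (L(g^lam mod n^2))^-1 mod n

def enc(m):
    while True:
        r = secrets.randbelow(n - 1) + 1   # fresh randomness per ciphertext
        if math.gcd(r, n) == 1:
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = enc(17), enc(25)
c_sum = (c1 * c2) % n2          # homomorphic addition in ciphertext space
print(dec(c_sum))               # prints 42 -- the server never saw 17 or 25
```

A server holding only c1 and c2 can compute sums (and, by extension, averages and other linear statistics) without ever decrypting. Fully homomorphic encryption, as in the Microsoft prototype above, extends this from addition to arbitrary computation.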

The article says 5 to 10 years before practical application, but it was 30 years between the proposal of homomorphic encryption and a formal proof that it was even possible. In the 2 or 3 years since that proof, a number of almost-practical demonstrations have emerged. I would not bet on the 5 to 10 year time frame.

Homomorphic Encryption System

Thursday, March 10th, 2011

The rationale for a homomorphic encryption system (FHE = fully homomorphic encryption):

“Homomorphic” is a mathematical term meaning that if you do two things to a bit of data – say, encrypt it and process it – the order in which you do them won’t matter. In other words, in FHE, data can be processed after it is encrypted, as well as before. This means that a Gmail user could someday send an encrypted search query to the servers in the cloud, and those servers could carry out that query even though the query and the e-mails are completely inscrutable to them. Only the user who holds the secret key can ever decrypt the original data, the query, or the query results.

For another example, imagine how FHE could help the proprietor of an online movie streaming service – call it Hackbuster Video – protect the privacy of customers while still giving them all the features they want. A customer’s request for a new movie would be encrypted, as would the movie itself, meaning that Hackbuster would not know what movie the customer was watching. Despite the privacy, Hackbuster’s servers could still charge the correct amount, offer playback features such as pause and rewind, and even still make recommendations of similar movies, all without ever being privy to the movies involved.
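The “order won’t matter” property shows up in miniature in textbook (unpadded) RSA, which is homomorphic for multiplication: encrypting a product gives the same ciphertext as multiplying the two encryptions. A toy sketch (textbook RSA is insecure without padding, and supports only multiplication — FHE is exactly the extension to arbitrary computation):

```python
# Textbook RSA is multiplicatively homomorphic: "multiply then encrypt"
# equals "encrypt then multiply". Toy parameters, illustration only.
p, q = 61, 53                       # toy primes
n = p * q                           # 3233
e = 17                              # public exponent, coprime to (p-1)(q-1)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 6, 7
# Path 1: process first, then encrypt.
c_direct = enc(a * b)
# Path 2: encrypt first, then process (multiply the ciphertexts).
c_homomorphic = (enc(a) * enc(b)) % n
print(c_direct == c_homomorphic)    # prints True -- order doesn't matter
print(dec(c_homomorphic))           # prints 42
```

Gentry’s contribution was a scheme in which both addition and multiplication (and hence any circuit) commute with encryption this way.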

From: Encryption that allows privacy and access to co-exist earns top dissertation award

Craig Gentry solved this problem (he has a law degree as well) in his dissertation at Stanford.

Not quite ready for prime time due to performance issues but definitely a step in the right direction.

Of interest to topic mappers because of the need for secure interaction with remote topic map facilities.

Additional resources of interest:

Craig Gentry’s dissertation: A fully homomorphic encryption scheme.

Craig’s “easy” version for ACM members: Computing Arbitrary Functions of Encrypted Data. (CACM, March 2010)

Fields Institute Presentation (slides)

Fields Institute Presentation (audio)

Encryption Using Topic Maps

Tuesday, September 21st, 2010

Topic maps are well suited to message passing in a loose confederation of actors, such as hackers.

Any loose confederation of actors could openly distribute information meaningful only to a small group.

Merging would be the key to assembling the correct message. (Imagine the “measurements” of models being merged to form geographic coordinates.)

Messages could be hidden in a flood of other messages, only a tiny fraction of which merge.
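A minimal sketch of the idea, under a deliberately simple merging rule of my own invention (topics merge iff their subject identifiers are equal): each fragment carries a subject identifier, a sequence number, and a piece of text, and is broadcast among decoys. Only fragments that merge under the pre-shared rule assemble into the message.

```python
# Sketch of "merging as message assembly": fragments are
# (subject identifier, sequence number, text) triples broadcast in a
# flood of decoys. Only fragments whose identifiers merge under the
# pre-shared rule are assembled, in sequence order, into the message.
import random

SHARED_IDENTIFIER = "urn:example:subject-42"   # known only to the group

fragments = [
    (SHARED_IDENTIFIER, 0, "MEET"),
    ("urn:example:decoy-1", 0, "NOON"),
    (SHARED_IDENTIFIER, 1, "AT"),
    ("urn:example:decoy-2", 1, "RUN"),
    (SHARED_IDENTIFIER, 2, "DAWN"),
]
random.shuffle(fragments)           # arrival order is arbitrary

def assemble(fragments, identifier):
    # The "merging rule": topics merge iff their subject identifiers match.
    merged = [f for f in fragments if f[0] == identifier]
    merged.sort(key=lambda f: f[1])  # restore sequence order
    return " ".join(text for _, _, text in merged)

print(assemble(fragments, SHARED_IDENTIFIER))   # prints MEET AT DAWN
```

An eavesdropper who does not know which identifier (or which merging rule) to apply sees only an undifferentiated stream of fragments; a real scheme would of course use less obvious merging criteria than string equality.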

Suggestions on a “secret” phrase to encode using merging and topic maps? (Must be non-libelous. Just in case it is ever decrypted.)


  1. Would this be more or less secure than a set of XQuery statements against an unknown (to others) public text, the results of which are ordered to display the message? Why?
  2. Would you transmit the merging rules or have them known in advance? Why?
  3. How would you transmit data and/or merging rules?
  4. Would you write merging rules against public data sets? Why?