Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

June 17, 2015

Black Freedom Struggle Collection [That Is Struggling To Be Free]

Filed under: Education,Government,History — Patrick Durusau @ 7:52 pm

Law Library Introduces Black Freedom Struggle Collection.

From the webpage:

The Law Library, Davis Library and the Sonja Haynes Stone Center have just purchased rich digital collections of NAACP, federal government and other organization documents. The collections illuminate the African American struggle to attain equal rights after Reconstruction. Collections span the 1870s to the 1980s. The collections are:

  • Black Freedom Struggle in the 20th Century: Federal Government Records
  • Black Freedom Struggle in the 20th Century: Organizational Records and Personal Papers

They supplement current UNC collections of NAACP documents and complement another new collection documenting earlier struggles, Slavery & the Law, and the existing Southern Life and African American History, 1715-1915, Plantation Records. Slavery and the Law features petitions on race, slavery, and free blacks that were submitted to state legislatures and county courthouses between 1775 and 1867.

The collections are in ProQuest’s History Vault Collection. For more information, contact a law librarian at 919-962-1194.

ProQuest sales brochure for Black Freedom Struggle in the 20th Century: Federal Government Records and Black Freedom Struggle in the 20th Century: Organizational Records and Personal Papers.

I rather doubt that the UNC Law Library has purchased these collections; more likely it has secured access to these materials for members of its faculty and student body. Hence the access via the ProQuest History Vault Collection.

Like any good massa, ProQuest is going to make a return on its investment, even if that excludes black Americans, indeed, all Americans, from learning the history of race in America from primary sources. Or at least those members of the population who don’t have institutional access to the ProQuest History Vault Collection.

What makes this particularly galling is that the materials represent a history of struggling for freedom, a story that should be widely told. Instead, that story is being suppressed in the name of the current IP model in the United States.

If we are confined to the artifices of commercial exploitation currently in place, why doesn’t Congress, which has wasted $billions on aircraft that exhibit spontaneous combustion (long rumored about people but confirmed in the F-35), site license this resource for everyone in the United States?

That would eliminate the paperwork for every institution that wants to access this material, eliminate the paperwork for all those ProQuest contracts, and make the original sources of our racial history available to every person in the United States. So where is the downside?

While we work on changing the pernicious and exploitative IP regime of the present day, let’s change the rules on site licensing and let the greed of ProQuest lead it into doing the right thing. I care nothing for their motives, so long as universal access is the result.

Comprehensive Index of Legal Reports (Law Library of Congress)

Filed under: Law,Law - Sources,Librarian/Expert Searchers,Library — Patrick Durusau @ 4:56 pm

Comprehensive Index of Legal Reports (Law Library of Congress)

From the announcement that came via email:

In an effort to highlight the legal reports produced by the Law Library of Congress, we have revamped our display of the reports on our website.

The new Comprehensive Index of Legal Reports will house all reports available on our website. This will also be the exclusive location to find reports written before 2011.

The reports listed on the Comprehensive Index page are divided into specific topics designed to point you to the reports of greatest interest and relevance. Each report listed is under only one topic and several topics are not yet filled (“forthcoming”). We plan to add many reports from our archives to this page over the next few months, filling in all of the topics.

The Current Legal Topics page (http://www.loc.gov/law/help/current-topics.php) will now only contain the most current reports. The list of reports by topic also includes a short description explaining what you will find in each report.

No links will be harmed in this change, so any links you have created to individual reports will continue to work. Just remember to add http://loc.gov/law/help/legal-reports.php as a place to find research, especially of a historical nature, and http://loc.gov/law/help/current-topics.php to find recently written reports.

There are US entities that rival the British Library and the British Museum. The Library of Congress is one of those, as is the Law Library of Congress (the law library is a part of the Library of Congress but merits separate mention).

Ever greedy, I would like to see something similar for the Congressional Research Service.

From the webpage:

The Congressional Research Service (CRS) works exclusively for the United States Congress, providing policy and legal analysis to committees and Members of both the House and Senate, regardless of party affiliation. As a legislative branch agency within the Library of Congress, CRS has been a valued and respected resource on Capitol Hill for more than a century.

CRS is well-known for analysis that is authoritative, confidential, objective and nonpartisan. Its highest priority is to ensure that Congress has 24/7 access to the nation’s best thinking.

Imagine US voters being given “…analysis that is authoritative, …, objective and nonpartisan,” analysis that they are paying for today and have been paying for over the last century and more.

I leave it to your imagination why Congress would prefer to have “confidential” reports that aren’t available to ordinary citizens. Do you prefer incompetence or malice?

Put Your Open Data Where Your Mouth Is (Deadline for Submission: 28 June 2015)

Filed under: Education,Open Access,Open Data — Patrick Durusau @ 4:03 pm

Open Data as Open Educational Resources – Case Studies: Call for Participation

From the call:

The Context:

Open Data is invaluable to support researchers, but we contend that open datasets used as Open Educational Resources (OER) can also be an invaluable asset for teaching and learning. The use of real datasets can enable a series of opportunities for students to collaborate across disciplines, to apply quantitative and qualitative methods, to understand good practices in data retrieval, collection and analysis, and to participate in research-based learning activities which develop independent research, teamwork, critical and citizenship skills. (For more detail please see: http://education.okfn.org/the-21st-centurys-raw-material-using-open-data-as-open-educational-resources)

The Call:

We are inviting individuals and teams to submit case studies describing experiences in the use of open data as open educational resources. Proposals are open to everyone who would like to promote good practices in pedagogical uses of open data in an educational context. The selected case studies will be published in an open e-book (CC BY-NC-SA) hosted by the Open Knowledge Foundation Open Education Group http://education.okfn.org by mid September 2015.

Participation in the call requires the submission of a short proposal describing the case study (of around 500 words). All proposals must be written in English; however, the selected authors will have the opportunity to submit the case both in English and in another language, as our aim is to support the adoption of good practices in the use of open data in different countries.

Key dates:

  • Deadline for submission of proposals (approx. 500 words): 28th June
  • Notification to accepted proposals: 5th of July
  • Draft case study submitted for review (1500 – 2000 words): 26th of July
  • Publication-ready deadline: 16th of August
  • Publication date: September 2015

If you have any questions or comments, please contact us by filling in the “contact the editors” box at the end of this form.

Javiera Atenas https://twitter.com/jatenas
Leo Havemann https://twitter.com/leohavemann

http://www.idea-space.eu/idea/72/info

Use of open data implies a readiness to further the use of open data. One way to honor that implied obligation is to share with others your successes and, just as importantly, your failures in the use of open data in an educational context.

All too often we hear only a steady stream of success stories and wonder where others found such perfect students, assistants, and clean data to underlie their success, never realizing that their students, assistants and data are no better and no worse than ours. The regular mis-steps, false starts, and outright wrong paths are omitted in the storytelling. For time’s sake, no doubt.

If you can, do participate in this effort, even if you only have a success story to relate. 😉

BBC Trials Something Topic Map-Like

Filed under: Media,Navigation,Topic Maps — Patrick Durusau @ 3:33 pm

BBC trials a way to explain complex backstories in its shows by Nick Summers.

From the post:

Most of the BBC’s programming is only available for 30 days on iPlayer, so trying to keep up with long-running and complicated TV shows can be a pain. Want to remember how River Song fits into the Doctor Who universe, but don’t have the DVD box sets to hand? Your best option is normally to browse Wikipedia or some Whovian fan sites. To tackle the problem, the BBC is experimenting with a site format called “Story Explorer,” which could explain storylines and characters for some of its most popular shows. Today, the broadcaster is launching a version for its Home Front radio drama with custom illustrations, text descriptions and audio snippets. More importantly, the key events are laid out as simple, vertical timelines so that you can easily track the show’s wartime chronology.

With three seasons, sixteen interlocking storylines and 21 hours of audio, Story Explorer could be a valuable resource for new and lapsed Home Front fans. It’s been released as part of BBC Taster, a place where the broadcaster can share some of its more creative and forward-thinking ideas with the public. There’s a good chance it won’t be taken any further, although the BBC is already asking on its blog whether license fee payers would like an “informative, attractive and scalable” version “linked through to the rest of the BBC and the web.” Sort of like a multimedia Wikipedia for BBC shows, then. The broadcaster has suggested that the same format could be used to support shows like Doctor Who, Casualty, Luther, Poldark, Wolf Hall and The Killing. It sounds like a pretty good idea to us — an easy way for younger Who fans to recap early seasons would go down a storm.

This is one of those times when you wonder why you don’t live in the UK. Isn’t the presence of the BBC enough of a reason for immigration?

There are all those fascists at the ports of entry, to say nothing of the lidless eyes and their operators that follow you around. But still, there is the BBC, at the cost of living in a perpetual security state.

Doesn’t the idea of navigating through a series with links to other BBC and one presumes British Library and Museum resources sound quite topic map like? Rather than forcing viewers to rely upon fan sites with their trolls and fanatics? (sorry, no pun intended)

Of course, if the BBC had an effective (read user friendly) topic map authoring tool on its website, then fans could contribute content, linked to programs or even scenes, at their own expense, to be lightly edited by staff, in order to grow viewers around BBC offerings.

I suspect some nominal payment could be required to defray the cost of editing comments. Most of the people I know would pay for the right to “have their say,” even if the reading of other people’s content was free.

Should the BBC try that suggestion, I hope it works very well for them. All I ask in return is that they market the BBC more heavily to cable providers in the American South. Thanks!


For a deeper background on Story Explorer, see: Home Front Story Explorer: Putting BBC drama on the web by Tristan Ferne.

Check out this graphic from Tristan’s post:

[Image: BBC-world]

Doesn’t that look like a topic map to you?

Well, except that I would have topics to represent the relationships (associations) and include the “real world” (gag, how I hate that phrase) as well as those shown.

How To Encrypt Your USB Drive to Protect Data [USB Drive Swapping Parties]

Filed under: Cybersecurity,Security — Patrick Durusau @ 3:12 pm

How To Encrypt Your USB Drive to Protect Data by Mohit Kumar.

From the post:

USB flash drives or memory sticks are an excellent way to store and carry data and applications for access on any system you come across. With capacities already reaching 256 gigabytes, nowadays USB drives are often larger than the hard drives of the past.

Thanks to increased storage capacity and low prices, you can easily store all your personal data on a tiny, easy-to-carry, USB memory stick.

The USB drive is a device that is used by almost everyone today. However, there’s a downside…

I think you’ll agree with me when I say:

USB sticks are easily lost or stolen.

Aren’t they?

However, in today’s post I am going to show you how to use your USB drives without fear of their being misplaced.

If you are not aware, the leading cause of data breaches for the past few years has been the loss or theft of laptops and USB storage devices.

However, USB flash memory sticks are generally treated with far less care than laptops, and criminals seeking corporate devices could cost your company a million-dollar loss by stealing just a $12 USB drive.

By throwing light on the threats of USB drives, I’m not saying that you should never use USB memory drives. Rather…

…I’ll introduce you to a way to use your tiny data storage devices securely.

Losing USB flash drives wouldn’t be so concerning if criminals were not granted immediate access to the sensitive data stored on them.

Instead of relying merely on passwords, it’s essential for businesses to safeguard their data by encrypting the device.

Mohit gives a great introduction to cryptsetup and its use on an Ubuntu system.

Except for one thing: in the image of creating a password, I count only seventeen (17) obscured characters. And there is no advice on how long your password/passphrase needs to be.

You may remember that the Office of Personnel Management had privileged users with passwords shorter than their own documented minimum (near the end of the post). Bad Joss!

The FAQ on cryptsetup has great advice that I will summarize and then you can read the details if you are interested.

Summary:

Ordinary:

Plain dm-crypt: Use > 80 bit. That is e.g. 17 random chars from a-z or a random English sentence of > 135 characters length.

LUKS: Use > 65 bit. That is e.g. 14 random chars from a-z or a random English sentence of > 108 characters length.

Paranoid:

Plain dm-crypt: Use > 100 bit. That is e.g. 21 random chars from a-z or a random English sentence of > 167 characters length.

LUKS: Use > 85 bit. That is e.g. 18 random chars from a-z or a random English sentence of > 140 characters length.

BTW, you were going to tell me about your minimum of eight (8) characters for a password? Feel as secure as before you started reading this post?

The fuller explanation from the cryptsetup FAQ:

5.1 How long is a secure passphrase ?

This is just the short answer. For more info and explanation of some of the terms used in this item, read the rest of Section 5. The actual recommendation is at the end of this item.

First, passphrase length is not really the right measure, passphrase entropy is. For example, a random lowercase letter (a-z) gives you 4.7 bit of entropy, one element of a-z0-9 gives you 5.2 bits of entropy, an element of a-zA-Z0-9 gives you 5.9 bits and a-zA-Z0-9!@#$%^&:-+ gives you 6.2 bits. On the other hand, a random English word only gives you 0.6…1.3 bits of entropy per character. Using sentences that make sense gives lower entropy, series of random words gives higher entropy. Do not use sentences that can be tied to you or found on your computer. This type of attack is done routinely today.

That said, it does not matter too much what scheme you use, but it does matter how much entropy your passphrase contains, because an attacker has to try on average

      1/2 * 2^(bits of entropy in passphrase)

different passphrases to guess correctly.

Historically, estimations tended to use computing time estimates, but more modern approaches try to estimate cost of guessing a passphrase.

As an example, I will try to get an estimate from the numbers in http://it.slashdot.org/story/12/12/05/0623215/new-25-gpu-monster-devours-strong-passwords-in-minutes More references can be found at the end of this document. Note that these are estimates from the defender side, so assuming something is easier than it actually is is fine. An attacker may still have vastly higher cost than estimated here.

LUKS uses SHA1 for hashing per default. The claim in the reference is 63 billion tries/second for SHA1. We will leave aside the check whether a try actually decrypts a key-slot. Now, the machine has 25 GPUs, which I will estimate at an overall lifetime cost of USD/EUR 1000 each, and a useful lifetime of 2 years. (This is on the low side.) Disregarding downtime, the machine can then break

     N = 63*10^9 * 3600 * 24 * 365 * 2 ~ 4*10^18

passphrases for EUR/USD 25k. That is one 62 bit passphrase hashed once with SHA1 for EUR/USD 25k. Note that as this can be parallelized, it can be done faster than 2 years with several of these machines.

For plain dm-crypt (no hash iteration) this is it. This gives (with SHA1, plain dm-crypt default is ripemd160 which seems to be slightly slower than SHA1):

    Passphrase entropy  Cost to break
    60 bit              EUR/USD     6k
    65 bit              EUR/USD   200K
    70 bit              EUR/USD     6M
    75 bit              EUR/USD   200M
    80 bit              EUR/USD     6B
    85 bit              EUR/USD   200B
    ...                      ...

For LUKS, you have to take into account hash iteration in PBKDF2. For a current CPU, there are about 100k iterations (as can be queried with “cryptsetup luksDump”).

The table above then becomes:

    Passphrase entropy  Cost to break
    50 bit              EUR/USD   600k
    55 bit              EUR/USD    20M
    60 bit              EUR/USD   600M
    65 bit              EUR/USD    20B
    70 bit              EUR/USD   600B
    75 bit              EUR/USD    20T
    ...                      ...

Recommendation:

To get reasonable security for the next 10 years, it is a good idea to overestimate by a factor of at least 1000.

Then there is the question of how much the attacker is willing to spend. That is up to your own security evaluation. For general use, I will assume the attacker is willing to spend up to 1 million EUR/USD. Then we get the following recommendations:

Plain dm-crypt: Use > 80 bit. That is e.g. 17 random chars from a-z or a random English sentence of > 135 characters length.

LUKS: Use > 65 bit. That is e.g. 14 random chars from a-z or a random English sentence of > 108 characters length.

If paranoid, add at least 20 bit. That is roughly four additional characters for random passphrases and roughly 32 characters for a random English sentence.
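To make the FAQ’s arithmetic concrete, here is a small Python sketch of my own (not from the FAQ) that reproduces its entropy-per-character figures and both cost-to-break tables. It follows the FAQ in treating the rig’s ~4*10^18 trials as one full 62-bit search for EUR/USD 25k:

    import math

    # Entropy per character for the alphabets the FAQ mentions.
    alphabets = [("a-z", 26), ("a-z0-9", 36), ("a-zA-Z0-9", 62),
                 ("a-zA-Z0-9 plus 10 symbols", 72)]
    for name, size in alphabets:
        print(f"{name}: {math.log2(size):.2f} bits per character")

    # The 25-GPU rig: 63e9 SHA1 tries/second for 2 years is
    # 63e9 * 3600 * 24 * 365 * 2 ~ 4 * 10^18 trials, which the FAQ
    # equates with one 62-bit search for EUR/USD 25,000.
    TRIALS_FOR_25K = 2 ** 62

    def cost_to_break(entropy_bits, kdf_iterations=1):
        """Rough cost of a full search of the passphrase space."""
        trials = (2 ** entropy_bits) * kdf_iterations
        return trials / TRIALS_FOR_25K * 25_000

    print("Plain dm-crypt (one hash per try):")
    for bits in (60, 65, 70, 75, 80, 85):
        print(f"  {bits} bits: ~EUR/USD {cost_to_break(bits):,.0f}")

    print("LUKS (about 100k PBKDF2 iterations per try):")
    for bits in (50, 55, 60, 65, 70, 75):
        print(f"  {bits} bits: ~EUR/USD {cost_to_break(bits, 100_000):,.0f}")

Notice the factor of 32 for every 5 bits of entropy. That is why the FAQ’s “paranoid” margin of roughly four extra random characters (about 20 bits) buys you a factor of a million.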

The FAQ was edited about a month ago (as of June 17, 2015) so it should be relatively up to date. However, advances in attacks on encryption can occur without warning, so use this information at your own risk. I do appreciate the FAQ answering what seems like a question most people gloss over.

PS: There are organizations that love to find encrypted USB drives in their facilities. Think of it as digital littering.

Or you can pick an encrypted USB drive with a > 100 bit key from a punch bowl at a USB drive swapping party to carry on your person and truthfully report that you don’t know the key, nor who does.

Enjoy!

A Roundup of Tips & Tools from IRE 2015

Filed under: Journalism,News,Reporting — Patrick Durusau @ 2:11 pm

A Roundup of Tips & Tools from IRE 2015 by Gary Price.

Gary summarizes six (6) tools that were discussed at #IRE15.

Yes, I saw this at the Global Investigative Journalism Network (GIJN). The same folks who have called for sponsoring a Muckraker.

What more endorsement does anyone need?

You need to follow Gary’s column for future updates. Some of it you will already know, but as investigative reporters and editors build and share resources, there will be new-to-you material as well.

Enjoy!



Stegoloader: A Stealthy Information Stealer [TMs and Steganography, too]

Filed under: Cybersecurity,Security — Patrick Durusau @ 2:00 pm

Stegoloader: A Stealthy Information Stealer by Dell SecureWorks Counter Threat Unit™ Threat Intelligence.

Summary:

Malware authors are evolving their techniques to evade network and host-based detection mechanisms. Stegoloader could represent an emerging trend in malware: the use of digital steganography to hide malicious code. The Stegoloader malware family (also known as Win32/Gatak.DR and TSPY_GATAK.GTK despite not sharing any similarities with the Gataka banking trojan) was first identified at the end of 2013 and has attracted little public attention. Dell SecureWorks Counter Threat Unit(TM) (CTU) researchers have analyzed multiple variants of this malware, which stealthily steals information from compromised systems. Stegoloader’s modular design allows its operator to deploy modules as necessary, limiting the exposure of the malware capabilities during investigations and reverse engineering analysis. This limited exposure makes it difficult to fully assess the threat actors’ intent. The modules analyzed by CTU researchers list recently accessed documents, enumerate installed programs, list recently visited websites, steal passwords, and steal installation files for the IDA tool.

A bit more of the analysis from the post:

Stegoloader’s deployment module downloads and launches the main module; it does not have persistence. Before deploying other modules, the malware checks that it is not running in an analysis environment. For example, the deployment module monitors mouse cursor movements by making multiple calls to the GetCursorPos function. If the mouse always changes position, or if it does not change position, the malware terminates without exhibiting any malicious activity.

In another effort to slow down static analysis, most of the strings found in the binary are constructed on the program stack before being used. This standard malware technique ensures that strings are not stored in clear text inside the malware body but rather are constructed dynamically, complicating detection and analysis.

Before executing its main function, Stegoloader lists the running processes on the system and terminates if a process name contains one of the strings in Table 1. Most of the strings represent security products or tools used for reverse engineering. Stegoloader does not execute its main program code if it detects analysis or security tools on the system.
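For illustration, here is a rough Python sketch of the logic behind the cursor-movement check described above. This is my reconstruction of the technique as Dell describes it, not Stegoloader’s actual code, and it is Windows-only since it calls GetCursorPos through ctypes:

    import ctypes
    import time

    class POINT(ctypes.Structure):
        _fields_ = [("x", ctypes.c_long), ("y", ctypes.c_long)]

    def sample_cursor(samples=10, interval=0.5):
        """Sample the Windows cursor position a few times."""
        user32 = ctypes.windll.user32   # Windows only
        pt = POINT()
        positions = []
        for _ in range(samples):
            user32.GetCursorPos(ctypes.byref(pt))
            positions.append((pt.x, pt.y))
            time.sleep(interval)
        return positions

    def looks_like_analysis_box(positions):
        # Per the described check: bail out if the cursor never moves
        # (idle sandbox) or moves on every single sample (scripted input).
        distinct = len(set(positions))
        return distinct == 1 or distinct == len(positions)

A sandbox that never touches the mouse fails the first test; one that replays constant, scripted movement fails the second. A real user usually falls somewhere in between.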

Pay particular attention to the use of “standard malware technique” in this and other posts. You won’t impress your management or other IT folks by proclaiming a vulnerability to be “zero-day” or “state-sponsored malware,” only to find that it is a standard technique of hackers everywhere.

Stegoloader is reported to be the third malware family to use digital steganography.

Great read! Well, if you like reading about the details of malware anyway.

PS: Do you think the digital steganography techniques discussed would be useful for transmitting smallish topic maps that are dynamically constructed and destroyed when an application exits? That is, no text version of the topic map exists for search or copying by unfriendly forces?

Given the reported storage capacity of “smart” phones, etc., searching all the images and applying all the possible transforms could take a while. (Read “heat death of the universe.”) Come to think of it, while the topic map concealed by digital steganography is on your phone, you could access a website that supplies the key to unlock the map. On loss of connection or termination, the map simply disappears.

It would always be available, assuming you go to the correct site, but just out of the reach of nosy neighbors and others.

Something like “Now You Don’t” (NYD) for a product name I think. Yes?
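If you want to experiment with the general idea, below is a toy least-significant-bit embedding sketch in Python using Pillow. To be clear: this illustrates digital steganography in general, not Stegoloader’s scheme or anything NYD-worthy, and a bare LSB payload is easy prey for statistical steganalysis, so you would encrypt the payload first:

    from PIL import Image

    def embed(cover_path, payload: bytes, out_path):
        """Hide payload in the lowest bit of each RGB channel value."""
        img = Image.open(cover_path).convert("RGB")
        data = len(payload).to_bytes(4, "big") + payload  # 4-byte length header
        bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
        flat = [c for px in img.getdata() for c in px]
        if len(bits) > len(flat):
            raise ValueError("payload too large for cover image")
        for i, bit in enumerate(bits):
            flat[i] = (flat[i] & ~1) | bit
        img.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
        img.save(out_path, "PNG")   # lossless format, or the bits are destroyed

    def extract(stego_path) -> bytes:
        img = Image.open(stego_path).convert("RGB")
        flat = [c for px in img.getdata() for c in px]
        def read_bytes(count, bit_offset):
            out = bytearray()
            for b in range(count):
                byte = 0
                for i in range(8):
                    byte = (byte << 1) | (flat[bit_offset + b * 8 + i] & 1)
                out.append(byte)
            return bytes(out)
        length = int.from_bytes(read_bytes(4, 0), "big")
        return read_bytes(length, 32)   # payload starts after the 32 header bits

Pair extract() with a key fetched from a website, as suggested above, and the plaintext map need never touch the phone’s storage.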

I don’t think the NYPD has any compute time on:

    Rank  Site                                                               System
    1     National Super Computer Center in Guangzhou, China                 Tianhe-2 (MilkyWay-2)
    2     DOE/SC/Oak Ridge National Laboratory, United States                Titan
    3     DOE/NNSA/LLNL, United States                                       Sequoia – BlueGene/Q
    4     RIKEN Advanced Institute for Computational Science (AICS), Japan   K computer
    5     DOE/SC/Argonne National Laboratory, United States                  Mira
    6     Swiss National Supercomputing Centre (CSCS), Switzerland           Piz Daint
    7     Texas Advanced Computing Center/Univ. of Texas, United States      Stampede
    8     Forschungszentrum Juelich (FZJ), Germany                           JUQUEEN
    9     DOE/NNSA/LLNL, United States                                       Vulcan
    10    Government, United States                                          Cray CS-Storm

the top ten (10) systems as of November 2014.

The images need to resist decryption, at a minimum, until you can make bail. Properly done, I suspect your images, concealing a topic map, could resist decryption for far longer than that. (That is a suspicion, not a warranty.)

Perhaps the most amusing part of such an approach would be to use campaign images from candidates for public office in the five boroughs as the only images on the phone. Even the most dedicated officers would tire of looking at that rogues’ gallery. 😉

Sponsor a Muckraker

Filed under: Journalism,News,Reporting — Patrick Durusau @ 11:03 am

Sponsor a Muckraker: Help Us Send Journalists to Lillehammer.

From the post:

Here’s your chance to support the global spread of investigative journalism. We need your help to sponsor dozens of journalists from developing and transitioning countries to come to the Global Investigative Journalism Conference in Norway this October 8-11.

Held once every two years, the GIJC is a giant training and networking event. At over 150 sessions, the world’s best journalists teach state-of-the-art investigative techniques, data analysis, cross-border reporting, online research, protecting sources, and more to reporters from some of the toughest media environments in the world.

We know from past conferences that our attendees return home to do groundbreaking investigations into corruption and abuse of power, launch investigative teams and non-profit centers, and help spread investigative reporting to where it is needed most.

We’ve been overwhelmed with requests to attend. GIJN and its co-host, Norway’s SKUP, have received 500 requests from more than 90 countries, and we can’t help them all. Please give what you can.

We will direct 100% of your gift to bring these journalists to GIJC15. Contributors will be publicly thanked (if they desire) on the GIJC15 conference page and social media. (Americans — your donation is fully tax deductible.)

Just click on the DONATE button, make a contribution, and write GIJC15 in the Special Instructions box. And thank you!

I don’t normally re-broadcast donation calls, but the Global Investigative Journalism Conference transcends the usual good cause, public spirited, etc. sort of venture.

Think of it this way, governments, enterprises, businesses and their familiars, despite shortages, embargoes, famines, and natural disasters, all are suspiciously well fed and comfortable. Aren’t you the least bit curious why it seems to always work out that way? I am. Help sponsor a muckraker today!

Biggest threats to endpoint security? What’s Missing?

Filed under: Cybersecurity,Security — Patrick Durusau @ 10:45 am

[Chart: biggest threats to endpoint security]

From: 2015 State of the Endpoint Report: User-Centric Risk

Negligent or careless employees are the #1 threat. OK, but except for malware, the rest of the choices focus on network technologies, all of which are insecure to varying degrees.

What two (2) threats are almost as big as #1 but aren’t listed here?

What about email and web browsers? Aren’t those typical vectors used to gain entry to endpoints?

Avoiding Phishing, Enterprise Wide

Imagine if your email server were configured to reject all email that does not originate from (yourcompany).com. How many phishing emails do you think management, down to the least person with an email account, would get in their daily email? Moreover, I assume you have control over email clients and that no email can reach an employee’s computer without passing through your email server. Yes?

By taking control of incoming email, you can greatly reduce the risk of phishing emails and other insecurities being introduced into your network. No communication from off-site is ever 100% safe but reducing the amount of communication reduces your risk.
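As a sketch of the policy itself, the rule reduces to a one-line predicate. The domain is a placeholder, and the real enforcement belongs in your mail server’s configuration (paired with SPF/DKIM checks, since From: headers are trivially spoofed), not in application code:

    from email.utils import parseaddr

    ALLOWED_DOMAIN = "yourcompany.com"   # placeholder for your actual domain

    def accept_sender(from_header: str) -> bool:
        """Accept mail only when the sender address is in-house."""
        _, addr = parseaddr(from_header)
        return addr.lower().endswith("@" + ALLOWED_DOMAIN)

    # accept_sender("Alice <alice@yourcompany.com>")  -> True
    # accept_sender("Phisher <spoof@evil.example>")   -> False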

Avoiding Malicious Websites, Enterprise Wide

A large number of enterprises rely upon firewalls on a daily basis, but given the prevalence of malicious websites, either firewalls are not used widely enough or they are not configured properly.

Reducing your risk from web browsers could be an exercise in big data for your IT staff. Start tracking the web traffic from all browsers in your enterprise, and over a period of months you can build up a white-list of URLs for each department. After purging the white-list of sites irrelevant to that department’s function, as approved by its manager, implement it as a white-list in your firewall. (I assume you are going to check that the sites on the white-list are not themselves malicious.)

That spares you guessing at what is useful for particular departments and avoids all manner of hazards, such as music download sites.
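Here is a minimal Python sketch of the log-mining step. It assumes a CSV proxy log with “department” and “url” columns, which is purely hypothetical; adjust the parsing to whatever your proxy actually emits:

    import csv
    from collections import Counter
    from urllib.parse import urlsplit

    def whitelist_candidates(log_path, department, min_hits=25):
        """Tally destination hosts for one department and keep the
        frequently visited ones as white-list candidates."""
        hosts = Counter()
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                if row["department"] != department:
                    continue
                host = urlsplit(row["url"]).hostname
                if host:
                    hosts[host] += 1
        return sorted(h for h, n in hosts.items() if n >= min_hits)

    # Candidates still need the manager's review and a reputation check
    # before they become firewall rules.
    for host in whitelist_candidates("proxy_log.csv", "accounting"):
        print(host)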

It would take more effort than securing email but would greatly reduce yet another attack vector.

If anyone protests that they need unfettered access to the Internet, supply them with a standalone machine outside of the firewall, with its drives, USB ports, etc. all glued shut, to prevent data transfer from outside your firewall to inside it. (I would have assumed that gluing all such access points shut was routine, but the Snowden and Manning cases proved otherwise.)

Summary

If you float either of these proposals in your enterprise, be prepared for push-back. Users think they have some vested interest in making poor decisions with regard to your network security. Management encourages that by not focusing on the people problem that lies at the root of most security breaches, preferring instead to dream of pie-in-the-sky technology answers.

Perhaps shareholders should become aware of such recommendations when management is attempting to explain away its latest major data breach, after you made the recommendations and they were ignored.

PS: Users who insist on being security risks should be enabled to become security risks for some other enterprise. Cybersecurity liability is coming.

June 16, 2015

Cyberspace Law: Cases & Materials, Third Edition

Filed under: Cybersecurity,Security — Patrick Durusau @ 7:53 pm

Cyberspace Law: Cases & Materials, Third Edition

From the post:

Early adopters of Cyberspace Law: Cases and Materials were particularly pleased by how flexible, coherent, and practical the book is. Now strengthened and scrupulously updated for its Third Edition, this engaging casebook can help your students understand one of the most dynamic areas of law.

Written and structured for maximum effectiveness, the book:
– Can be used successfully in both introductory and advanced courses;
– Uses practical, classroom-tested “real world” problems to help students apply existing rules to cyberspace law;
– Features a flexible, logical organization that allows instructors to emphasize selected perspectives;
– Is designed for currency, with materials organized around competing approaches and theories for any given issue, rather than current leading cases;
– Presents current Internet law as well as related policy concerns that will drive future legal analysis when new issues emerge — the only casebook to address both areas. Offers a balanced presentation of competing approaches and theories for each issue;
– Provides a sophisticated analysis of cutting-edge legal issues through an excellent selection of cases;
– Remains up-to-date with postings of new cases and important developments on the author website.

Look for these important changes in the Third Edition:
– New co-author Jacqueline Lipton, who brings significant teaching and writing experience in the areas of international and comparative law;
– New and updated cases, including: Grokster, ACLU v. Ashcroft, U.S. v. American Library Association, Chamberlain v. Skylink, Lexmark v. Static Control Components, U.S. v. Elcomsoft, 321 Studios v. MGM Studios, Kremen v. Cohen, Blizzard v. Bnet, In re Verizon, Bosley v. Kremer, and People for the Ethical Treatment of Animals v. Doughney;
– Treatment of important developments, such as political cybersquatting legislation enacted in some states (for example, California's Political Cyberfraud Abatement Act) and changes to privacy laws enacted following the Patriot Act;
– Greatly expanded international coverage, including new international cases: Sony v. Stevens, Telstra v. Desktop, Gutnick v. Dow Jones;
– Recent Canadian cases on Internet defamation issues;
– Decisions from the European Court of Justice interpreting the database directive in 2004, including the appeal in British Horseracing Board v. William Hill;
– Various developments between French and Californian courts in Yahoo litigation regarding Nazi memorabilia as well as domestic legislation implemented by all E.U. member states which complies with the requirements of the Copyright Directives;
– New section on the failed effort at private self-governance sponsored by ICANN and the scholarship surrounding that effort;
– Jurisdictional materials in the chapter on Regulating Cyberspace are consolidated for easier teaching and learning;
– Updated problems and notes.

When you consider casebooks for your next course, be sure to examine Cyberspace Law: Cases and Materials, Third Edition, the cohesive, realistic, and accessible alternative.

Useful if you are in law school, practicing cyberspace law, or if you are risk averse.

Otherwise, remember: The law is a thumb on the scale of justice, for one side or another.

[Image: thumb on the scale of justice]

The image is from TexasWatch.org.

How to write a book in Emacs

Filed under: Editor — Patrick Durusau @ 7:40 pm

How to write a book in Emacs by Mickey Petersen.

From the post:

Writing a book is a time consuming process, and if you’re going to dedicate a year, or more, of your life to it, it makes sense to pick the best tool for the job. If you’re technically inclined – and if you use Emacs – then the choice is obvious.

As I recently finished writing a book on Mastering Emacs I figured I would share my thoughts, and ideas, on how to write a book – specifically my own book – in Emacs.

For all of its power, Emacs won’t write the content of the book for you but it can help with the details that go into writing.

I disagree with the choice of reStructuredText as a format, but you can ascribe that to a long association with SGML/XML. I can read marked up text almost as easily as I can ASCII. And no, I don’t read it in one of the visual editors. Nothing against them but Emacs is just fine.

But read the rest of the post for the many tips therein!

Reasoned Programming

Filed under: Functional Programming,Programming,Reasoning — Patrick Durusau @ 7:24 pm

Reasoned Programming by Krysia Broda, Susan Eisenbach, Hessam Khoshnevisan, and, Steve Vickers.

From the preface:

How do you describe what a computer program does without getting bogged down in how it does it? If the program hasn’t been written yet we can ask the same question using a different tense and slightly different wording: How do you specify what a program should do without determining exactly how it should do it? Then we can add the question: When the program is written, how do you judge that it satisfies its specification?

In civil engineering, one can ask a very similar pair of questions: How can you specify what a bridge should do without determining its design? And, when it has been designed, how can you judge whether it does indeed do what it should?

This book is about these questions for software engineering, and its answers can usefully be compared with what happens in civil engineering. First, a specification is a different kind of thing from a design; the specification of a bridge may talk about load-bearing capacity, deflection under high winds and resistance of piers to water erosion, while the design talks about quite different things such as structural components and their assembly. For software, too, specifications talk about external matters and programs talk about internal matters.

The second of the two questions is about judging that one thing satisfies another. The main message of the book, and a vitally important one, is that judgement relies upon understanding. This is obviously true in the case of the bridge; the judgement that the bridge can bear the specified load rests on structural properties of components enshrined in engineering principles, which in turn rest upon the science of materials. Thus the judgement rests upon a tower of understanding.

This tower is well-established for the older engineering disciplines; for software engineering it is still being built. (We may call it ‘software science.’) The authors have undertaken to tell students in their first or second year about the tower as it now stands, rather than dictate principles to them. This is refreshing; in software engineering there has been a tendency to substitute formality for understanding. Since a program is written in a very formal language and the specification is also often written in formal logical terms, it is natural to emphasize formality in making the judgement that one satisfies the other. But in teaching it is stultifying to formalize before understanding, and software science is no exception, even if the industrial significance of a formal verification is increasingly being recognized.

This book is therefore very approachable. It makes the interplay between specification and programming into a human and flexible one, albeit guided by rigour. After a gentle introduction, it treats three or four good-sized examples, big enough to give confidence that the approach will scale up to industrial software; at the same time, there is a spirit of scientific enquiry. The authors have made the book self-contained by including an introduction to logic written in the same spirit. They have tempered their care for accuracy with a light style of writing and an enthusiasm which I believe will endear the book to students.

Apologies for the long quote but I like the style of the preface. 😉

As you may guess from the date, 1994, the authors focus on functional programming, Miranda, and Modula-2.

Great read and highly recommended.

I first saw this in a tweet by Computer Science.

Vintage Infodesign [122] Naval Yards

Filed under: Graphics,Maps,Visualization — Patrick Durusau @ 4:42 pm

Vintage Infodesign [122] by Tiago Veloso.

From the post:

Published in October, 1940, the set of maps from Fortune magazine that open today’s Vintage Infodesign was part of a special about the industrial resources committed to the war effort by the United States. It used data compiled by the Bureau of the Census and Agricultural Commission, with financial support from the Defense Commission. The maps within the four page report are signed by Philip Ragan Associates.

It’s just another great gem archived over at Fulltable, followed by the usual selection of ancient maps, graphics and charts from before 1960.

Hope you enjoy, and have a great week!

One original image (1940) and its modern counterpart to tempt you into visiting this edition of Vintage Infodesign.

[Image: shipyards-1940a]

US shipyards and arsenals in 1940.

[Image: shipyards-now]

Modern map of shipyards. I couldn’t quickly find an image that had arsenals as well.

Notice the contrast in the amount of information given by the 1940 map versus that of the latest map from the Navy.

With the 1940 map, along with a state map I could get within walking distance of any of the arsenals or shipyards listed.

With the modern map, I know that shipyards need to be near water but it is only narrowed down to the coastline of any of the states with shipyards.

That may not seem like a major advantage, knowing the location of a shipyard from a map, but collating that information with a stream of other bits and pieces could be an advantage.

Such as watching wedding announcements near Navy yards for sailors getting married. Which means the happy couple will be on their honeymoon and any vehicle at their home with credentials to enter a Navy yard will be available. Of course, that information has to be co-located for the opportunity to present itself. For that I recommend topic maps.

Cybersecurity Sprint or Multi-Year Egg Roll?

Filed under: Cybersecurity,Security — Patrick Durusau @ 3:57 pm

White House Tells Agencies To Tighten Up Cyber Defenses ‘Immediately’ by Aliya Sternstein.

The steps Aliya reports start off strong enough…

U.S. Chief Information Officer Tony Scott “recently launched” what officials are calling a 30-day cybersecurity sprint.

According to White House officials, the emergency procedures include:

  • “Immediately” deploying so-called indicators, or tell-tale signs of cybercrime operations, into agency anti-malware tools. Specifically, the indicators contain “priority threat-actor techniques, tactics and procedures” that should be used to scan systems and check logs.
  • Patching critical-level software holes “without delay.” Each week, agencies receive a list of these security vulnerabilities in the form of DHS Vulnerability Scan Reports.
  • Tightening technological controls and policies for “privileged users,” or staff with high-level access to systems. Agencies should cut the number of privileged users; limit the types of computer functions they can perform; restrict the duration of each user’s online sessions, presumably to prevent the extraction of large amounts of data; “and ensure that privileged user activities are logged and that such logs are reviewed regularly.”
  • Dramatically accelerating widespread use of “multifactor authentication” or two-step ID checks. Passwords alone are insufficient access controls, officials said. Requiring personnel to log in with a smartcard or alternative form of ID can significantly reduce the chances adversaries will pierce federal networks, they added, stopping short of mandating multi-step ID checks.
  • Agencies must report on progress and problems complying with these procedures within 30 days.

… but end with a whimper.

If you recall, OPM doesn’t have the IT staff, the interest, or even a complete list of its IT assets, not to mention that this activity will conflict with other agency goals. U.S. Was Warned of System Open to Cyberattacks

How quickly do you think OPM will be able to take any of the steps Aliya outlines?

I’m betting on the government cybersecurity sprint being a multi-year egg roll.

You?

Tor for Technologists

Filed under: Cybersecurity,Privacy,Security,Tor — Patrick Durusau @ 3:34 pm

Tor for Technologists by Martin Fowler.

From the post:

Tor is a technology that is cropping up in news articles quite often nowadays. However, there exists a lot of misunderstanding about it. Even many technologists don’t see past its use for negative purposes, but Tor is much more than that. It is an important tool for democracy and freedom of speech – but it’s also something that is very useful in the day-to-day life of a technologist. Tor is also an interesting case study in how to design a system that has very specific security requirements.

The Internet is currently a quite hostile place. There are threats of all kinds, ranging from script kiddies and drive-by phishing attacks to pervasive dragnet surveillance by many of the major intelligence services in the world. The extent of these problems has only recently become clear to us. In this context, a tool like Tor fills a very important niche. You could argue that it’s a sign of the times that even a company like Facebook encourages the use of Tor to access their services. The time is right to add Tor to your tool belt.

Martin does a great job of summarizing Tor and giving an overview of what Tor does and does not do. Both are important for security conscious users (that should include you).

If you aren’t already using Tor and are a technologist, read Martin’s introduction first and then become an active user/supporter of Tor.

Fed Biz Opp – Security

Filed under: Cybersecurity,Security — Patrick Durusau @ 2:39 pm

The US Navy wants to buy your zero-day vulnerabilities by Graham Cluley.

Graham reports on a solicitation from the Department of the Navy for zero-day vulnerabilities. The solicitation has been removed but Dave Maass preserved a copy here.

From the solicitation:

70 – Common Vulnerability Exploit Products
Solicitation Number: N0018915T0245
Agency: Department of the Navy
Office: Naval Supply Systems Command
Location: NAVSUP Fleet Logistics Center Norfolk

The vendor shall provide the government with a proposed list of available vulnerabilities, 0-day or N-day (no older than 6 months old). This list should be updated quarterly and include intelligence and exploits affecting widely used software. The government will select from the supplied list and direct development of exploit binaries.

The vendor shall accept vulnerability data to include patch code, proof of concept code, or analytic white papers from the government to assist with product development. Products developed under these conditions will not be available to any other customer and will remain exclusively licensed to the government.

Documentation of technical expertise must be presented in sufficient detail for the Government to determine that your company possesses the necessary functional area expertise and experience to compete for this acquisition.

I understand the solicitation has now been removed from the FedBizOpps.gov site. I checked with the solicitation number and that appears to be true.

Perhaps the GSA is going to issue a more general solicitation on behalf of all government agencies.

The only odd thing I noticed in the solicitation was the exclusive license to the government of any exploit. Certainly possible, but that would push the cost up several times over.

There are already private exchanges for vulnerabilities, but they operate in the shadows, so it isn’t an efficient market. Congress should de-criminalize vulnerability exploits (unless used) so there can be an open market in vulnerabilities. Vendors and the government can compete alongside others in such a market.

One advantage of such a market is that vulnerability hunters could make a legitimate living from the sale of the fruits of their labors. Less temptation to engage in unsavory activities.

Second-in-Command of Al Qaeda – Most Dangerous Job in the World

Filed under: Government,Politics — Patrick Durusau @ 2:11 pm

If you have missed The Most Dangerous Job in the World, you have not missed seeing a story about the most dangerous job in the world.

I know, the Discovery series has a lot of drama and splashing around, but it isn’t like the crabs are hunting down the ships. Yes?

No, the most dangerous job in the world is being Al Qaeda’s second-in-command. The United States recently confirmed that the second-in-command of Al Qaeda, Nasir al-Wuhayshi, has been killed in a drone strike [2015], and according to CNN:

dealing a heavy setback to the leadership of the international terrorist group.

Really? What about the Al Qaeda second-in-command al-Shihri, whom the United States killed three (3) times? How Many Times Does Al Qaeda’s Number Two Need to Die? [2013]

Abu Yahya al-Libi [2012]:

White House spokesman Jay Carney called al-Libi’s death a “major blow” to the terror network.

Or, Atiyah Abd Al-Rahman Dead: Al Qaeda Second In Command Killed In Pakistan [2011]

Or, Al-Qaeda’s second-in-command, Mohammed Atef, who was killed in a US drone strike in 2001. [2001]

I feel certain that the “second-in-command” was killed multiple times between 2002 and 2011, but I lacked the interest to run them all down. If you ask nicely, I may search the fantasy reports from the State Department for all references to Al Qaeda and its leadership and prepare a canonical list of US-claimed kills.

Personally I think your time would be better spent on sources like: Eighty Percent Of Al-Qaeda No. 2s Now Dead. At least there you know the intent is to amuse. I am not certain what the intent of self-congratulatory notices about operations against Al Qaeda is; I rather doubt Al Qaeda reads United States government press releases. They are much better at getting them issued.

A Topic Map Irony

Filed under: Topic Maps — Patrick Durusau @ 12:55 pm

I have been working for weeks to find a “killer” synopsis of topic maps for a presentation later this summer. I have re-read all the old ones, mine, yours, theirs, and a number of imaginary ones. None of them really seem to be the uber topic map synopsis.

After my latest exchange with a long time correspondent, in which my most recent suggestion came up lame, a topic map irony occurred to me:

For all of the flogging of semantic diversity in the promotion of topic maps, it never occurred to me that looking for one (1) uber explanation of topic maps was going in the wrong direction.

What aspects of “topic maps” are important to one audience are very unlikely to be important to another.

What if their requirements are to point to occurrences of a subject in a data set and to maintain documentation about those subjects, separate and apart from that data set? The very concept of merging may not be relevant for a documentation use case.

What if their requirements are the modeling of relationships (associations) with multiple inputs of the same role and a pipeline of operations? The focus there is on modeling with associations. That topic maps have other characteristics may be interesting but not terribly important.

What if their requirements are the auditing of the mapping of multiple data sources that are combined for a data pipeline? There we get into merging and what basis for merging exists, etc., and perhaps not so much into associations.

And there are any number of variations on those use cases, each one of which would require a different explanation of and emphasis on topic maps.

To say nothing of having different merging models, some of which might ignore IRIs as a basis for merging.

To approach semantic diversity with an attempt at uniformity seems deeply ironic.

What was I thinking?

PS: To be sure, interchange in a community of use requires standards, but those should exist only in domain specific cases. Trying to lasso the universe of subjects in a single representation isn’t a viable enterprise.

June 15, 2015

FORTH For A Supercomputer

Filed under: Forth,Parallel Programming — Patrick Durusau @ 6:14 pm

Raspberry Pi JonesFORTH O/S

From the post:

A bare-metal operating system for Raspberry Pi, based on Jonesforth-ARM.

Jonesforth-ARM is an ARM port, by M2IHP’13 class members listed in AUTHORS, of x86 JonesForth.

x86 JonesForth is a Linux-hosted FORTH presented in a Literate Programming style by Richard W.M. Jones rich@annexia.org originally at http://annexia.org/forth. Comments embedded in the original provide an excellent FORTH implementation tutorial. See the /annexia/ directory for a copy of this original source.

The algorithm for our unsigned DIVMOD instruction is extracted from ‘ARM Software Development Toolkit User Guide v2.50’ published by ARM in 1997-1998.

Firmware files to make bootable images are maintained at https://github.com/raspberrypi/firmware. See the /firmware/ directory for local copies used in the build process.

The Raspberry Pi or something very similar will be commonplace in the IoT.

Will those machines be doing your bidding or someone else’s?

htmlwidgets for Rich Data Visualization in R

Filed under: Graphics,R,Visualization — Patrick Durusau @ 6:04 pm

htmlwidgets for Rich Data Visualization in R

From the webpage:

With the booming popularity of big data and data science, nice visualizations are getting a lot of attention. Sure, R and Python have built-in support for basic graphs and charts, but what if you want more? What if you want interaction, so you can mouse over or rotate a visualization? What if you want to explore more than a static image? Enter Rich Visualizations.

And creating them is not as hard as you might think!

Four compelling examples of interactive graphics using htmlwidgets to bring interactivity to R code.

At first I thought this might be useful for an interactive map of cybersecurity incompetence inside the DC beltway but quickly realized that a map with only one uniform feature isn’t all that useful.

I am sure htmlwidgets will be useful for many other visualizations!

Enjoy!

Neo4j 2.3.0 Milestone 2 Release

Filed under: Graphs,Neo4j — Patrick Durusau @ 4:32 pm

Neo4j 2.3.0 Milestone 2 Release by Michael Hunger.

New features (not all operational) include:

  • Highly Scalable Off-Heap Graph Cache
  • Mac Installer
  • Cypher Cost-Based Optimizer Improvements
  • Compiled Runtime
  • UX Improvements

And the important information:

Download: http://neo4j.com/download/#milestone

Documentation: http://neo4j.com/docs/milestone/

Cypher Reference Card: http://neo4j.com/docs/milestone/cypher-refcard

Feedback please to: feedback@neotechnology.com

GitHub: http://github.com/neo4j/neo4j/issues

Enjoy!

Clojure By Example

Filed under: Clojure,Programming — Patrick Durusau @ 3:47 pm

Clojure By Example by Hirokuni Kim.

From About:

I don’t like reading thick O’Reilly books when I start learning new programming languages. Rather, I like starting by writing small and dirty code. If you take this approach, having many simple code examples is extremely helpful because I can find answers to these questions very easily.

How can I define a function?

What’s the syntax for if and else?

Does the language support string interpolation?

What scopes of variables are available?

These are very basic questions, but enough to start hacking with the new languages.

Recently, I needed to learn this completely new language Clojure but couldn’t find what I wanted. So, I decided to create one while learning Clojure.

Hopefully, this helps you to start learning and writing Clojure.

Personally I like the side-by-side text and code presentation. You?

    15 Easy Solutions To Your Data Frame Problems In R

    Filed under: Data Frames,R,Spark — Patrick Durusau @ 3:40 pm

    15 Easy Solutions To Your Data Frame Problems In R.

    From the post:

    R’s data frames regularly create somewhat of a furor on public forums like Stack Overflow and Reddit. Starting R users often experience problems with the data frame in R and it doesn’t always seem to be straightforward. But does it really need to be so?

    Well, not necessarily.

    With today’s post, DataCamp wants to show you that data frames don’t need to be hard: we offer you 15 easy, straightforward solutions to the most frequently occurring problems with data.frame. These issues have been selected from the most recent and sticky or upvoted Stack Overflow posts. If, however, you are more interested in getting an elaborate introduction to data frames, you might consider taking a look at our Introduction to R course.

    If you are having trouble with frames in R, you are going to have trouble with frames in Spark.

    Questions and solutions you will see here:

    • How To Create A Simple Data Frame in R
    • How To Change A Data Frame’s Row And Column Names
    • How To Check A Data Frame’s Dimensions
    • How To Access And Change A Data Frame’s Values …. Through The Variable Names
    • … Through The [,] and $ Notations
    • Why And How To Attach Data Frames
    • How To Apply Functions To Data Frames
    • How To Create An Empty Data Frame
    • How To Extract Rows And Columns, Subsetting Your Data Frame
    • How To Remove Columns And Rows From A Data Frame
    • How To Add Rows And Columns To A Data Frame
    • Why And How To Reshape A Data Frame From Wide To Long Format And Vice Versa
    • Using stack() For Simply Structured Data Frames
    • Using reshape() For Complex Data Frames
    • Reshaping Data Frames With tidyr
    • Reshaping Data Frames With reshape2
    • How To Sort A Data Frame
    • How To Merge Data Frames
    • Merging Data Frames On Row Names
    • How To Remove Data Frames’ Rows And Columns With NA-Values
    • How To Convert Lists Or Matrices To Data Frames And Back
    • Changing A Data Frame To A Matrix Or List

    Rather than looking for a “cheatsheet” on data frames, I suggest you work your way through these solutions, more than once. Over time you will learn the ones relevant to your particular domain.
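
    To show how little ceremony is involved, here is a minimal sketch (base R only, made-up data) that touches several of the solutions above: creating a data frame, renaming columns, checking dimensions, subsetting, and adding a column.

    # Create a simple data frame from two vectors (made-up data).
    df <- data.frame(name = c("Ada", "Grace", "Edsger"),
                     born = c(1815, 1906, 1930))

    names(df) <- c("pioneer", "birth_year")  # rename the columns
    dim(df)                                  # dimensions: 3 rows, 2 columns

    df$birth_year                            # access a column by name
    df[df$birth_year > 1900, ]               # subset rows with the [ , ] notation

    # Add a column; the vector must match the number of rows.
    df$field <- c("mathematics", "compilers", "algorithms")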

    Enjoy!

    A gallery of interesting IPython Notebooks

    Filed under: Programming,Python — Patrick Durusau @ 3:05 pm

    A gallery of interesting IPython Notebooks by David Mendler.

    From the webpage:

    This page is a curated collection of IPython notebooks that are notable for some reason. Feel free to add new content here, but please try to only include links to notebooks that include interesting visual or technical content; this should not simply be a dump of a Google search on every ipynb file out there.

    https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks#general-python-programming

    The table of contents:

    1. Entire books or other large collections of notebooks on a topic
    2. Scientific computing and data analysis with the SciPy Stack
    3. General Python Programming
    4. Notebooks in languages other than Python
    5. Miscellaneous topics about doing various things with the Notebook itself
    6. Reproducible academic publications
    7. Other publications using the Notebook
    8. Data-driven journalism
    9. Whimsical notebooks
    10. Videos of IPython being used in the wild

    Yes, quoting the table of contents may impact my ranking by Google, but I prefer content that is useful to me and hopefully to you. Please bookmark this site and pass it on.

    Project Oberon

    Filed under: Computer Science,Cybersecurity,Programming,Security — Patrick Durusau @ 2:36 pm

    Project Oberon

    From the webpage:

    Project Oberon is a design for a complete computer system. Its simplicity and clarity enables a single person to know and implement the entire system, while still providing enough power to make it useful and usable in a production environment. This website contains information and resources to help you explore and use the system. The project is fully described in a book — Project Oberon: The Design of an Operating System, a Compiler, and a Computer — written by the designers, Niklaus Wirth and Jürg Gutknecht. The second (2013) edition of the book and the accompanying code are published on Niklaus Wirth’s website. We provide links to the original material here, along with local packaged copies, with kind permission from the authors.

    You are unlikely to encounter an Oberon system in production use at most government or enterprise offices. Still, the experience of knowing how computer operating systems work will enable you to ask pointed security questions and to cut through the fog of evasion.

    Niklaus comments in the 2013 preface:

    But surely new systems will emerge, perhaps for different, limited purposes, allowing for smaller systems. One wonders where their designers will study and learn their trade. There is little technical literature, and my conclusion is that understanding is generally gained by doing, that is, “on the job”. However, this is a tedious and suboptimal way to learn. Whereas sciences are governed by principles and laws to be learned and understood, in engineering experience and practice are indispensable. Does Computer Science teach laws that hold for (almost) ever? More than any other field of engineering, it would be predestined to be based on rigorous mathematical principles. Yet, its core hardly is. Instead, one must rely on experience, that is, on studying sound examples.

    The main purpose of and the driving force behind this project is to provide a single book that serves as an example of a system that exists, is in actual use, and is explained in all detail. This task drove home the insight that it is hard to design a powerful and reliable system, but even much harder to make it so simple and clear that it can be studied and fully understood. Above everything else, it requires a stern concentration on what is essential, and the will to leave out the rest, all the popular “bells and whistles”.

    Recently, a growing number of people has become interested in designing new, smaller systems. The vast complexity of popular operating systems makes them not only obscure, but also provides opportunities for “back doors”. They allow external agents to introduce spies and devils unnoticed by the user, making the system attackable and corruptible. The only safe remedy is to build a safe system anew from scratch.

    Did you catch that last line?

    The only safe remedy is to build a safe system anew from scratch.

    We don’t all need to build diverse (safe) systems, but it does sound like a task the government could contract out to computer science departments. Adoption by the government alone would create a large enough market share to make it a viable platform.

    Think of it this way: we can keep building sieves upon sieves upon sieves… the nth sieve, all the while proclaiming increasing security, or a safe system can be built. Developers should think of all the apps to be re-invented for the safe system. Something for everybody.

    Map of the Tracks of Yu, 1136

    Filed under: History,Mapping,Maps — Patrick Durusau @ 12:47 pm

    [Image: Map of the Tracks of Yu, 1136]

    I first saw this on Instagram at: https://instagram.com/p/363b2lOpn7/ with the following comment:

    Map of the Tracks of Yu, 1136, is the first known map to use a cartographic grid.

    The David Rumsey Map Collection, Cartography Associates, offers this more complete image from the Harvard Fine Arts Library:

    [Image: Yujitu (Map of the Tracks of Yu), 1136. Courtesy Harvard Fine Arts Library.]

    And the following blurb:

    Yujitu (Map of the Tracks of Yu), 1136. This map’s title derives from the Yugong, a treatise describing the sage-king Yu’s mythical channeling of China’s rivers. It is a rare surviving example of cartography used in the 12th century for public education, mixing classical references with later administrative history. Carved on a large stone tablet so that students or visitors could make rubbings, the map strikingly depicts a riverine network on a regular grid of squares intended to represent 100 li to a side. Read a more detailed description of this map by Alexander Akin, Ph.D. View the map in Google Earth. The image is courtesy Harvard Fine Arts Library.

    To tempt you into further reading, Alexander Akin’s description opens with these lines:

    The Yujitu (Map of the Tracks of Yu) is the earliest extant map based on the Yugong (introduced below). Engraved in stone in 1136, the map measures about one meter to a side. It was carved into the face of an upright monument on the grounds of a school in Xi’an so that visitors could make detailed rubbings using paper and ink. These rubbings could be taken away for later reference. The stone plaque thus functioned as something like an immovable printing block, remaining in Xi’an while copies of its map found their way further afield. Harvard University holds one such rubbing made from the original stone, and has generously granted permission for the use of this unusually clear image, which shows more detail than any previously published version….

    Alexander struggles, as only a modern would, over the “accuracy” of the map, a map that at times accords with the findings of modern map makers and at times accords with its Confucian heritage.

    With maps in general and topic maps in particular, a question of “accuracy” cannot be answered without first being supplied with the measure to be applied in answering it.

    June 14, 2015

    CVPR 2015 Papers

    CVPR [Computer Vision and Pattern Recognition] 2015 Papers by @karpathy.

    This is very cool!

    From the webpage:

    Below every paper are the TOP 100 most-occurring words in that paper, and their color is based on an LDA topic model with k = 7.
    (It looks like 0 = datasets?, 1 = deep learning, 2 = videos, 3 = 3D Computer Vision, 4 = optimization?, 5 = low-level Computer Vision?, 6 = descriptors?)

    You can sort by LDA topics, view the PDFs, and rank the other papers by tf-idf similarity to a particular paper.

    Very impressive and suggestive of other refinements for viewing a large number of papers in a given area.
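
    If you want to experiment with the ranking idea yourself, here is a minimal sketch of tf-idf plus cosine similarity in base R, over made-up one-line “abstracts” (@karpathy’s page is built with his own code; this shows only the technique, not his implementation).

    # Toy corpus: three made-up abstracts, named a, b, c.
    docs <- c(a = "deep learning for image classification",
              b = "video segmentation with deep networks",
              c = "3d reconstruction from stereo images")

    tokens <- strsplit(tolower(docs), "\\s+")
    vocab  <- sort(unique(unlist(tokens)))

    # Term-frequency matrix: one row per document, one column per word.
    tf <- t(sapply(tokens, function(ws) table(factor(ws, levels = vocab))))

    # Inverse document frequency: words appearing in fewer documents weigh more.
    idf   <- log(length(docs) / colSums(tf > 0))
    tfidf <- sweep(tf, 2, idf, "*")

    # Rank the other documents by cosine similarity to document "a".
    cosine <- function(x, y) sum(x * y) / sqrt(sum(x^2) * sum(y^2))
    sort(apply(tfidf, 1, cosine, y = tfidf["a", ]), decreasing = TRUE)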

    Enjoy!

    10 Raspberry Pi Projects For Learning IoT

    Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 3:43 pm

    10 Raspberry Pi Projects For Learning IoT by Curtis Franklin Jr.

    From the post:

    The Internet of Things (IoT) is, arguably, the hottest topic in IT. Every organization wants to participate in the IoT, and many IT professionals want to know how to add IoT skills to their resume. There are lots of options for learning about IoT, but nothing really beats the hands-on experience.

    One of the key learning platforms for IoT is the Raspberry Pi. The RasPi is a popular platform because it offers a complete Linux server in a tiny platform for a very low cost. In fact, one of the most difficult parts of using Raspberry Pi for learning about IoT is picking the right projects with which to begin.

    A great way to pick up IoT skills, so you can assess your vulnerability as the IoT slowly creeps into your home. Or not, if you are security conscious.

    As you already know, the IoT offers exponential growth in bugs and unlawful access opportunities. Good news for the unlawful, because I understand a “30-day cybersecurity sprint” has been ordered by Tony Scott, CIO for the United States. Take that as fair warning that the script kiddie vulnerabilities, also known as zero-day vulnerabilities in FBI press releases, are likely to disappear from government sites over the next five (5) to ten (10) years.

    ClojureCL

    Filed under: Clojure,OpenCL,Parallel Programming,Parallelism — Patrick Durusau @ 2:35 pm

    ClojureCL

    From the getting started page:

    ClojureCL is a Clojure library for High Performance Computing with OpenCL, which supports:

  • GPUs from AMD, nVidia, Intel;
  • CPUs from Intel, AMD, ARM, etc.;
  • Computing accelerators and embedded devices (Intel Xeon Phi, Parallella, etc.).

    BTW, the page is asking for native English speakers to help edit its pages. The project is recent enough that the documentation may not be set in stone, unlike some projects.

    You have to smile at the comment found under: Making sense of OpenCL:

    Once you get past the beginner’s steep learning curve, it makes sense, and opens a whole new world of high-performance computing – you practically have a supercomputer on your desktop. (emphasis added)

    I’m not accustomed to that much honesty in documentation. 😉

    Surprising enough to make it worthwhile to read further.

    Enjoy!

    #DallasPDShooting

    Filed under: Artificial Intelligence,News,Politics — Patrick Durusau @ 2:13 pm

    AJ+ tweets today:

    A gunman fired at police and had a van full of pipe bombs, but no one called him a terrorist. #DallasPDShooting

    That’s easy enough to explain:

    1. He wasn’t set up by the FBI
    2. He wasn’t Muslim
    3. He wasn’t black

    Next question.
