Another Word For It – Patrick Durusau on Topic Maps and Semantic Diversity

June 25, 2015

Internationalization & Unicode Conference IUC 39

Filed under: Conferences,Unicode — Patrick Durusau @ 8:00 pm

Internationalization & Unicode Conference IUC 39

October 26-28, 2015 – Santa Clara, CA USA

From the webpage:

The Internationalization and Unicode® Conference (IUC) is the premier event covering the latest in industry standards and best practices for bringing software and Web applications to worldwide markets. This annual event focuses on software and Web globalization, bringing together internationalization experts, tools vendors, software implementers, and business and program managers from around the world. 

Expert practitioners and industry leaders present detailed recommendations for businesses looking to expand to new international markets and those seeking to improve time to market and cost-efficiency of supporting existing markets. Recent conferences have provided specific advice on designing software for European countries, Latin America, China, India, Japan, Korea, the Middle East, and emerging markets.

This highly rated conference features excellent technical content, industry-tested recommendations and updates on the latest standards and technology. Subject areas include web globalization, programming practices, endangered languages and un-encoded scripts, integrating with social networking software, and implementing mobile apps. This year’s conference will also highlight new features in Unicode and other relevant standards. 

In addition, please join us in welcoming over 20 first-time speakers to the program! This is just another reason to attend; fresh talks, fresh faces, and fresh ideas!

(emphasis and colors in original)

If you want your software to be an edge case and hard to migrate in the future, go ahead, don’t support Unicode. Unicode libraries exist in all the major and many minor programming languages. Not supporting Unicode isn’t simpler, it’s just dumber.

Sorry, I have been a long-time follower of the Unicode work and an occasional individual member of the Consortium. Those of us old enough to remember pre-Unicode days want to lessen the burden of interchanging texts, not increase it.
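
If the interchange burden sounds abstract, here is a minimal Python sketch of the classic pitfall a Unicode library handles for you: two strings that differ byte-for-byte yet are the same text, reconciled by normalization.

  import unicodedata

  # "é" can arrive as one code point (U+00E9) or as two (U+0065 + U+0301).
  composed = "\u00e9"
  decomposed = "e\u0301"

  print(composed == decomposed)  # False: the raw strings differ

  # After NFC normalization both spellings compare equal.
  print(unicodedata.normalize("NFC", composed) ==
        unicodedata.normalize("NFC", decomposed))  # True

Software that skips this step fails at string comparison, searching and sorting in ways that only show up after deployment.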

Enjoy the conference!

1.5 Million Slavery Era Documents Will Be Digitized…

Filed under: Crowd Sourcing,History — Patrick Durusau @ 7:40 pm

1.5 Million Slavery Era Documents Will Be Digitized, Helping African Americans to Learn About Their Lost Ancestors

From the post:

The Freedmen’s Bureau Project — a new initiative spearheaded by the Smithsonian, the National Archives, the Afro-American Historical and Genealogical Society, and the Church of Jesus Christ of Latter-Day Saints — will make available online 1.5 million historical documents, finally allowing ancestors [sic. descendants] of former African-American slaves to learn more about their family roots. Near the end of the US Civil War, The Freedmen’s Bureau was created to help newly-freed slaves find their footing in postbellum America. The Bureau “opened schools to educate the illiterate, managed hospitals, rationed food and clothing for the destitute, and even solemnized marriages.” And, along the way, the Bureau gathered handwritten records on roughly 4 million African Americans. Now, those documents are being digitized with the help of volunteers, and, by the end of 2016, they will be made available in a searchable database at discoverfreedmen.org. According to Hollis Gentry, a Smithsonian genealogist, this archive “will give African Americans the ability to explore some of the earliest records detailing people who were formerly enslaved,” finally giving us a sense “of their voice, their dreams.”

You can learn more about the project by watching the video below, and you can volunteer your own services here.

A crowd-sourced project with a great deal of promise regarding records on 4 million African Americans who were previously held as slaves.

Making the documents “searchable” will be of immense value. However, imagine capturing the myriad relationships documented in these records so that subsequent searchers can more quickly find relationships you have already documented.

Finding former slaves with a common owner or other commonalities, could be the clues others need to untangle a past we only see dimly.
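
As a toy illustration of capturing such commonalities, here is a minimal Python sketch; the record fields and names are hypothetical, not the Freedmen's Bureau schema.

  from collections import defaultdict

  # Hypothetical, simplified records for illustration only.
  records = [
      {"person": "Sarah", "former_owner": "J. Smith", "county": "Amite"},
      {"person": "Isaac", "former_owner": "J. Smith", "county": "Amite"},
      {"person": "Mary", "former_owner": "R. Jones", "county": "Hinds"},
  ]

  # Group people by a shared attribute to surface candidate relationships.
  by_owner = defaultdict(list)
  for r in records:
      by_owner[r["former_owner"]].append(r["person"])

  for owner, people in by_owner.items():
      if len(people) > 1:
          print(owner, "->", people)  # Sarah and Isaac shared an owner

Once such a relationship is documented, a topic map can preserve it so the next searcher inherits the connection instead of rediscovering it.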

Topic maps are a nice fit for this work.

Eidyia (Scientific Python)

Filed under: Python,Science — Patrick Durusau @ 2:30 pm

Eidyia

From the webpage:

A scientific Python 3 environment configured with Vagrant. This environment is designed to be used by professionals and students, with ease of access a priority.

Libraries included:

Databases

Eidyia also includes MongoDB and PostgreSQL

Getting Started

With Vagrant and VirtualBox installed:

Watch out for the Vagrant link on the GitHub page; it is broken. The correct link appears above. (I am posting an issue about the link to GitHub.)

The more experience I have with virtual environments, the more I like them. Mostly from a configuration perspective. I don’t have to worry about library upgrades stepping on other programs, port confusion, etc.

Enjoy!

June 24, 2015

Flash Audit on OPM Infrastructure Update Plan

Filed under: Cybersecurity,Government,Project Management — Patrick Durusau @ 3:56 pm

Flash Audit Alert – U.S. Office of Personnel Management’s Infrastructure Improvement Project (Report No. 4A-CI-00-15-055)

Hot off the presses! Just posted online today!

From the report:

The U.S. Office of Personnel Management (OPM) Office of the Inspector General (OIG) is issuing this Flash Audit Alert to bring to your immediate attention serious concerns we have regarding the Office of the Chief Information Officer’s (OCIO) infrastructure improvement project (Project). This Project includes a full overhaul of the agency’s technical infrastructure by implementing additional information technology (IT) security controls and then migrating the entire infrastructure into a completely new environment (referred to as Shell).

Our primary concern is that the OCIO has not followed U.S. Office of Management and Budget (OMB) requirements and project management best practices. The OCIO has initiated this project without a complete understanding of the scope of OPM’s existing technical infrastructure or the scale and costs of the effort required to migrate it to the new environment.

In addition, we have concerns with the nontraditional Government procurement vehicle that was used to secure a sole-source contract with a vendor to manage the infrastructure overhaul. While we agree that the sole-source contract may have been appropriate for the initial phases of securing the existing technical environment, we do not agree that it is appropriate to use this vehicle for the long-term system migration efforts.

How bad is it?

Several examples of critical processes that OPM has not completed for this project include:

  • Project charter;
  • Comprehensive list of project stakeholders;
  • Feasibility study to address scope and timeline in concert with budgetary justification/cost estimates;
  • Impact assessment for existing systems and stakeholders;
  • Quality assurance plan and procedures for contractor oversight;
  • Technological infrastructure acquisition plan;
  • High-level test plan; and,
  • Implementation plan to include resource planning, readiness assessment plan, success factors, conversion plan, and back-out plan.

The report isn’t that long, six (6) pages in total, but it is a snapshot of bad project management in its essence.

I helped torpedo a project once upon a time where management defended a one-paragraph email description of a proposed CMS system as being “agile.” The word they were looking for was “juvenile,” but they were unwilling to admit to years of mistakes in allowing the “programmer” (used very loosely) to remain employed.

What do you think of inspectors general as an audience for topic maps? They investigate large and disorganized agencies, repeatedly over time, with lots of players and documents. Thoughts?

PS: I read about the flash audit report several days ago but didn’t want to post about it until I could share a source for it. Would make great example material for a course on project management.

World Factbook 2015 (paper, online, downloadable)

Filed under: Geography,Government,Government Data — Patrick Durusau @ 3:22 pm

World Factbook 2015 (GPO)

From the webpage:

The Central Intelligence Agency’s World Factbook provides brief information on the history, geography, people, government, economy, communications, transportation, military, and transnational issues for 267 countries and regions around the world.

The CIA’s World Factbook also contains several appendices and maps of major world regions, which are located at the very end of the publication. The appendices cover abbreviations, international organizations and groups, selected international environmental agreements, weights and measures, cross-reference lists of country and hydrographic data codes, and geographic names.

For maps, it provides a country map for each country entry and a total of 12 regional reference maps that display the physical features and political boundaries of each world region. It also includes a pull-out Flags of the World, a Physical Map of the World, a Political Map of the World, and a Standard Time Zones of the World map.

Who should read The World Factbook? It is a great one-stop reference for anyone looking for an expansive body of international data on world statistics, and has been a must-have publication for:

  • US Government officials and diplomats
  • News organizations and researchers
  • Corporations and geographers
  • Teachers, professors, librarians, and students
  • Anyone who travels abroad or who is interested in foreign countries

The print version costs $89.00 (U.S.), runs 923 pages, and weighs in at 5.75 lb. in paperback.

A convenient and frequently updated alternative is the online CIA World Factbook.

I can’t compare the two versions because I am not going to spend $89.00 for an arm wrecker. 😉

You can also download a copy of the HTML version.

I downloaded and unzipped the file, only to find that the last update was in June, 2014.

That may be updated soon or it may not. I really don’t know.

If you just need background information that is unlikely to change or you want to avoid surveillance on what countries you look at and for how long, download the 2014 HTML version or pony up for the 2015 paper version.

Semi-nude Photos and iPhones

Filed under: Privacy — Patrick Durusau @ 1:27 pm

Graham Cluley has advice in Dear politicians, here’s some advice before you check out semi-nude photos on your iPhone… that works for everyone viewing semi-nude photos on their iPhones, not just politicians.

In a prior post, Nude Heather Morris pictures – hacker blamed, Graham has this advice on taking nude photos of yourself (iPhone or not):

[image: nude-photo-advice]

Keeping that in your wallet may or may not help.

Startup idea: App that prevents nude or semi-nude photos of the phone owner. 😉

Would you choose a 1,425% or 0% ROI?

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:46 am

The 2015 Trustwave Global Security Report calculates that attacks on end users enjoy an ROI of 1,425%.
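
For a sense of where a figure like that comes from, here is the arithmetic as a short Python sketch; the cost and return figures are my recollection of the report's example and should be treated as assumptions.

  cost = 5_900     # assumed one-month attack campaign cost (USD)
  profit = 84_100  # assumed net revenue over the same month (USD)

  print(f"{profit / cost * 100:.0f}%")  # -> 1425%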

Can you guess the liability for producing software that allows attacks on end users?

It’s the same amount as the return on making software secure. That is 0% ROI.

No doubt the Obama administration will spend $millions if not $billions in its multi-year cyber egg roll to improve cybersecurity for government networks, but the result will be:

[image: present-IT-stack-plus-security]

an insecure IT stack topped off by insecure security software.

Unless and until there are economic incentives and hence meaningful ROIs for secure software, cyberinsecurity will continue.

Given the near idolatry of capitalism and economic incentives in the United States, it is truly surprising that this lesson remains unlearned.

Well, save for the realization that secure software requires more investment in tools, training and testing than current approaches to building commercial software.

Customers demanding more secure software, customers willing to pay more for secure software, and liability for the production of insecure software are all keys to solving (over time) the current state of cyberinsecurity.

June 23, 2015

Country Reports on Terrorism 2014

Filed under: Government,Security — Patrick Durusau @ 7:33 pm

Country Reports on Terrorism 2014 by United States Department of State. (June 2015)

The report runs some three hundred and eighty-eight (388) pages but I thought you would find the criteria for inclusion (starting on page 386) rather amusing.

Note in particular that:

Section 2656f(d) of Title 22 of the United States Code defines certain key terms used in Section 2656f(a) as follows:

(2) the term “terrorism” means premeditated, politically motivated violence perpetrated against non-combatant targets by subnational groups or clandestine agents; and (emphasis added)

I am guessing that means that US drone pilots are not terrorists because they are not part of “subnational groups or clandestine agents.”

The rest of the report is fairly disheartening. If you have been following the attempts of the Obama White House to say nice things about the slavers operating out of Malaysia, you will find a number of laudatory comments about Malaysia in this report.

You will notice that the number of those “killed” by “terrorists” figures prominently in the report, but no mention is made of civilian casualties inflicted by the United States for the same years.

As a political theater document you may find this useful for detecting shifts in who is a “favorite” of the State Department when this report is published every year.

The report originates under the following authority:

Section 2656f(a) of Title 22 of the United States Code states as follows:

(a) … The Secretary of State shall transmit to the Speaker of the House of Representatives and the Committee on Foreign Relations of the Senate, by April 30 of each year, a full and complete report providing –

(1) (A) detailed assessments with respect to each foreign country –

(i) in which acts of international terrorism occurred which were, in the opinion of the Secretary, of major significance;

(ii) about which the Congress was notified during the preceding five years pursuant to Section 2405(j) of the Export Administration Act of 1979; and

(iii) which the Secretary determines should be the subject of such report; and

(B) detailed assessments with respect to each foreign country whose territory is being used as a sanctuary for terrorist organizations;

(2) all relevant information about the activities during the preceding year of any terrorist group, and any umbrella group under which such terrorist group falls, known to be responsible for the kidnapping or death of an American citizen during the preceding five years, any terrorist group known to have obtained or developed, or to have attempted to obtain or develop, weapons of mass destruction, any terrorist group known to be financed by countries about which Congress was notified during the preceding year pursuant to section 2405(j) of the Export Administration Act of 1979, any group designated by the Secretary as a foreign terrorist organization under section 219 of the Immigration and Nationality Act (8 U.S.C. 1189), and any other known international terrorist group which the Secretary determines should be the subject of such report;

(3) with respect to each foreign country from which the United States Government has sought cooperation during the previous five years in the investigation or prosecution of an act of international terrorism against United States citizens or interests, information on –

(A) the extent to which the government of the foreign country is cooperating with the United States Government in apprehending, convicting, and punishing the individual or individuals responsible for the act; and

(B) the extent to which the government of the foreign country is cooperating in preventing further acts of terrorism against United States citizens in the foreign country; and

(4) with respect to each foreign country from which the United States Government has sought cooperation during the previous five years in the prevention of an act of international terrorism against such citizens or interests, the information described in paragraph (3)(B).

Section 2656f(d) of Title 22 of the United States Code defines certain key terms used in Section 2656f(a) as follows:

(1) the term “international terrorism” means terrorism involving citizens or the territory of more than one country;

(2) the term “terrorism” means premeditated, politically motivated violence perpetrated against non-combatant targets by subnational groups or clandestine agents; and

(3) the term “terrorist group” means any group practicing, or which has significant subgroups which practice, international terrorism.

I first saw this in a tweet by switched.

June 22, 2015

LuxRender

Filed under: Graphics,Visualization — Patrick Durusau @ 4:04 pm

LuxRender – Physically Based Renderer.

From the webpage:

LuxRender is a physically based and unbiased rendering engine. Based on state of the art algorithms, LuxRender simulates the flow of light according to physical equations, thus producing realistic images of photographic quality.

LuxRender is now a member project of the Software Freedom Conservancy which provides administrative and financial support to FOSS projects. This allows us to receive donations, which can be tax deductible in the US.

Physically based spectral rendering

LuxRender is built on physically based equations that model the transportation of light. This allows it to accurately capture a wide range of phenomena which most other rendering programs are simply unable to reproduce. This also means that it fully supports high-dynamic range (HDR) rendering.

Materials

LuxRender features a variety of material types. Apart from generic materials such as matte and glossy, physically accurate representations of metal, glass, and car paint are present. Complex properties such as absorption, dispersive refraction and thin film coating are available.

Fleximage (virtual film)

The virtual film allows you to pause and continue a rendering at any time. The current state of the rendering can even be written to a file, so that the computer (or even another computer) can continue rendering at a later moment.

Free for everyone

LuxRender is and will always be free software, both for private and commercial use. It is being developed by people with a passion for programming and for computer graphics who like sharing their work. We encourage you to download LuxRender and use it to express your artistic ideas. (learn more)

Too advanced for my graphic skills but I thought some of you might find this useful in populating your topic maps with high-end visualizations.

I first saw this in a tweet by David Bucciarelli that announced the LuxRender v1.5RC1 release.

Learning to Execute

Filed under: Machine Learning,Neural Networks — Patrick Durusau @ 3:18 pm

Learning to Execute by Wojciech Zaremba and Ilya Sutskever.

Abstract:

Recurrent Neural Networks (RNNs) with Long Short-Term Memory units (LSTM) are widely used because they are expressive and are easy to train. Our interest lies in empirically evaluating the expressiveness and the learnability of LSTMs in the sequence-to-sequence regime by training them to evaluate short computer programs, a domain that has traditionally been seen as too complex for neural networks. We consider a simple class of programs that can be evaluated with a single left-to-right pass using constant memory. Our main result is that LSTMs can learn to map the character-level representations of such programs to their correct outputs. Notably, it was necessary to use curriculum learning, and while conventional curriculum learning proved ineffective, we developed a new variant of curriculum learning that improved our networks’ performance in all experimental conditions. The improved curriculum had a dramatic impact on an addition problem, making it possible to train an LSTM to add two 9-digit numbers with 99% accuracy.

Code to replicate the experiments: https://github.com/wojciechz/learning_to_execute.
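
As a sketch of the addition task's data generation, and of the curriculum idea of ramping up difficulty, consider the following Python; the exact input formatting in the paper may differ.

  import random

  def addition_example(max_digits):
      # One character-level training pair; the trailing "." as a terminator
      # is an assumption about formatting, not the paper's exact scheme.
      a = random.randint(0, 10 ** max_digits - 1)
      b = random.randint(0, 10 ** max_digits - 1)
      return f"{a}+{b}.", f"{a + b}."

  # A naive curriculum: raise the digit count as training progresses.
  for difficulty in (2, 5, 9):
      print(addition_example(difficulty))

The paper's finding is that this naive schedule proved ineffective and their new curriculum variant worked better.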

A step towards generation of code that conforms to coding standards?

I first saw this in a tweet by samin.

Mars Code

Filed under: Cybersecurity,Programming,Security,Software,Software Engineering — Patrick Durusau @ 2:55 pm

Mars Code by Gerard Holzmann, JPL Laboratory for Reliable Software.

Abstract:

On August 5 at 10:18 p.m. PDT, a large rover named Curiosity made a soft landing on the surface of Mars. Given the one-way light-time to Mars, the controllers on Earth learned about the successful touchdown 14 minutes later, at 10:32 p.m. PDT. As can be expected, all functions on the rover, and on the spacecraft that brought it to Mars, are controlled by software. In this talk we review the process that was followed to secure the reliability of this code.

Gerard Holzmann is a senior research scientist and a fellow at NASA’s Jet Propulsion Laboratory, the lab responsible for the design of the Mars Science Laboratory Mission to Mars and its Curiosity Rover. He is best known for designing the Logic Model Checker Spin, a broadly used tool for the logic verification of multi-threaded software systems. Holzmann is a fellow of the ACM and a member of the National Academy of Engineering.

Timemark 8:50 starts the discussion of software environments for testing.

The first slide about software reads:

3.8 million lines
~ 60,000 pages
~ 100 really large books

120 Parallel Threads

2 CPUs (1 spare, not parallel, hardware backup)

5 years development time, with a team of 40 software engineers, < 10 lines of code per hour

1 customer, 1 use: it must work the first time

So how do you make sure you get it right?

Steps they took to make the software right:

  1. adopted a risk-based Coding Standard with tool-based compliance checks (very few rules, and every rule is backed by a mission failure that occurred because the rule wasn’t followed)
  2. provided training & Certification for software developers
  3. conducted daily builds integrated with Static Source Code Analysis (with penalties for breaking the build)
  4. used a tool-based Code Review process
  5. performed thorough unit and (daily) integration testing
  6. did Logic Verification of critical subsystems with a model checker

He continues to examine each of these areas in detail. Be forewarned: the first level of conformance is compiling with all warnings on and having 0 warnings. The bare minimum.
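
To give a flavor of what “tool-based compliance checks” can mean in practice, here is a toy Python checker; the banned constructs are illustrative stand-ins, not JPL's actual rule set.

  import re
  import sys

  # Illustrative rules only: flag goto and any dynamic allocation.
  BANNED = {
      "goto": re.compile(r"\bgoto\b"),
      "dynamic allocation": re.compile(r"\bmalloc\s*\("),
  }

  violations = 0
  for path in sys.argv[1:]:
      with open(path) as src:
          for lineno, line in enumerate(src, 1):
              for rule, pattern in BANNED.items():
                  if pattern.search(line):
                      print(f"{path}:{lineno}: banned construct ({rule})")
                      violations += 1

  sys.exit(1 if violations else 0)  # a non-zero exit can fail the daily build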

BTW, there are a number of resources online at the JPL Laboratory for Reliable Software (LaRS).

Share this post with anyone who claims it is too hard to write secure software. It may be, for them, but not for everyone.

One bold, misinformed spider…

Filed under: Government,Politics — Patrick Durusau @ 1:36 pm

One bold, misinformed spider slows a colony’s ability to learn by Susan Milius.

From the post:

The wrong-headed notions of an influential individual can make a group sluggish in learning from its mistakes, even among spiders.

Velvet spiders (Stegodyphus dumicola) with bold behavior but incorrect information were bad influences on their colonies, researchers reported June 11 at the annual conference of the Animal Behavior Society. Spider growth faltered in these colonies because the naïve group mates were slow to catch on to bold ones’ errors. In contrast, misinformed shier spiders didn’t undermine their colonies’ prospects.

The rest of the article is fire-walled but you get the gist of the finding.

What if you had ten+ (10+) bold, misinformed spiders with seemingly unlimited TV time? Would that have the potential to make an entire nation dumber?

Is this why federal agencies function so poorly? Bold, misinformed spiders are appointed to positions of leadership?

There is certainly an argument for editing topic map input to eliminate views of bold, misinformed individuals.

URLs Are Porn Vulnerable

Filed under: Identifiers,Semantic Web,WWW — Patrick Durusau @ 10:34 am

Graham Cluley reports in Heinz takes the heat over saucy porn QR code that some bottles of Heinz Hot Ketchup provide more than “hot” ketchup. The QR code on the bottle leads to a porn site. (It is hard to put a “prize” in a ketchup bottle.)

Graham observes that a domain registration lapsed for Heinz and the new owner wasn’t in the same line of work.

Are you presently maintaining every domain you have ever registered?
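
A periodic audit does not require much code. Here is a minimal Python sketch, assuming the third-party requests library, a list of your domains, and a marker string you expect on pages you still control; all three are illustrative.

  import requests

  domains = ["example.com", "old-promo-site.com"]  # hypothetical list
  MARKER = "Heinz"  # text you expect on a page you still control

  for d in domains:
      try:
          body = requests.get(f"http://{d}", timeout=10).text
          status = "ok" if MARKER in body else "SUSPECT: marker missing"
      except requests.RequestException as exc:
          status = f"unreachable ({exc.__class__.__name__})"
      print(d, "->", status)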

The lesson here is that URLs (as identifiers) are porn vulnerable.

June 21, 2015

People Don’t Want Something Truly New,…

Filed under: Marketing,Topic Maps — Patrick Durusau @ 8:05 pm

People Don’t Want Something Truly New, They Want the Familiar Done Differently by Nir Eyal.

From the post:

I’ll admit, the bento box is an unlikely place to learn an important business lesson. But consider the California Roll — understanding the impact of this icon of Japanese dining can make all the difference between the success or failure of your product.

If you’ve ever felt the frustration of customers not biting, then you can sympathize with Japanese restaurant owners in America during the 1970s. Sushi consumption was all but non-existent. By all accounts, Americans were scared of the stuff. Eating raw fish was an aberration and to most, tofu and seaweed were punch lines, not food.

Then came the California Roll. While the origin of the famous maki is still contested, its impact is undeniable. The California Roll was made in the USA by combining familiar ingredients in a new way. Rice, avocado, cucumber, sesame seeds, and crab meat — the only ingredient unfamiliar to the average American palate was the barely visible sliver of nori seaweed holding it all together.

It is the success story of introducing Americans to the consumption of sushi, from almost no consumption at all to a $2.25 billion annual market.

How would you answer the question:

What’s the “California Roll” for topic maps?

June 20, 2015

Real-time Trainable Neural Network (on a chip)

Filed under: Machine Learning,Neural Networks — Patrick Durusau @ 8:07 pm

Real-time Trainable Neural Network

From the webpage:

The architecture of the CogniMem™ chip makes it the most practical implementation of a Radial Basis Function classifier with autonomous adaptive learning capabilities.

The Radial Basis Function is a classifier capable of representing complex nonlinear decision spaces using hyperspheres with adaptable radii. It is widely used for face recognition and other image recognition applications, function approximation, time series prediction, novelty detection.

[image: RBF-decision-space]

The CogniMem Advantage: Upon receipt of an input vector, all the cognitive memories holding a previously learned vector calculate their distance to the input vector and evaluate immediately if it falls in their similarity domain. If so, the “firing” cells are ready to output their response in an orderly fashion giving the way to the cell which holds the smallest distance. If no cell fires and a teaching command is issued, the next available cell automatically learns the vector. Also, if a teaching command conflicts with the category of a firing cell, the latter automatically corrects itself by reducing its influence field.

This autonomous learning and recognition behavior pertains to the unique CogniMem parallel architecture and a patented Search and Sort process.
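
The description above amounts to a nearest-prototype classifier with adaptable radii. Here is a toy Python sketch of that behavior; it is an illustration of the idea, not CogniMem's actual algorithm or API.

  import math

  def dist(a, b):
      return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

  class RBFMemory:
      # Prototype cells with influence radii: learn on miss,
      # shrink a cell's influence field on conflict.
      def __init__(self, initial_radius=10.0):
          self.cells = []  # each cell: [vector, radius, category]
          self.initial_radius = initial_radius

      def _firing(self, v):
          hits = [(dist(v, c[0]), c) for c in self.cells
                  if dist(v, c[0]) <= c[1]]
          return min(hits, key=lambda h: h[0]) if hits else None

      def classify(self, v):
          hit = self._firing(v)
          return hit[1][2] if hit else None

      def teach(self, v, category):
          hit = self._firing(v)
          if hit is None:  # no cell fired: commit a new cell
              self.cells.append([v, self.initial_radius, category])
          elif hit[1][2] != category:  # conflict: shrink, then commit
              hit[1][1] = max(hit[0] - 1e-9, 0.0)
              self.cells.append([v, self.initial_radius, category])

  mem = RBFMemory()
  mem.teach((0, 0), "A")
  mem.teach((1, 1), "B")           # fires cell A, conflicts, shrinks it
  print(mem.classify((0.9, 0.9)))  # -> B: nearest firing cell wins

The chip's advantage is doing the distance calculations in all cells in parallel; the sketch does them sequentially.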

The website has a wealth of information and modules start at $175 per unit.

I first saw this in a tweet by Kirk Borne.

Introducing the Witness Media Lab

Filed under: Journalism,News,Reporting — Patrick Durusau @ 4:34 pm

Introducing the Witness Media Lab

From the webpage:

We are pleased to announce our newest initiative: the WITNESS Media Lab. The project is dedicated to unleashing the potential of eyewitness video as a powerful tool to report, monitor, and advocate for human rights.

In collaboration with the News Lab at Google, and continuing the work of its predecessor, the Human Rights Channel on YouTube, the WITNESS Media Lab will address the challenges of finding, verifying, and contextualizing eyewitness videos for the purpose of creating lasting change.

The WITNESS Media Lab will focus on one issue for a few months at a time, using new tools, strategies and platforms for research, verification and contextualization of citizen video. We will share analysis and resources publicly online via case studies, blog articles, multimedia presentations and through in-person convenings with peers. The first project will look at several cases of eyewitness video of police violence in the United States.

“We’re incredibly encouraged by the growing capacity of people everywhere to capture video of human rights abuses in their communities. We’re also aware of the critical need for skills to harness the potential of those videos, in order to turn them into tools for justice,” said Madeleine Bair, Program Manager for the WITNESS Media Lab.

Drawing on more than two decades of supporting people to use video for human rights advocacy, the WITNESS Media Lab will leverage the organization’s in-house expertise as well as that of our extensive peer networks in the fields of advocacy, technology, and journalism. Together with them, the WITNESS Media Lab will seek to develop solutions to ensure that footage taken by average citizens can impact some of the world’s most pressing and persistent injustices.

“Videos depicting human rights abuses on YouTube can be an incredibly powerful tool to expose injustice, but context is critical to ensuring they have maximum impact,” said Steve Grove of the News Lab at Google. “We’re thrilled that WITNESS is bringing their deep expertise to this space in the WITNESS Media Lab, and we are honored to be partnering with them.”

For more details visit us at the WITNESS Media Lab website and follow us @WITNESS_Lab. And YouTube published an announcement today detailing their support of the WITNESS Media Lab and two other projects focused on the power of citizen video.

Press inquiries and requests for interviews should be directed to Matisse Bustos-Hawkes at WITNESS via our press kit or on Twitter @matissebh.

If “What Witness Learned in Our Three Years Curating Human Rights Videos on Youtube” is indicative of the work to expect from the Witness Media Lab, the Lab will be a welcome resource.

“What Witness Learned…” is a quick introduction to some principal issues surrounding videos, such as “Verification is a Spectrum,” “Context is Key,” “New Platforms for Sharing Videos,” and “Persistent Challenges (language, vicarious trauma, impact).”

Not in-depth coverage of any of those issues, but enough to give would-be creators of eyewitness videos, or those seeking to distribute or use them, pause for reflection.

Don’t miss the Witness Resources page! A treasure trove of videos, training materials, an archive of 4,000 hours of videos from human rights defenders and other materials.

Remember that the authenticity of the image of Phan Thi Kim Phuc was questioned by then-President Nixon, and the New York Times published it after cropping out the media reporter on the right. To avoid showing reporters ignoring young girls burned by napalm?

[image: 250px-TrangBang]

Every image tells a story. What story will yours tell?

Creating-maps-in-R

Filed under: Mapping,Maps,R — Patrick Durusau @ 2:37 pm

Creating-maps-in-R by Robin Lovelace.

From the webpage:

Introductory tutorial on graphical display of geographical information in R, to contribute to teaching material. For the context of this tutorial and a video introduction, please see here: http://robinlovelace.net/r/2014/01/30/spatial-data-with-R-tutorial.html

All of the information needed to run the tutorial is contained in a single pdf document that is kept updated: see github.com/Robinlovelace/Creating-maps-in-R/raw/master/intro-spatial-rl.pdf.

By the end of the tutorial you should have the confidence and skills needed to convert a diverse range of geographical and non-geographical datasets into meaningful analyses and visualisations. Using data and code provided in this repository all of the results are reproducible, culminating in publication-quality maps such as the faceted map of London’s population below:

Quite a treat in thirty (30) pages! You will have R and some basic spatial data packages installed and be well on your way to creating maps in R. From a topic map perspective, the joining of attributes to polygons is quite similar to adding properties to topics. Assuming you want to treat each polygon as a subject to be represented by a topic.
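
If the analogy is hard to see, here is the same join move sketched in Python with pandas rather than R; the area codes and figures are illustrative.

  import pandas as pd

  # Shapes keyed by an area code, attributes arriving as a separate table.
  polygons = pd.DataFrame({"code": ["E09000001", "E09000002"],
                           "geometry": ["<polygon 1>", "<polygon 2>"]})
  attributes = pd.DataFrame({"code": ["E09000001", "E09000002"],
                             "population": [7_375, 212_906]})

  # Attach attributes to shapes by a shared key, exactly the move the
  # tutorial performs in R and a topic map performs with properties.
  print(polygons.merge(attributes, on="code"))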

Enjoy!

PS:

You will also enjoy:

Cheshire, J. & Lovelace, R. (2014). Spatial data visualisation with R. In Geocomputation, a Practical Primer. In Press with Sage. Preprint available online

and other publications by Robin.

Saudi Cables (or file dump?)

Filed under: Government,Government Data — Patrick Durusau @ 1:41 pm

WikiLeaks publishes the Saudi Cables

From the post:

Today, Friday 19th June at 1pm GMT, WikiLeaks began publishing The Saudi Cables: more than half a million cables and other documents from the Saudi Foreign Ministry that contain secret communications from various Saudi Embassies around the world. The publication includes “Top Secret” reports from other Saudi State institutions, including the Ministry of Interior and the Kingdom’s General Intelligence Services. The massive cache of data also contains a large number of email communications between the Ministry of Foreign Affairs and foreign entities. The Saudi Cables are being published in tranches of tens of thousands of documents at a time over the coming weeks. Today WikiLeaks is releasing around 70,000 documents from the trove as the first tranche.

Julian Assange, WikiLeaks publisher, said: “The Saudi Cables lift the lid on an increasingly erratic and secretive dictatorship that has not only celebrated its 100th beheading this year, but which has also become a menace to its neighbours and itself.

The Kingdom of Saudi Arabia is a hereditary dictatorship bordering the Persian Gulf. Despite the Kingdom’s infamous human rights record, Saudi Arabia remains a top-tier ally of the United States and the United Kingdom in the Middle East, largely owing to its globally unrivalled oil reserves. The Kingdom frequently tops the list of oil-producing countries, which has given the Kingdom disproportionate influence in international affairs. Each year it pushes billions of petro-dollars into the pockets of UK banks and US arms companies. Last year it became the largest arms importer in the world, eclipsing China, India and the combined countries of Western Europe. The Kingdom has since the 1960s played a major role in the Organization of Petroleum Exporting Countries (OPEC) and the Cooperation Council for the Arab States of the Gulf (GCC) and dominates the global Islamic charity market.

For 40 years the Kingdom’s Ministry of Foreign Affairs was headed by one man: Saud al Faisal bin Abdulaziz, a member of the Saudi royal family, and the world’s longest-serving foreign minister. The end of Saud al Faisal’s tenure, which began in 1975, coincided with the royal succession upon the death of King Abdullah in January 2015. Saud al Faisal’s tenure over the Ministry covered its handling of key events and issues in the foreign relations of Saudi Arabia, from the fall of the Shah and the second Oil Crisis to the September 11 attacks and its ongoing proxy war against Iran. The Saudi Cables provide key insights into the Kingdom’s operations and how it has managed its alliances and consolidated its position as a regional Middle East superpower, including through bribing and co-opting key individuals and institutions. The cables also illustrate the highly centralised bureaucratic structure of the Kingdom, where even the most minute issues are addressed by the most senior officials.

Since late March 2015 the Kingdom of Saudi Arabia has been involved in a war in neighbouring Yemen. The Saudi Foreign Ministry in May 2015 admitted to a breach of its computer networks. Responsibility for the breach was attributed to a group calling itself the Yemeni Cyber Army. The group subsequently released a number of valuable “sample” document sets from the breach on file-sharing sites, which then fell under censorship attacks. The full WikiLeaks trove comprises thousands of times the number of documents and includes hundreds of thousands of pages of scanned images of Arabic text. In a major journalistic research effort, WikiLeaks has extracted the text from these images and placed them into our searchable database. The trove also includes tens of thousands of text files and spreadsheets as well as email messages, which have been made searchable through the WikiLeaks search engine.

By coincidence, the Saudi Cables release also marks two other events. Today marks three years since WikiLeaks founder Julian Assange entered the Ecuadorian Embassy in London seeking asylum from US persecution, having been held for almost five years without charge in the United Kingdom. Also today Google revealed that it had been forced to hand over more data to the US government in order to assist the prosecution of WikiLeaks staff under US espionage charges arising from our publication of US diplomatic cables.

A searcher with good Arabic skills is going to be necessary to take full advantage of this release.

I am unsure about the title “Saudi Cables” because some of the documents I retrieved searching for “Bush” were public interviews and statements. Hardly the burning secrets that are hinted at by “cables.” See for example, Exclusive Interview with Daily Telegraph 27-2-2005.doc or Interview with Wall Street Joutnal 26-4-2004.doc.

Putting “public document” in the words to exclude filter doesn’t eliminate the published interviews.

This has the potential, particularly out of more than 500,000 documents, to have some interesting tidbits. The first step would be to winnow out all published and/or public statements, in English and/or Arabic. Not discarded but excluded from search results until you need to make connections between secret statements and public ones.
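
A crude first pass at that winnowing, assuming the trove sits in a local directory and that published material announces itself in the file name (as the two interview files above do):

  import os
  import re

  PUBLIC = re.compile(r"interview|statement|press", re.IGNORECASE)

  secret, public = [], []
  for name in os.listdir("saudi-cables"):  # hypothetical directory name
      (public if PUBLIC.search(name) else secret).append(name)

  print(len(public), "likely public;", len(secret), "left to examine")

Both the directory name and the keyword list are assumptions; the point is to set public material aside, not discard it.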

A second step would be to identify the author/sender/receiver of each document so they can be matched to known individuals and events.

This is a great opportunity to practice your Arabic NLP processing skills. Or Arabic for that matter.

Hopefully Wikileaks will not decide to act as public censor with regard to these documents.

Governments do enough withholding of the truth. They don’t need the assistance of Wikileaks.

June 19, 2015

China-based Contractors Working For The OPM

Filed under: Cybersecurity,Security — Patrick Durusau @ 2:34 pm

Some of the contractors that have helped OPM with managing internal data have had security issues of their own—including potentially giving foreign governments direct access to data long before the recent reported breaches. A consultant who did some work with a company contracted by OPM to manage personnel records for a number of agencies told Ars that he found the Unix systems administrator for the project “was in Argentina and his co-worker was physically located in the [People’s Republic of China]. Both had direct access to every row of data in every database: they were root. Another team that worked with these databases had at its head two team members with PRC passports. I know that because I challenged them personally and revoked their privileges. From my perspective, OPM compromised this information more than three years ago and my take on the current breach is ‘so what’s new?'”
(emphasis added)

From: Encryption “would not have helped” at OPM, says DHS official by Sean Gallagher.

China-based contractors? What about that would raise security concerns in your mind?

The president should spend the remainder of his term in office:

  1. Dismantling the current IT contracting system
  2. Forcing all government offices to inventory their IT systems

Either one of those steps without the other will perpetuate the deeply insecure federal IT systems of today.

Inceptionism: Going Deeper into Neural Networks

Filed under: Neural Networks — Patrick Durusau @ 1:02 pm

Inceptionism: Going Deeper into Neural Networks by Alexander Mordvintsev, Christopher Olah, and Mike Tyka.

From the post:

Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t. So let’s take a look at some simple techniques for peeking inside these networks.

We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.

One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer maybe looks for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees.

Have you ever looked under the hood of a neural network? If not, you are in for a real treat! As a bonus, this research may help you understand why some models work and others don’t.
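
If you want to peek for yourself, the standard trick is to read activations out of an intermediate layer. A minimal sketch, assuming Keras and using VGG16 and a layer name as stand-ins for whatever network you have on hand:

  import numpy as np
  from tensorflow.keras.applications import VGG16
  from tensorflow.keras.models import Model

  base = VGG16(weights="imagenet")
  # A second model that stops at a mid-level layer instead of the output.
  peek = Model(inputs=base.input,
               outputs=base.get_layer("block3_conv3").output)

  image = np.random.rand(1, 224, 224, 3).astype("float32")  # placeholder
  activations = peek.predict(image)
  print(activations.shape)  # mid-level feature maps, not a class label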


A gallery of the same title shows images as seen by neural networks before they reach an outcome.

I don’t think anyone has captured an interruption of image processing in the human brain. With a neural network, that is a reality.

Enjoy!

Addicted: An Industry Matures / Hooked: How to Build Habit-Forming Products

Filed under: Marketing,Topic Maps — Patrick Durusau @ 12:31 pm

Addicted: An Industry Matures by Ted McCarthy.

From the post:

Perhaps nothing better defines our current age than to say it is one of rapid technological change. Technological improvements will continue to provide more to individuals and society, but also to demand more: demand (and leak) more of our data, more time, more attention and more anxieties. While an increasingly vocal minority have begun to rail against certain of these demands, through calls to pull our heads away from our screens and for corporations and governments to stop mining user data, a great many in the tech industry see no reason to change course. User data and time are requisite in the new business ecosystem of the Internet; they are the fuel that feeds the furnace.

Among those advocating for more fuel is Nir Eyal and his recent work, Hooked: How to Build Habit-Forming Products. The book — and its accompanying talk — has attracted a great deal of attention here in the Bay Area, and it’s been overwhelmingly positive. Eyal outlines steps that readers — primarily technology designers and product managers — can follow to make ‘habit-forming products.’ Follow his prescribed steps, and rampant entrepreneurial success may soon be yours.

Since first seeing Eyal speak at Yelp’s San Francisco headquarters last fall, I’ve heard three different clients in as many industries refer to his ideas as “amazing,” and some have hosted reading groups to discuss them. His book has launched to Amazon’s #1 bestseller spot in Product Management, and hovers near the same in Industrial & Product Design and Applied Psychology. It is poised to crack into the top 1000 sellers across the entire site, and reviewers have offered zealous praise: Eric Ries, a Very Important tech Person indeed, has declared the book “A must read for everyone who cares about driving customer engagement.”

And yet, no one offering these reviews has pointed out what should be obvious: that Eyal’s model for “hooking” users is nearly identical to that used by casinos to “hook” their own; that such a model engenders behavioral addictions in users that can be incredibly difficult to overcome. Casinos may take our money, but these products can devour our time; and while we’re all very aware of what the casino owners are up to, technology product development thus far has managed to maintain an air of innocence.

While it may be tempting to dismiss a book seemingly written only for, and read only by, a small niche of $12 cold pressed juice-drinking, hoodie and flip flop-wearing techies out on the west coast, one should consider the ways in which those techies are increasingly creating the worlds we all inhabit. Technology products are increasingly determining the news we read, the letters we send, the lovers we meet and the food we eat — and their designers are reading this book, and taking note. I should know: I’m one of them.

I start with Ted McCarthy’s introduction because it is how I found out about Hooked: How to Build Habit-Forming Products by Nir Eyal. It certainly sounded like a book that I must read!

I was hoping to find reviews sans moral hand-wringing but even Hooked: How To Make Habit-Forming Products, And When To Stop Flapping by Wing Kosner gets in on the moral concern act:

In the sixth chapter of the book, Eyal discusses these manipulations, but I think he skirts around the morality issues as well as the economics that make companies overlook them. The Candy Crush Saga game is a good example of how his formulation fails to capture all the moral nuance of the problem. According to his Manipulation Matrix, King, the maker of Candy Crush Saga, is an Entertainer because although their product does not (materially) improve the user’s life, the makers of the game would happily use it themselves. So, really, how bad can it be?

Consider this: Candy Crush is a very habit-forming time-waster for the majority of its users, but a soul-destroying addiction for a distinct minority (perhaps larger, however, than the 1% Eyal refers to as a rule of thumb for user addiction.) The makers of the game may be immune to the game’s addictive potential, so their use of it doesn’t necessarily constitute a guarantee of innocuousness. But here’s the economic aspect: because consumers are unwilling to pay for casual games, the makers of these games must construct manipulative habits that make players seek rewards that are most easily attained through in-app purchases. For “normal” players, these payments may just be the way that they pay to play the game instead of a flat rate up-front or a subscription, and there is nothing morally wrong with getting paid for your product (obviously!) But for “addicted” players these payments may be completely out of scale with any estimate of the value of a casual game experience. King reportedly makes almost $1 million A DAY from Candy Crush, all from in app purchases. My guess is that there is a long tail going on with a relative few players being responsible for a disproportionate share of that revenue.

This is in Forbes.

I don’t read Forbes for moral advice. 😉 I don’t consult technologists either. For moral advice, consult your local rabbi, priest or imam.

Here is an annotated introduction to Hooked, if you want to get a taste of what awaits before ordering the book. If you visit the book’s website, you will be offered a free Hooked workbook. And you can follow Nir Eyal on Twitter: @nireyal. Whatever else can be said about Nir Eyal, he is a persistent marketeer!

Before you become overly concerned about the moral impact of Hooked, recall that legions of marketeers have labored for generations to produce truly addictive products, some with “added ingredients” and others, more recently, not. Creating addictive products isn’t as easy as “read the book” and the rest of us will start wearing bras on our heads. (Apologies to Scott Adams and especially to Dogbert.)

Implying that you can make all of us into addictive product mavens, however, is a marketing hook that few of us can resist.

Enjoy!

June 18, 2015

Thematic Cartography Guide

Filed under: Cartography,Mapping,Maps — Patrick Durusau @ 8:29 pm

Thematic Cartography Guide

From the webpage:

Welcome! In this short guide we share some insights and tips for making thematic maps. Our goal is to cover the important concepts in cartography and flag the important decision points in the map-making process. As with many activities in life, there isn’t always a single best answer in cartography, and in those cases we’ve tried to outline some of the pros and cons to different solutions.

This is by no means a replacement for a full textbook on cartography; rather it is a quick reference guide for those moments when you’re stumped, unsure of what to do next, or unfamiliar with the terminology. While the recommendations on these pages are short and not loaded with academic references, please appreciate that they represent a thoughtful synthesis of decades of map-making research.

This guide was written by Axis Maps, adapted from documentation written for indiemapper in 2010. However, the content here is about general cartography principles, not software-specific tips. To see the material in its original context, visit indiemapper and its help pages.

If that doesn’t sound exciting, perhaps this will:

Thematic maps are meant not simply to show locations, but rather to show attributes or statistics about places, spatial patterns of those attributes, and relationships between places. For example, while a reference map might show the locations of cities, a thematic map might also represent the population of those cities. It’s the difference between mapping places and mapping data. This site is about thematic maps, describing some of the different types and basic principles.

Hmmm, data about places? Relationships? That’s starting to sound suspiciously like a topic map expressed in a different vocabulary.

The same principles apply, in addition to places on a geographic grid, you can have subjects that exist only on your own intellectual grid, arranged in relationships as you see fit.

Over the years you have no doubt seen a number of offenses against the art of presentation in the name of topic maps. You have the power to break from that tradition. Seeing what works in other mapping domains is one place to start.

Where else would you look for fresh ideas and themes?

CNIL Anoints Itself Internet Censor

Filed under: Censorship,Government,WWW — Patrick Durusau @ 8:07 pm

France seeks to extend Google ‘right to be forgotten’.

From the post:

Google has 15 days to comply with a request from France’s data watchdog to extend the “right to be forgotten” to all its search engines.

Last year a European Court of Justice ruling let people ask Google to delist some information about them.

However, the data deleting system only strips information from searches done via Google’s European sites.

French data regulator CNIL said Google could face sanctions if it did not comply within the time limit.

In response, Google said in a statement: “We’ve been working hard to strike the right balance in implementing the European Court’s ruling, co-operating closely with data protection authorities.

“The ruling focused on services directed to European users, and that’s the approach we are taking in complying with it.”

(emphasis in the original)

The first news I saw of this latest round of censorship from the EU was dated June 12, 2015. Assuming that started the fifteen (15) days running, Google has until the 27th of June, 2015, to comply.

Plenty of time to reach an agreement with the other major search providers to go dark in the EU on the 27th of June, 2015.

By working with the EU at all on the fantasy right-to-be-forgotten, Google has encouraged a step towards Balkanization of the Internet, where what resources you may or may not see, will depend upon your physical location.

Not only does that increase the overhead for providers of Internet content, but it also robs the Internet of its most powerful feature, the free exchange of ideas, education and resources.

Eventually, even China will realize that the minor social eddies caused by use of the Internet pale when compared to the economic activity spurred by it. People do blame/credit the Internet with social power, but where it has “worked,” the people who lost power should have been removed long ago by other means.

Internet advocates are quick to take credit for things the Internet has not done, much as Unitarians of today want to claim Thomas Jefferson as a Unitarian. I would not credit the view of advocates as being a useful measure of the Internet’s social influence.

If that were the case, then why does sexism, rape, child porn, violence, racism, discrimination, etc. still exist? Hmmm, maybe the Internet isn’t as powerful as people think? Maybe the Internet reflects the same social relationships and short falls that exist off of the Internet? Could be.

Google needs to agree with other search providers to go dark for the EU for some specified time period. EU residents can see how the Internet looks with effective search tools. Perhaps they will communicate their wishes with regard to search engines to their duly elected representatives.

PS: Has anyone hacked CNIL lately? Just curious.

Otherworldly CAD Software…

Filed under: Graphics,Visualization — Patrick Durusau @ 7:39 pm

Otherworldly CAD Software Hails From A Parallel Universe by Joshua Vasquez.

From the post:

The world of free 3D-modeling software tends to be grim when compared to the expensive professional packages. Furthermore, 3D CAD modeling software suggestions seem to throw an uproar when new users seek open-source or inexpensive alternatives. Taking a step apart from the rest, [Matt] has developed his own open-source CAD package with a spin that inverts the typical way we do CAD.

Antimony is a fresh perspective on 3D modeling. In contrast to Blender’s “free-form sculpting” and Solidworks’ sequential extrudes and cuts, Antimony invites you to break down your model into a network of both primitive geometry and operations that interact with that geometry.

Functionally, Antimony represents objects as a graphical collection of nodes that encode both primitives and operations. Want a cylinder? Start with a circle node and pipe it into an extrude node. Need to cut out some part geometry? Try defining it with one or more primitives, and then perform a boolean intersection operation. Users can even write their own nodes with custom scripts written in Python. Overall, Antimony boasts the power of parametric design similar to OpenSCAD while it also boosts readability with a graphical, rather than text-based, part description. Finally, because part geometry is essentially stored as a series of instructions, the process of modeling the part does not limit the resolution of the output .STL mesh. (Think: vector-based images, versus pixel-based images).

Current versions of the software are available for both Mac and Linux, and the entire project is open-source and available on the Githubs. (For the shrewd-eyed software developers, most of the project is written with Python that interacts with lower-level routines handled in C++ and exposed through Boost.Python.) Take a video tour of an Antimony workflow with [Matt] after the break. All-in-all, despite that the software is still in its alpha stages, it’s highly functional and (for the block-diagram fans) intuitive. We’re thrilled to put our programming hats on and try CAD from, as [Matt] coins it “a parallel universe.”

For all you graph lovers, parts are linked as a graph.
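
As a toy sketch of the node-graph idea, not Antimony's actual API, here is the circle-into-extrude example in Python:

  class Node:
      # A primitive or operation plus the nodes feeding into it.
      def __init__(self, op, *inputs, **params):
          self.op, self.inputs, self.params = op, inputs, params

      def describe(self, depth=0):
          print("  " * depth + f"{self.op} {self.params}")
          for child in self.inputs:
              child.describe(depth + 1)

  circle = Node("circle", r=5.0)
  cylinder = Node("extrude", circle, height=10.0)
  hole = Node("cylinder", r=1.0, height=10.0)
  part = Node("difference", cylinder, hole)  # boolean cut

  part.describe()  # prints the part as a graph of operations

Because the part is stored as a graph of instructions rather than a mesh, resolution is chosen at export time, as the post notes.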

If you are looking for a project, try modeling historical caltrops. They remain as effective as they were in the time of Alexander the Great.

stationaRy (R package)

Filed under: R,Weather Data — Patrick Durusau @ 5:20 pm

stationaRy by Richard Iannone.

From the webpage:

Get hourly meteorological data from one of thousands of global stations.

Want some tools to acquire and process meteorological and air quality monitoring station data? Well, you’ve come to the right repo. So far, because this is merely the beginning, there’s only a few functions that get you data. These are:

  • get_ncdc_station_info
  • select_ncdc_station
  • get_ncdc_station_data

They will help you get the hourly met data you need from a met station located somewhere on Earth.

I’m old school about the weather. I go outside to check on it. 😉

But my beloved is interested in earthquakes, volcanoes, hurricanes, weather, etc., so I track resources for those.

Some weather conditions lend themselves more to some activities than others. As Hitler discovered in the Russian winter of 1941-42, weather can help or hinder your plans, whatever those may be.

You may like the Farmer’s Almanac, but it isn’t a good source for strategic weather data. Try stationaRy.
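
If R isn’t in your toolkit, the NOAA Integrated Surface Database that stationaRy draws from can be pulled by hand. A minimal Python sketch, assuming the public ISD-Lite fixed-width layout; the station ID, year, and URL pattern below are illustrative:

  import gzip
  import io
  import urllib.request

  import pandas as pd

  # ISD-Lite publishes one gzipped fixed-width file per station per year.
  # The station ID (USAF-WBAN), year, and URL pattern here are illustrative.
  URL = ("ftp://ftp.ncdc.noaa.gov/pub/data/noaa/isd-lite/"
         "2014/722860-23119-2014.gz")

  COLS = ["year", "month", "day", "hour", "air_temp", "dew_point",
          "pressure", "wind_dir", "wind_speed", "sky_cover",
          "precip_1hr", "precip_6hr"]

  raw = urllib.request.urlopen(URL).read()
  df = pd.read_csv(io.BytesIO(gzip.decompress(raw)),
                   sep=r"\s+", names=COLS, na_values=[-9999])

  # Several ISD-Lite fields are stored scaled by a factor of 10.
  for col in ("air_temp", "dew_point", "pressure", "wind_speed"):
      df[col] = df[col] / 10.0

  print(df.head())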

If you know of any unclassified military strategy guides that cover collection and analysis of weather data, give me a shout.

I’m a bird watcher, I’m a bird watcher, here comes one now…

Filed under: Image Recognition,Image Understanding,Machine Learning — Patrick Durusau @ 4:51 pm

New website can identify birds using photos

From the post:

In a breakthrough for computer vision and for bird watching, researchers and bird enthusiasts have enabled computers to achieve a task that stumps most humans—identifying hundreds of bird species pictured in photos.

The bird photo identifier, developed by the Visipedia research project in collaboration with the Cornell Lab of Ornithology, is available for free at: AllAboutBirds.org/photoID.

Results will be presented by researchers from Cornell Tech and the California Institute of Technology at the Computer Vision and Pattern Recognition (CVPR) conference in Boston on June 8, 2015.

Called Merlin Bird Photo ID, the identifier is capable of recognizing 400 of the most commonly encountered birds in the United States and Canada.

“It gets the bird right in the top three results about 90% of the time, and it’s designed to keep improving the more people use it,” said Jessie Barry at the Cornell Lab of Ornithology. “That’s truly amazing, considering that the computer vision community started working on the challenge of bird identification only a few years ago.”
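
That “right in the top three results about 90% of the time” claim is just top-3 accuracy, easy to compute for any ranked classifier. A generic sketch in Python (not the Merlin code):

  def top_k_accuracy(predictions, labels, k=3):
      """Fraction of samples whose true label is in the top-k predictions.

      predictions: list of ranked guesses per photo, best guess first.
      labels: list of true species names.
      """
      hits = sum(label in ranked[:k]
                 for ranked, label in zip(predictions, labels))
      return hits / len(labels)

  preds = [["cardinal", "robin", "finch"], ["sparrow", "wren", "jay"]]
  truth = ["robin", "jay"]
  print(top_k_accuracy(preds, truth))  # 1.0: both true labels in the top 3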

The perfect website for checking bird photos taken on summer vacation, and an impressive feat of computer vision.

The more the service is used, the better it gets. Upload your vacation bird pics today!

Who Is To Blame For The OPM Hack? [Turns out, it’s us.]

Filed under: Cybersecurity,Security — Patrick Durusau @ 3:00 pm

Enough time has passed since the known OPM hacks for some of the commentary to become less breathless and morally outraged. As I pointed out in The New ‘China Syndrome’ – Saving Face By Blaming China (June 6, 2015), the hack of OPM could have been by anybody, given the state of its security.

Kristen Eichensehr points out in The OPM Hack and the New DOD Law of War Manual that even assuming that China was behind the hack, this was just day-to-day espionage, which is not prohibited by international law.

I have been meaning to post about the new DOD Law of War Manual just to call it to your attention, so consider that done. Bear in mind that the laws of war are drafted to favor current “conventional” tactics. Another example of law having its thumb on the scale of justice.

[Image: thumb on the scale]

Benjamin Wittes quotes Michael Hayden (former NSA and CIA director) as saying:

The episode, he says, “is not shame on China. This is shame on us for not protecting that kind of information.” (Michael Hayden: “Those Records are a Legitimate Foreign Intelligence Target”)

How much “shame on us?”

Ken Dilanian reports in Fed Personnel Agency Admits History of Security Problems:

An Office of Personnel Management investigative official said June 16, 2015, the agency entrusted with millions of personnel records has a history of failing to meet basic computer network security requirements. Michael Esser, assistant inspector general for audit, said in testimony prepared for delivery that, for years, many of the people running the agency’s information technology had no IT background. He also said the agency had not disciplined any employees for the agency’s failure to pass numerous cyber security audits.

I suspect it will take months of testimony to drag out the sorry tale of cyberinsecurity at OPM. But in a calmer atmosphere, it is clear that all fault for the breach lies with the Office of Personnel Management.

Some of the remaining questions are:

  1. How to fix a fundamentally broken IT system (that hasn’t been fully accounted for)?
  2. How to hold staff accountable for failures to maintain cybersecurity?

I can give you a hint on the first question: Don’t use the traditional prime, sub-prime, etc. contracting infrastructure. Hire contractors to write the requirements (contractors who won’t be bidding on fulfillment), build in milestones, and then ask several of the larger IT services companies to bid.

The second question is easier. Fire everyone with managerial responsibilities and keep the other staff. Blacklist the fired staff from any future government service or employment with a government contractor. Then freeze all their benefits until their liability for damages from the data breaches can be assessed.

Think of it as a “teaching moment” that will encourage greater diligence when it comes to cybersecurity in government offices.

PS: In terms of a likely timetable for improvement of cybersecurity at OPM and other federal offices, see my: Cybersecurity Sprint or Multi-Year Egg Roll?

In a Galaxy, Near, Near to You…

Filed under: Cybersecurity,Security — Patrick Durusau @ 2:11 pm

Samsung devices, including Galaxy S6, vulnerable to remote code execution by Ashley Carman.

From the post:

More than 600 million Samsung mobile device owners are vulnerable to cyberattacks that could allow a perpetrator to remotely execute code as a privileged system user.

The vulnerability exists in Samsung’s pre-installed SwiftKey keyboard. The keyboard, which cannot be uninstalled or disabled, issues update check requests every couple hours or so, explained NowSecure CEO Andrew Hoog during a Wednesday interview with SCMagazine.com. NowSecure discovered the bug, CVE-2015-2865, in 2014, and notes that to execute a successful attack, a person must be capable of modifying upstream traffic.

If a user is logged into an insecure WiFi network, for example, a successful man-in-the-middle (MitM) attack could allow a cybercriminal to monitor the network traffic for these requests. Once one is spotted, the attacker can respond with a malicious payload. From there, the attacker could tamper with the compromised device. Sample exploitations could include accessing sensors and resources, such as the device’s camera; installing malicious apps without the user’s knowledge; listening in on calls; or accessing personal data, such as pictures and text messages.
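
The root flaw is an update channel that trusts whatever the network returns. The standard mitigation is signing update payloads and verifying the signature on the device before installing anything. A minimal sketch using Ed25519 from the Python cryptography package (illustrative only; not Samsung’s or SwiftKey’s actual mechanism, and the payload name is hypothetical):

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # Vendor side: sign the update payload with the vendor's private key.
  private_key = Ed25519PrivateKey.generate()
  payload = b"language-pack-v2.bin contents"  # hypothetical update blob
  signature = private_key.sign(payload)

  # Device side: the matching public key ships baked into the client, so a
  # man-in-the-middle cannot substitute a payload that verifies.
  public_key = private_key.public_key()

  def install_update(blob, sig):
      try:
          public_key.verify(sig, blob)  # raises InvalidSignature if tampered
      except InvalidSignature:
          return False                  # reject; do not install
      return True                       # safe to write blob to disk

  assert install_update(payload, signature)
  assert not install_update(b"attacker payload", signature)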

See Ashley’s post for the rest of the details, including a suggestion that an attack focused on a single device is unlikely.

That’s like arguing purse snatching isn’t a problem because banks have more money. In fact, we know that purse snatching happens despite the existence of banks: smaller return, but less risk.

A 600 million device target area sounds big enough to attract malicious interest.

Android Security Rewards Program [Go Navy!]

Filed under: Cybersecurity,Security — Patrick Durusau @ 1:52 pm

Android Security Rewards Program

Rewards as of June 2015:

Reward amounts

The reward amount depends on the severity of the vulnerability and the quality of the report. A bug report that includes reproduction code will get more than a simple report pointing out vulnerable code. A well-written CTS test and patch will result in an even higher reward.

Our base reward amounts for vulnerability severity are typically:

  • Critical – $2,000
  • High – $1,000
  • Moderate – $500

We’ll reward up to 1.5x the base amount if the bug report includes standalone reproduction code or a standalone test case (e.g., a malformed file). If the bug report includes a patch that fixes the issue or a CTS test that detects the issue, we’ll apply up to a 2x reward modifier. If there is both a CTS test and a patch, there’s a potential 4x reward modifier. Keep in mind that submitted CTS tests and patches must apply cleanly to AOSP’s master branch and comply with Android’s Coding Style Guidelines to be eligible for these additional reward amounts.

This table shows an overview of the reward schedule for typical rewards:

Severity    Bug       Test case   CTS / patch   CTS + Patch
Critical    $2,000    $3,000      $4,000        $8,000
High        $1,000    $1,500      $2,000        $4,000
Moderate    $500      $750        $1,000        $2,000
Low         $0        $333        $500          $1,000

Besides these reward levels, we offer additional rewards for functional exploits:

  • An exploit or chain of exploits leading to kernel compromise from an installed app or with physical access to the device will get up to an additional $10,000. Going through a remote or proximal attack vector can get up to an additional $20,000.
  • An exploit or chain of exploits leading to TEE (TrustZone) or Verified Boot compromise from an installed app or with physical access to the device will get up to an additional $20,000. Going through a remote or proximal attack vector can get up to an additional $30,000.

The final amount is always chosen at the discretion of the reward panel. In particular, we may decide to pay higher rewards for unusually clever or severe vulnerabilities; decide that a single report actually constitutes multiple bugs; or that multiple reports are so closely related that they only warrant a single reward.

We understand that some of you are not interested in money. We offer the option to donate your reward to an established charity. If you do so, we will double your donation – subject to our discretion. Any rewards that are unclaimed after 12 months will be donated to a charity of our choosing.
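
The modifier arithmetic in the schedule above is simple enough to check for yourself. A quick sketch (typical amounts as of June 2015; the Low row in the table doesn’t follow the simple multiplier pattern):

  BASE = {"critical": 2000, "high": 1000, "moderate": 500}
  MODIFIER = {"report": 1.0, "test_case": 1.5,
              "cts_or_patch": 2.0, "cts_and_patch": 4.0}

  def reward(severity, submission):
      """Typical reward under the June 2015 schedule."""
      return int(BASE[severity] * MODIFIER[submission])

  assert reward("critical", "cts_and_patch") == 8000  # $2,000 x 4
  assert reward("moderate", "test_case") == 750       # $500 x 1.5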

You need to decide what you want to earn per hour and then decide if this reward program meets your needs.

Personally, I would call the US Navy about its recently advertised zero-day program. (Officially withdrawn, but it doesn’t hurt to ask.)

The White House should establish a public auction for zero-day exploits. Vulnerabilities would get more publicity and buyers would be on a fairer footing. Let the free market decide what vulnerabilities are worth.

Apple CORED: [And A $Billion Theft]

Filed under: Cybersecurity,Security — Patrick Durusau @ 1:20 pm

Apple CORED: Boffins reveal password-killer 0-days for iOS and OS X by Darren Pauli.

From the post:

Six university researchers have revealed deadly zero-day flaws in Apple’s iOS and OS X, claiming it is possible to crack Apple’s password-storing keychain, break app sandboxes, and bypass its App Store security checks.

Attackers can exploit these bugs to steal passwords from installed apps, including the native email client, without being detected.

The team was able to upload malware to Apple’s app stores, and passed the vetting processes without triggering any alarms. That malware, when installed on a victim’s Mac, raided the keychain to steal passwords for services including iCloud and the Mail app, and all those stored within Google Chrome.

Lead researcher Luyi Xing told El Reg he and his team complied with Apple’s request to withhold publication of the research for six months, but had not heard back as of the time of writing.

They say the holes are still present in Apple’s software, meaning their work will likely be consumed by miscreants looking to weaponize the work.

Apple was not available for immediate comment.

The Indiana University boffins Xing; Xiaolong Bai; XiaoFeng Wang; and Kai Chen joined Tongxin Li, of Peking University, and Xiaojing Liao, of Georgia Institute of Technology, to develop the research, which is detailed in a paper titled Unauthorized Cross-App Resource Access on Mac OS X and iOS.

See Darren’s post for more non-technical details and the paper for the full monty.

Is your first impulse on reading about Apple CORED to run out and buy stock in security software companies? No? Perhaps because security software doesn’t address the underlying problem? Or perhaps because security software adds its own vulnerabilities to the maze of known and unknown vulnerabilities in your IT stack?

You should feel smart today because a lot of investors aren’t as bright as you. The Billion-Dollar Bet That Better Software Can Back Off Hackers by Joseph Ciolli, is more appropriately titled: The Billion-Dollar Theft That Better Software Can Back Off Hackers.

From Joseph’s post:

About the only thing rising as fast as online mischief is the stock of firms trying to thwart it.

Companies from FireEye Inc. to Palo Alto Networks Inc. have taken off in 2015, extending the gain in a four-year-old index tracking network security firms past 200 percent. An exchange-traded fund tied to the shares just surpassed $1 billion in market value, having doubled in size since the start of April.

Online intrusions such as last week’s breach of confidential government employee records have fanned cyber paranoia, boosting spending on network security and making darlings out of companies with a hand in safeguarding digital data. The theme is starting to feed on itself among investors afraid of being left out.

“This is an industry where the higher the price goes, the more attractive it becomes to people,” said John Manley, who helps oversee about $233 billion as chief equity strategist for Wells Fargo Funds Management in New York. “It’s been a herd mentality in these stocks as these companies offer a level of security that wasn’t offered before.”

A 24 percent gain in 2015 and a frenzy of investor inflows swelled the market value of the PureFunds ISE Cyber Security ETF past the $1 billion threshold on Tuesday. That’s up from $107 million at the start of the year and $494 million at the end of the first quarter.

I have been practicing with Gimp, and here is how I would visualize the present security dilemma:

[Image: the present IT stack, holes all the way down]

As you can see, your present IT stack is holes all the way down. That is the crux of the current security dilemma. Not that the network has vulnerabilities or your OS has vulnerabilities or that applications have vulnerabilities or that your hardware has vulnerabilities. Holes all the way down.

The $billion software solution makes your IT stack look like this:

[Image: the present IT stack with a security software layer on top]

I colored the security software contribution to your stack in red to symbolize its impact on your bottom line.

The answer to cybersecurity is NOT better but leaky security software on top of a leaky IT stack.

The answer to cybersecurity IS better software in the IT stack. Security as an integral part of software design.

With the rising tide of data breaches, it won’t be long before some judge is quoting these lines in a judgment against a software vendor:

To establish the manufacturer’s liability it was sufficient that plaintiff proved he was injured while using the [product] in a way it was intended to be used as a result of a defect in the design and manufacture of which the plaintiff was not aware that made the [product] unsafe for its intended use. (Greenman v. Yuba Power Products, Justice Traynor)

I found that quote in a delightful summary of product liability law, An Introduction to Product Liability Law by Dennis W. Stearns. The summary runs only thirteen pages; 1,000+ page tomes exist on the subject. Use the summary as a starting point, not an end point.

