Another Word For It: Patrick Durusau on Topic Maps and Semantic Diversity

October 10, 2018

Dodging Paywalls: Zotero Adds Improved PDF Retrieval

Filed under: Open Source,Zotero — Patrick Durusau @ 2:57 pm

How often do you hit paywalls? Every week? Every day?

You find an article of interest to ~200 researchers in your sub-field and the publisher wants $39.95 for you to “buy” the article. The research was free to the publisher, usually supported by public grants. The copy editing and peer review were free to the publisher. Yet they are squatting like Cerberus over value they didn’t create.

Not quite Hercules or Virgil, but Zotero makes it easier to find open-access PDFs to replace those behind paywalls.

Improved PDF retrieval with Unpaywall integration

From the post:

As an organization dedicated to developing free and open-source research tools, we care deeply about open access to scholarship. With the latest version of Zotero, we’re excited to make it easier than ever to find PDFs for the items in your Zotero library.

While Zotero has always been able to download PDFs automatically as you save items from the web, these PDFs are often behind publisher paywalls, putting them out of reach of many people.

Enter Unpaywall, a database of legal, full-text articles hosted by publishers and repositories around the world. Starting in Zotero 5.0.56, if you save an item from a webpage where Zotero can’t find or access a PDF, Zotero will automatically search for an open-access PDF using data from Unpaywall.
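The Unpaywall data Zotero queries is also exposed through a public REST API, so you can run the same lookup yourself. Here is a minimal sketch in Python, using the requests library; the DOI and email address are placeholders, and the field names follow Unpaywall’s v2 API:

    import requests

    def find_oa_pdf(doi, email):
        """Ask Unpaywall for a legal open-access copy of a DOI."""
        # The v2 endpoint takes the DOI in the path and a contact
        # email address as a query parameter.
        resp = requests.get(
            f"https://api.unpaywall.org/v2/{doi}",
            params={"email": email},
            timeout=10,
        )
        resp.raise_for_status()
        record = resp.json()
        # best_oa_location is null when no open-access copy is known.
        location = record.get("best_oa_location")
        if location:
            return location.get("url_for_pdf") or location.get("url")
        return None

    if __name__ == "__main__":
        # Placeholder DOI; substitute one you actually hit a paywall on.
        print(find_oa_pdf("10.7717/peerj.4375", "you@example.org"))

Zotero now runs this lookup for you on save; the sketch just shows how little stands between a DOI and a legal PDF.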

Which reminds me, I need to upgrade my current Zotero installation!

Don’t forget to harry, harass and penalize those who seek to deny access to materials being produced on 17th century economic models. Whatever befalls them, it won’t be severe enough.

Microsoft Open-Sources Patent Portfolio: OIN ~1,300 + 60,000 = ~61,300 Patents

Filed under: Open Source,Patents — Patrick Durusau @ 1:06 pm

Kudos! Microsoft Open-Sources Patent Portfolio by Steven J Vaughan-Nichols.

From the post:

Several years ago, I said the one thing Microsoft has to do — to convince everyone in open source that it’s truly an open-source supporter — is stop using its patents against Android vendors. Now, it’s joined the Open Invention Network (OIN), an open-source patent consortium. Microsoft has essentially agreed to grant a royalty-free and unrestricted license to its entire patent portfolio to all other OIN members.

Before Microsoft joined, OIN had more than 2,650 community members and owns more than 1,300 global patents and applications. OIN is the largest patent non-aggression community in history and represents a core set of open-source intellectual-property values. Its members include Google, IBM, Red Hat, and SUSE. The OIN patent license and member cross-licenses are available royalty-free to anyone who joins the OIN community.

In a conversation, Erich Andersen, Microsoft’s corporate vice president and chief intellectual property (IP) counsel — that is, Microsoft’s top patent person — added: We “pledge our entire patent portfolio to the Linux system. That’s not just the Linux kernel, but other packages built on it.”

This is huge

How many patents does this affect? Andersen said Microsoft is bringing 60,000 patents to OIN.
(emphasis in original)

If approximately 1,300 patents attracted members to Open Invention Network (OIN), imagine the attractive force exerted by an additional 60,000!

Suggestion: None of us are who we were yesterday, much less ten or twenty years ago. Let’s take these new facts on the patent landscape and move forward.

Discussions of “could have, should have, what if we had, etc.,” are non-contributions to building a new tomorrow.

February 8, 2018

Porn, AI and Open Source Ethics

Filed under: Artificial Intelligence,Deep Learning,Open Source,Porn,TensorFlow — Patrick Durusau @ 4:18 pm

Google Gave the World Powerful AI Tools, and the World Made Porn With Them by Dave Gershgorn.

From the post:

In 2015, Google announced it would release its internal tool for developing artificial intelligence algorithms, TensorFlow, a move that would change the tone of how AI research and development would be conducted around the world. The means to build technology that could have an impact as profound as electricity, to borrow phrasing from Google’s CEO, would be open, accessible, and free to use. The barrier to entry was lowered from a Ph.D to a laptop.

But that also meant TensorFlow’s undeniable power was now out of Google’s control. For a little over two years, academia and Silicon Valley were still the ones making the biggest splashes with the software, but now that equation is changing. The catalyst is deepfakes, an anonymous Reddit user who built AI software that automatically stitches any image of a face (nearly) seamlessly into a video. And you can probably imagine where this is going: As first reported by Motherboard, the software was being used to put anyone’s face, such as a famous woman or friend on Facebook, on the bodies of porn actresses.

After the first Motherboard story, the user created their own subreddit, which amassed more than 91,000 subscribers. Another Reddit user called deepfakeapp has also released a tool called FakeApp, which allows anyone to download the AI software and use it themselves, given the correct hardware. As of today, Reddit has banned the community, saying it violated the website’s policy on involuntary pornography.

According to FakeApp’s user guide, the software is built on top of TensorFlow. Google employees have pioneered similar work using TensorFlow with slightly different setups and subject matter, training algorithms to generate images from scratch. And there are plenty of potentially fun (if not inane) uses for deepfakes, like putting Nicolas Cage in a bunch of different movies. But let’s be real: 91,000 people were subscribed to deepfakes’ subreddit for the porn.

While much good has come from TensorFlow being open source, like potential cancer detection algorithms, FakeApp represents the dark side of open source. Google (and Microsoft and Amazon and Facebook) have loosed immense technological power on the world with absolutely no recourse. Anyone can download AI software and use it for anything they have the data to create. That means everything from faking political speeches (with help from the cadre of available voice-imitating AI) to generating fake revenge porn. All digital media is a series of ones and zeroes, and artificial intelligence is proving itself proficient at artfully arranging them to generate things that never happened.
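Gershgorn’s line about the barrier to entry dropping “from a Ph.D to a laptop” is easy to check. A toy sketch using TensorFlow’s Keras API, with random arrays standing in for real data, defines and trains a neural network in under twenty lines:

    import numpy as np
    import tensorflow as tf

    # Random stand-in data: 1,000 samples, 64 features, two classes.
    x = np.random.rand(1000, 64).astype("float32")
    y = np.random.randint(0, 2, size=(1000,))

    # A small fully connected network, defined in a few lines.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Training is a single call; swap the random arrays for real data
    # (faces, audio, whatever) and the workflow is unchanged.
    model.fit(x, y, epochs=5, batch_size=32)

Nothing in those lines cares whether the data is benign or not, which is Gershgorn’s point and mine.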

You can imagine the rest or read the rest of Gershgorn’s (deep voice): “dark side of open source.”

While you do, remember that Gershgorn would have made the same claims about:

  1. Telephones
  2. Photography
  3. Cable television
  4. Internet
  5. etc.

The simplest rejoinder is that the world did not create porn with AI. A tiny subset of the world signed up to see porn created by an even smaller subset of the world.

The next simplest rejoinder is the realization that Gershgorn wants a system that dictates ethics to users of open source software. Gershgorn should empower an agency to enforce ethics on journalists and check back in a couple of years to report on their experience.

I’m willing to bet ahead of time it won’t be a happy report.

Bottom line: Leave the ethics of open source software to the people using such software. That may not always have a happy outcome, but it will always be better than the alternatives.

December 10, 2017

Releasing Failed Code to Distract from Accountability

Filed under: Government,Open Source,Programming,Project Management — Patrick Durusau @ 11:16 am

Dutch government publishes large project as Free Software by Carmen Bianca Bakker.

From the post:

The Dutch Ministry of the Interior and Kingdom Relations released the source code and documentation of Basisregistratie Personen (BRP), a 100M€ IT system that registers information about inhabitants within the Netherlands. This comes as a great success for Public Code, and the FSFE applauds the Dutch government’s shift to Free Software.

Operation BRP is an IT project by the Dutch government that has been in the works since 2004. It has cost Dutch taxpayers upwards of 100 million Euros and has endured three failed attempts at revival, without anything to show for it. From the outside, it was unclear what exactly was costing taxpayers so much money with very little information to go on. After the plug had been pulled from the project earlier this year in July, the former interior minister agreed to publish the source code under pressure of Parliament, to offer transparency about the failed project. Secretary of state Knops has now gone beyond that promise and released the source code as Free Software (a.k.a. Open Source Software) to the public.

In 2013, when the first smoke signals showed, the former interior minister initially wanted to address concerns about the project by providing limited parts of the source code to a limited amount of people under certain restrictive conditions. The ministry has since made a complete about-face, releasing a snapshot of the (allegedly) full source code and documentation under the terms of the GNU Affero General Public License, with the development history soon to follow.

As far as the “…complete about-face…” goes, the American expression is: “You’ve been had.”

By appearing to agonize over the release of the source code, the “former interior minister” has made it appear the public has won a great victory for transparency.

Actually not.

Does the “transparency” offered by the source code show who authorized the expenditure of each part of the 100M€ total and who was paid that 100M€? Does source code “transparency” disclose project management decisions and which government officials approved those decisions? For that matter, does source code “transparency” disclose discussions of project choices at all and who was present at those discussions?

It’s not hard to see that source code “transparency” is a deliberate failure on the part of the Dutch Ministry of the Interior and Kingdom Relations to be transparent. It has withheld, quite deliberately, any information that would enable Dutch citizens, programmers or otherwise, to have informed opinions about the failure of this project. Or to hold anyone accountable for its failure.

This may be:

…an unprecedented move of transparency by the Dutch government….

but only if the Dutch government is a black hole in terms of meaningful accountability for its software projects.

Which appears to be the case.

PS: Assuming Dutch citizens can pry project documentation out of the secretive Dutch Ministry of the Interior and Kingdom Relations, I know some Dutch topic mappers could assist with establishing transparency. If that’s what you want.

September 8, 2017

Unpatched Windows Vulnerability – Cost of Closed Source Software

Filed under: Cybersecurity,Microsoft,Open Source,Security — Patrick Durusau @ 3:40 pm

Bug in Windows Kernel Could Prevent Security Software From Identifying Malware by Catalin Cimpanu.

From the post:

Malware developers can abuse a programming error in the Windows kernel to prevent security software from identifying if, and when, malicious modules have been loaded at runtime.

Continue on with Cimpanu for a good overview or catch Windows’ PsSetLoadImageNotifyRoutine Callbacks: the Good, the Bad and the Unclear (Part 1).

Symantec says proactive security includes:

  • Inventory of Authorized and Unauthorized Devices
  • Inventory of Authorized and Unauthorized Software
  • Secure Configurations for Hardware & Software
  • Constant Vulnerability Assessment and Remediation
  • Malware Defense

But since Windows is closed source software, you can’t remedy the vulnerability. Whatever your cyberdefenses, closed source MS Windows leaves you vulnerable.

Eternal (possibly) vulnerability – the cost of closed source software.

It’s hard to think of a better argument for open source software.

Open source software need not be free, just open source so you can fix it if broken.

PS: Open source enables detection of government malware.

August 2, 2017

Continue Flash? As what? Example of insecure coding?

Filed under: Cybersecurity,Open Source — Patrick Durusau @ 1:34 pm

Some People Want Adobe Flash to Continue as an Open Source Project by Derick Sullivan M. Lobga.

From the post:

Last week we heard the good news that Adobe is officially killing Flash in 2020.

This news was well received by developers and end users alike. Well, at least most people liked the demise of Adobe Flash. But it seems that Adobe Flash has still some fans left.

A group of developers at GitHub have come up with a petition to “save Adobe Flash”. Just a few days after the announcement by Adobe, Juha Linstedt, a web developer with username “Pakastin” on GitHub started a petition calling on Adobe to allow for open source Flash, which he thinks is part of Internet history.

Losing Flash altogether will impair access to resources developed using Flash, but even as open source, preserving Flash strikes me as the equivalent of preserving smallpox for later study.

If Adobe does open source the necessary components, it could have value as examples of how not to code an application. Or for testing of code auditing tools.

July 6, 2017

Kaspersky: Is Source Code Disclosure Meaningful?

Filed under: Cybersecurity,Government,Open Source,Security — Patrick Durusau @ 2:23 pm

Responding to a proposed ban of Kaspersky Labs software, Eugene Kaspersky, chief executive of Kaspersky, is quoted in Russia’s Kaspersky Lab offers up source code for US government scrutiny, as saying:

The chief executive of Russia’s Kaspersky Lab says he’s ready to have his company’s source code examined by U.S. government officials to help dispel long-lingering suspicions about his company’s ties to the Kremlin.

In an interview with The Associated Press at his Moscow headquarters, Eugene Kaspersky said Saturday that he’s also ready to move part of his research work to the U.S. to help counter rumors that he said were first started more than two decades ago out of professional jealousy.

“If the United States needs, we can disclose the source code,” he said, adding that he was ready to testify before U.S. lawmakers as well. “Anything I can do to prove that we don’t behave maliciously I will do it.”

Personally I think Kaspersky is about to be victimized by anti-Russia hysteria, where repetition of rumors, not facts, is the coin of the realm.

Is source code disclosure meaningful? A question applicable to Kaspersky disclosures to U.S. government officials, or Microsoft or Oracle disclosures of source code to foreign governments.

My answer is no, at least if you mean source code disclosure limited to governments or other clients.

Here’s why:

  • Limited competence: For the FBI in particular, source code disclosure is meaningless. Recall the FBI blew away $170 million in the Virtual Case File project with nothing to show and no prospect of a timeline, after four years of effort.
  • Limited resources: Guido Vranken‘s The OpenVPN post-audit bug bonanza demonstrates that after two (2) manual audits, vulnerabilities remain to be found in OpenVPN. Unlike OpenVPN, any source code given to a government will be reviewed at most once and then only by a limited number of individuals. Contrast that with OpenVPN, which has been reviewed for years by a large number of people and yet flaws remain to be discovered.
  • Limited staff: Closely related to my point about limited resources, the people in government who are competent to undertake a software review are already busy with other tasks. Most governments don’t have a corps of idle but competent programmers waiting for source code disclosures to evaluate. Whatever source code review takes place, it will be the minimum required and that only as other priorities allow.

If Kaspersky Labs were to open source but retain copyright on their software, then their source code could be reviewed by:

  • As many competent programmers as are interested
  • On an ongoing basis
  • By people with varying skills and approaches to software auditing

Setting a new standard, that is, open source but copyrighted for security software, would be to the advantage of leaders in Gartner’s Magic Quadrant; others, not so much.

It’s entirely possible for someone to compile source code and avoid paying a license fee but seriously, is anyone going to pursue pennies on the ground when there are $100 bills blowing overhead? The $100 bills here being auditing, code review, transparency, and trust. (I know, the RIAA chases pennies but it’s run by delusional paranoids.)

Three additional reasons for Kaspersky to go open source but copyrighted:

  • Angst among its more poorly managed competitors will soar.
  • Example for government-mandated open source but copyrighted software for domestic sales. (Think China, EU, Russia.)
  • Front page news featuring Kaspersky Labs as breaking away from the pack.

Entirely possible for Kaspersky to take advantage of the narrow-minded nationalism now so popular in some circles of the U.S. government. Not to mention changing the landscape of security software to its advantage.

May 23, 2017

China Draws Wrong Lesson from WannaCry Ransomware

Filed under: Cybersecurity,Government,NSA,Open Source,Security — Patrick Durusau @ 7:48 pm

Chinese state media says US should take some blame for cyberattack

From the post:


China’s cyber authorities have repeatedly pushed for what they call a more “equitable” balance in global cyber governance, criticizing U.S. dominance.

The China Daily pointed to the U.S. ban on Chinese telecommunication provider Huawei Technologies Co Ltd, saying the curbs were hypocritical given the NSA leak.

Beijing has previously said the proliferation of fake news on U.S. social media sites, which are largely banned in China, is a reason to tighten global cyber governance.

The newspaper said that the role of the U.S. security apparatus in the attack should “instill greater urgency” in China’s mission to replace foreign technology with its own.

The state-run People’s Daily compared the cyber attack to the terrorist hacking depicted in the U.S. film “Die Hard 4”, warning that China’s role in global trade and internet connectivity opened it to increased risks from overseas.

China is certainly correct to demand a place at the table for itself and other world powers in global cyber governance.

But China is drawing the wrong lesson from the WannaCry ransomware attacks if that is used as motivation for closed source Chinese software to replace “foreign” technology.

NSA staffers may well be working for Microsoft and/or Oracle, embedding NSA-produced code in their products. With closed source code, it isn’t possible to verify the absence of such code or to prevent its introduction.

Sadly, the same is true if closed source code is written by Chinese programmers, some of whom may have agendas, domestic or foreign, of their own.

The only defense to rogue code is to invest in open source projects. Not everyone will read every line of code, but being available to be read is a deterrent to obvious subversion of an application’s security.

China should have “greater urgency” to abandon closed source software, but investing in domestic closed source only replicates the mistake of investing in foreign closed source software.

Open source projects cover every office, business and scientific need.

Chinese government support for Chinese participation in existing and new open source projects can make those projects competitors to closed, potentially spyware-laden products.

The U.S. made the closed source mistake for critical cyber infrastructure. China should not make the same mistake.

December 31, 2016

Getting Started in Open Source: A Primer for Data Scientists

Filed under: Data Science,Open Source — Patrick Durusau @ 4:26 pm

Getting Started in Open Source: A Primer for Data Scientists by Rebecca Bilbro.

From the post:

The phrase "open source” evokes an egalitarian, welcoming niche where programmers can work together towards a common purpose — creating software to be freely available to the public in a community that sees contribution as its own reward. But for data scientists who are just entering into the open source milieu, it can sometimes feel like an intimidating place. Even experienced, established open source developers like Jon Schlinkert have found the community to be less than welcoming at times. If the author of more than a thousand projects, someone whose scripts are downloaded millions of times every month, has to remind himself to stay positive, you might question whether the open source community is really the developer Shangri-la it would appear to be!

And yet, open source development does have a lot going for it:

  • Users have access to both the functionality and the methodology of the software (as opposed to just the functionality, as with proprietary software).
  • Contributors are also users, meaning that contributions track closely with user stories, and are intrinsically (rather than extrinsically) motivated.
  • Everyone has equal access to the code, and no one is excluded from making changes (at least locally).
  • Contributor identities are open to the extent that a contributor wants to take credit for her work.
  • Changes to the code are documented over time.

So why start a blog post for open source noobs with a quotation from an expert like Jon, especially one that paints such a dreary picture? It's because I want to show that the bar for contributing is… pretty low.

Ask yourself these questions: Do you like programming? Enjoy collaborating? Like learning? Appreciate feedback? Do you want to help make a great open source project even better? If your answer is 'yes' to one or more of these, you're probably a good fit for open source. Not a professional programmer? Just getting started with a new programming language? Don't know everything yet? Trust me, you're in good company.

Becoming a contributor to an open source project is a great way to support your own learning, to get more deeply involved in the community, and to share your own unique thoughts and ideas with the world. In this post, we'll provide a walkthrough for data scientists who are interested in getting started in open source — including everything from version control basics to advanced GitHub etiquette.

Two of Rebecca’s points are more important than the rest:

  • the bar for contributing is low
  • contributing builds community and a sense of ownership

Will 2017 be the year you move from the sidelines of open source and into the game?

August 29, 2016

Open Source Software & The Department of Defense

Filed under: Government,Military,Open Source — Patrick Durusau @ 1:21 pm

Open Source Software & The Department of Defense by Ben FitzGerald, Peter L. Levin, and Jacqueline Parziale.

A great resource for sharing with Department of Defense (DoD) staff who may be in positions to influence software development and acquisition policies.

In particular you may want to point to the “myths” about security and open source software:

Discussion of open source software in national security is often dismissed out of hand because of technical security concerns. These are unfounded.

To debunk a few myths:

  • Using open source licensing does not mean that changes to the source code must be shared publicly.
  • The ability to see source code is not the same as the ability to modify deployed software in production.
  • Using open source components is not equivalent to creating an entire system that is itself open sourced.

As In-Q-Tel’s Chief Information Security Officer Dan Geer explains, security is “the absence of unmitigatable surprise.”23 It is particularly difficult to mitigate surprise with closed proprietary software, because the source code, and therefore the ability to identify and address its vulnerabilities, is hidden. “Security through obscurity” is not an effective defense against today’s cybersecurity threats.

In this context, open source software can generate better security outcomes than proprietary alternatives. Conventional anti-malware scanning and intrusion detection are inadequate for many reasons, including their “focus on known vulnerabilities” that miss unknown threats, such as zero-day exploits. As an example, a DARPA-funded team built a flight controller for small quadcopter drones based on an open source autopilot readily downloaded from the Internet. A red team “found no security flaws in six weeks with full access [to the] source code,” making their UAV the most secure on the planet.24

Except that “security” to a DoD contractor has little to do with software security.

No, for a DoD contractor, “security” means change orders, which trigger additional software development cycles (largely unauditable), software testing, and changes to documentation, all of which could be negatively impacted by “…an open source autopilot.”

If open source is used, there are fewer billing opportunities and that threatens the “security” of DoD contractors.

The paper makes a great case for why the DoD should make greater use of open source software and development practices, but the DoD will have to break the stranglehold of a number of current DoD contractors to do so.

August 8, 2016

U.S. Government Open Source Pilot – Hidden Costs? (Vulnerabilities?)

Filed under: Government,Open Source — Patrick Durusau @ 4:38 pm

Federal Source Code Policy: Achieving Efficiency, Transparency, and Innovation through Reusable and Open Source Software by Tony Scott and Anne E. Rung.

From the post:

The U.S. Government is committed to improving the way Federal agencies buy, build, and deliver information technology (IT) and software solutions to better support cost efficiency, mission effectiveness, and the consumer experience with Government programs. Each year, the Federal Government spends more than $6 billion on software through more than 42,000 transactions.1 A significant proportion of software used by the Government is comprised of either preexisting Federal solutions or commercial solutions. These solutions include proprietary, open source, and mixed source2 code and often do not require additional custom code development.

When Federal agencies are unable to identify an existing Federal or commercial software solution that satisfies their specific needs, they may choose to develop a custom software solution on their own or pay for its development. When agencies procure custom-developed source code, however, they do not necessarily make their new code (source code or code) broadly available for Federal Government-wide reuse. Even when agencies are in a position to make their source code available on a Government-wide basis, they do not make such code available to other agencies in a consistent manner. In some cases, agencies may even have difficulty establishing that the software was produced in the performance of a Federal Government contract. These challenges may result in duplicative acquisitions for substantially similar code and an inefficient use of taxpayer dollars. This policy seeks to address these challenges by ensuring that new custom-developed Federal source code be made broadly available for reuse across the Federal Government.3 This is consistent with the Digital Government Strategy’s “Shared Platform” approach, which enables Federal employees to work together—both within and across agencies—to reduce costs, streamline development, apply uniform standards, and ensure consistency in creating and delivering information.4 Enhanced reuse of custom-developed code across the Federal Government can have significant benefits for American taxpayers, including decreasing duplicative costs for the same code and reducing Federal vendor lock-in.5

This policy also establishes a pilot program that requires agencies, when commissioning new custom software, to release at least 20 percent of new custom-developed code as Open Source Software (OSS) for three years, and collect additional data concerning new custom software to inform metrics to gauge the performance of this pilot.6 (footnotes omitted)

This open source pilot is a good example of government leadership. After open source has become the virtual default of private industry, the government decided to conduct a three-year pilot project to assess the concept.

Not a bad idea but someone needs to ramp up to track every open source release from the federal government.

Such releases need to be evaluated for the costs of new security bugs introduced into the software ecosystem and for the effects of poor programming practices on software development.

Otherwise, a rosy picture of reduced duplicative costs for the same code may conceal higher software costs due to widespread security vulnerabilities.

Trust is ok, verification is better.

May 28, 2016

Danger! Danger! Oracle Attorney Defends GPL

Filed under: Fair Use,Intellectual Property (IP),Open Source — Patrick Durusau @ 10:49 am

Op-ed: Oracle attorney says Google’s court victory might kill the GPL by Annette Hurst.

From the header:

Annette Hurst is an attorney at Orrick, Herrington & Sutcliffe who represented Oracle in the recent Oracle v. Google trial. This op-ed represents her own views and is not intended to represent those of her client or Ars Technica.

The Oracle v. Google trial concluded yesterday when a jury returned a verdict in Google’s favor. The litigation began in 2010, when Oracle sued Google, saying that the use of Java APIs in Android violated copyright law. After a 2012 trial, a judge held that APIs can’t be copyrighted at all, but that ruling was overturned on appeal. In the trial this month, Google successfully argued that its use of Java APIs, about 11,500 lines of code in all, was protected by “fair use.”

I won’t propagate Annette’s rant but you can read it for yourself at: http://arstechnica.com/tech-policy/2016/05/op-ed-oracle-attorney-says-googles-court-victory-might-kill-the-gpl/.

What are free software supporters to make of their long-time deranged, drooling critic expressing support for the GPL?

Should they flee as if pursued by wraiths on wings?

Should they stuff their cloaks in their ears?

Are these like the lies of Saruman?

Or perhaps better, Wormtongue?

My suggestion? Point to Annette’s rant to alert others but don’t repeat it, don’t engage it, just pass over it in silence.

Repeating evil counsel gives it legitimacy.

Yours.

May 13, 2016

Vulnerable 7-Zip As Poster Child For Open Source

Filed under: Compression,Cybersecurity,Open Source,Security — Patrick Durusau @ 12:28 pm

Anti-virus products, security devices affected by 7-Zip flaws by David Bisson.

From the post:


But users be warned. Cisco Talos recently discovered multiple vulnerabilities in 7-Zip that are more serious than regular security flaws. As explained in a blog post by Marcin Noga and Jaeson Schultz, two members of the Cisco Talos Security Intelligence & Research Group:

“These type of vulnerabilities are especially concerning since vendors may not be aware they are using the affected libraries. This can be of particular concern, for example, when it comes to security devices or antivirus products. 7-Zip is supported on all major platforms, and is one of the most popular archive utilities in-use today. Users may be surprised to discover just how many products and appliances are affected.”

Cisco Talos has identified two flaws in particular. The first (CVE-2016-2335) is an out-of-bounds read vulnerability that exists in the way 7-Zip handles Universal Disk Format (UDF) files. An attacker could potentially exploit this vulnerability to achieve arbitrary code execution.

The “many products and appliances” link resolves to a web search, and the post suggests a second search string. (Screenshots of the two search results omitted.)

Every instance of software running a vulnerable 7-Zip library is subject to this hack. A number likely larger than the total 2,490,000 shown by these two searches.

For open source software, you can check to see if it has been upgraded to 7-Zip version 16.00.

If you have non-open source software, how are you going to check for the upgrade?

Given the lack of liability under the usual EULA, are you really going to take a vendor’s word for the upgrade?

The vulnerable 7-Zip library is a great poster child for open source software.

Not only for the discovery of flaws but to verify vendors have properly patched those flaws.
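As a rough illustration of the verification step, here is a sketch in Python. It assumes a 7z command-line binary on the PATH whose startup banner includes the version number, which recent builds print:

    import re
    import subprocess

    FIXED = (16, 0)  # the UDF flaws (CVE-2016-2335 et al.) were fixed in 16.00

    def installed_7zip_version():
        """Parse the version from 7z's startup banner, e.g. '7-Zip [64] 16.02 ...'."""
        out = subprocess.run(["7z"], capture_output=True, text=True).stdout
        match = re.search(r"7-Zip.*?(\d+)\.(\d+)", out)
        if not match:
            raise RuntimeError("could not parse the 7z banner")
        return int(match.group(1)), int(match.group(2))

    if __name__ == "__main__":
        major, minor = installed_7zip_version()
        status = "OK" if (major, minor) >= FIXED else "VULNERABLE: upgrade to 16.00+"
        print(f"7-Zip {major}.{minor:02d}: {status}")

That only covers a standalone 7-Zip install; the embedded copies inside closed source products are exactly the ones you cannot check this way.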

April 20, 2016

EU Too Obvious With Wannabe Monopoly Antics

Filed under: EU,Open Source,Software — Patrick Durusau @ 4:31 pm

If you ever had any doubts (I didn’t) that the EU is as immoral as any other government, recent moves by the EU in the area of software will cure those.

In EU hits Google with second antitrust charge, Foo Yun Chee reports:

EU antitrust regulators said that by requiring mobile phone manufacturers to pre-install Google Search and the Google Chrome browser to get access to other Google apps, the U.S. company was harming consumers by stifling competition.

Show of hands. How many of you think the EU gives a sh*t about consumers?

Yeah, that’s what I thought as well.

Or as Chee quotes European Competition Commissioner Margrethe Vestager:

“We believe that Google’s behavior denies consumers a wider choice of mobile apps and services and stands in the way of innovation by other players,” she said.

Hmmm, “other players.” Those don’t sound like consumers, those sound like people who will be charging consumers.

If you need confirmation of that reading, consider Anti-innovation: EU excludes open source from new tech standards by Glyn Moody.

From the post:


“Open” is generally used in the documents to denote “open standards,” as in the quotation above. But the European Commission is surprisingly coy about what exactly that phrase means in this context. It is only on the penultimate page of the ICT Standardisation Priorities document that we finally read the following key piece of information: “ICT standardisation requires a balanced IPR [intellectual property rights] policy, based on FRAND licensing terms.”

It’s no surprise that the Commission was trying to keep that particular detail quiet, because FRAND licensing—the acronym stands for “fair, reasonable, and non-discriminatory”—is incompatible with open source, which will therefore find itself excluded from much of the EU’s grand new Digital Single Market strategy. That’s hardly a “balanced IPR policy.”

Glyn goes on to say that FRAND licensing is the result of lobbying by American technology giants, but that seems unlikely.

The EU has attempted to favor EU-origin “allegedly” competitive software for years.

I say “allegedly” because the EU never points to competitive software in its antitrust proceedings that was excluded, only to the speculation that but for those evil American monopolists, there would be this garden of commercial and innovative European software. You bet.

There is a lot of innovative European software, but it hasn’t been produced in the same mindset that afflicts officials at the EU. They are fixated on an out-dated software sales/licensing model. Consider the rising number of companies based on nothing but open source if you want a sneak peek at the market of the future.

Being mired in market models from the past, the EU sees only protectionism (the Google complaint) and out-dated notions of software licensing (FRAND) as foundations for promoting a software industry in Europe.

Not to mention the provincialism of the EU makes it the enemy of a growing software industry in Europe. Did you know that EU-funded startups are limited to hiring EU residents? (Or so I have been told, by EU startups.) It certainly works that way with EU awards.

There is nothing inconsistent with promoting open source and a vibrant EU software industry, so long as you know something about both. Knowing nothing about either has led the EU astray.

March 14, 2016

Open Source Clojure Projects

Filed under: Clojure,Functional Programming,Open Source,Programming — Patrick Durusau @ 12:28 pm

Open Source Clojure Projects by Daniel Higginbotham.

Daniel Higginbotham, of Clojure for the Brave and True, has posted this listing of open source Clojure projects with the blurb:

Looking to improve your skills and work with real code? These projects are under active development and welcome new contributors.

You can see the source at: https://github.com/braveclojure/open-source, where it says:

Pull requests welcome!

Do you know of any other open source Clojure projects that welcome new contributors?

Like yours?

Just by way of example, marked as “beginner friendly,” you will find:

alda – A general purpose music programming language

Avi – A lively vi (a spec & implementation of vim)

clj-rethinkdb – An idiomatic RethinkDB client for Clojure

For the more sure-footed:

ClojureCL – Parallel computations on the GPU with OpenCL 2.0 in Clojure

Enjoy!

March 13, 2016

2015 Open Source Yearbook (without email conscription)

Filed under: Open Source,Software — Patrick Durusau @ 8:08 pm

Publication of the 2015 Open Source Yearbook is good news!

Five or six “clicks” and having my email conscripted to obtain a copy, not so much.

For your reading pleasure with one-click access:

The 2015 Open Source Yearbook.

Impressive work, but marred by convoluted access and email conscription.

If you want to make a resource “freely” available, do so. Don’t extort contact information for “free” information.

I’m leading conference calls tomorrow or else I would be reading the 2015 Open Source Yearbook during my calls!

March 10, 2016

Government Source Code Policy

Filed under: Government,Open Source,Software — Patrick Durusau @ 9:28 pm

Government Source Code Policy

From the webpage:

The White House committed to adopting a Government-wide Open Source Software policy in its Second Open Government National Action Plan that “will support improved access to custom software code developed for the Federal Government,” emphasizing that using and contributing back to open source software can fuel innovation, lower costs, and benefit the public.[1] In support of that commitment, today the White House Office of Management and Budget (OMB) is releasing a draft policy to improve the way custom-developed Government code is acquired and distributed moving forward. This policy is consistent with the Federal Government’s long-standing policy of ensuring that “Federal investments in IT (information technology) are merit-based, improve the performance of our Government, and create value for the American people.”[2]

This policy requires that, among other things: (1) new custom code whose development is paid for by the Federal Government be made available for reuse across Federal agencies; and (2) a portion of that new custom code be released to the public as Open Source Software (OSS).

We welcome your input on this innovative draft policy. We are especially interested in your comments on considerations regarding the release of custom code as OSS. The draft policy proposes a pilot program requiring covered agencies to release at least 20 percent of their newly-developed custom code, in addition to the release of all custom code developed by Federal employees at covered agencies as part of their official duties, subject to certain exceptions as noted in the main body of the policy.[3]

In some absolute sense this is a step forward from the present practices of the government with regard to source code that it develops or pays to have developed.

On the other hand, what’s difficult about saying that all code (not 20%) developed by or at the direction of the federal government must be deposited under an Apache license within 90 days of its posting to any source code repository? Subject to national security exceptions, with notice given and the decision reviewable in the local DC federal court.

Short, simple, clear time constraints and a defined venue for review.

Anytime someone dodges the easy, obvious solution, there is a reason for that dodging. Not a reason or desire to benefit you. Unless you are the person orchestrating the dodge.

February 2, 2016

Bumping into Stallman, again [Stallmanism]

Filed under: Open Source,Software — Patrick Durusau @ 7:42 pm

Bumping into Stallman, again by Frederick Jacobs.

From the post:

This is the second time I’m talking at the same conference as Richard Stallman, after the Ind.ie Tech Summit in Brighton, this time was at the Fri Software Days in Fribourg, Switzerland.

One day before my presentation, I got an email from the organizers, letting me know that Stallman would like me to rename the title of my talk to remove any mentions of “Open Source Software” and replace them with “Free Software”.

The email read like this:

Is it feasible to remove the terms “Open-Source” from the title of your presentation and replace them by “Free-libre software”? It’s the wish of M. Stallman, that will probably attend your talk.

Frederick didn’t change his title or presentation, while at the same time handling the issue much better than I would have.

Well, after I got through laughing my ass off that Stallman would presume to dictate word usage to anyone.

Word usage, for any stallmanists in the crowd, is an empirical question of how many people use a word with a common meaning.

At least if you want to be understood by others.

January 20, 2016

The Semasiology of Open Source [How Do You Define Source?]

Filed under: Meaning,Open Source,Semantics — Patrick Durusau @ 3:08 pm

The Semasiology of Open Source by Robert Lefkowitz (then VP Enterprise Systems & Architecture, AT&T Wireless), 2004. Audio file.

Robert’s keynote from the Open Source Convention (OSCON) 2004 in Portland, Oregon.

From the description:

Semasiology, n. The science of meanings or sense development (of words); the explanation of the development and changes of the meanings of words. Source: Webster’s Revised Unabridged Dictionary, 1996, 1998 MICRA, Inc. “Open source doesn’t just mean access to the source code.” So begins the Open Source Definition. What then, does access to the source code mean? Seen through the lens of an Enterprise user, what does open source mean? When is (or isn’t) it significant? And a catalogue of open source related arbitrage opportunities.

If you haven’t heard this keynote (I hadn’t), do yourself a favor and make time to listen to it.

I do have one complaint: It’s not long enough. 😉

Enjoy!

December 31, 2015

History of Apache Storm and lessons learned

Filed under: Open Source,Storm — Patrick Durusau @ 4:40 pm

History of Apache Storm and lessons learned by Nathan Marz.

From the post:

Apache Storm recently became a top-level project, marking a huge milestone for the project and for me personally. It’s crazy to think that four years ago Storm was nothing more than an idea in my head, and now it’s a thriving project with a large community used by a ton of companies. In this post I want to look back at how Storm got to this point and the lessons I learned along the way.

The topics I will cover through Storm’s history naturally follow whatever key challenges I had to deal with at those points in time. The first 25% of this post is about how Storm was conceived and initially created, so the main topics covered there are the technical issues I had to figure out to enable the project to exist. The rest of the post is about releasing Storm and establishing it as a widely used project with active user and developer communities. The main topics discussed there are marketing, communication, and community development.

Any successful project requires two things:

  1. It solves a useful problem
  2. You are able to convince a significant number of people that your project is the best solution to their problem

What I think many developers fail to understand is that achieving that second condition is as hard and as interesting as building the project itself. I hope this becomes apparent as you read through Storm’s history.

All projects are different but the requirements for success:

  1. It solves a useful problem
  2. You are able to convince a significant number of people that your project is the best solution to their problem

sound universal to me!

To clarify point #2, “people” means “other people.”

Preaching to a mirror or choir isn’t going to lead to success.

Nor will focusing on “your problem” as opposed to “their problem.”

PS: New Year’s Eve advice – Don’t download large files. 😉 Slower than you want to think. Suspect people on my subnet are streaming football games and/or porno videos, perhaps both (screen within screen).

I first saw this in a tweet by Bob DuCharme.

November 17, 2015

Building Software, Building Community: Lessons from the rOpenSci Project

Filed under: Open Source,R,Science — Patrick Durusau @ 6:36 pm

Building Software, Building Community: Lessons from the rOpenSci Project by Carl Boettiger, Scott Chamberlain, Edmund Hart, Karthik Ram.

Abstract:

rOpenSci is a developer collective originally formed in 2011 by graduate students and post-docs from ecology and evolutionary biology to collaborate on building software tools to facilitate a more open and synthetic approach in the face of the transformative rise of large and heterogeneous data. Born on the internet (the collective only began through chance discussions over social media), we have grown into a widely recognized effort that supports an ecosystem of some 45 software packages, engages scores of collaborators, has taught dozens of workshops around the world, and has secured over $480,000 in grant support. As young scientists working in an academic context largely without direct support for our efforts, we have first hand experience with most of the technical and social challenges WSSSPE seeks to address. In this paper we provide an experience report which describes our approach and success in building an effective and diverse community.

Given the state of world affairs, I can’t think of a better time for the publication of this article.

The key lesson that I urge you to draw from this paper is the proactive stance of the project in involving others and reaching out to build a community around the project.

Too many projects (and academic organizations for that matter) take the approach that others know they exist and so they sit waiting for volunteers and members to queue up.

Very often they are surprised and bitter that the queue of volunteers and members is so sparse. If anyone dares to venture that more outreach might be helpful, the response is nearly always, sure, you go do that and let us know when it is successful.

How proactive are you in promoting your favorite project?

PS: The rOpenSci website.

DegDB (Open Source Distributed Graph Database) [Tackling Who Pays For This Data?]

Filed under: Distributed Computing,Graph Databases,Graphs,Open Source — Patrick Durusau @ 4:52 pm

DegDB (Open Source Distributed Graph Database) (GitHub)

The Design Doc/Ramble reads in part:

Problems With Existing Graph Databases

  • Owned by private companies with no incentive to share.
  • Public databases are used by few people with no incentive to contribute.
  • Large databases can’t fit on one machine and are expensive to traverse.
  • Existing distributed graph databases require all nodes to be trusted.

Incentivizing Hosting of Data

Every request will have either a debit (with attached bitcoin) or credit (with bitcoin promised on delivery) payment system. The server nodes will attempt to estimate how much it will cost to serve the data and if there isn’t enough bitcoin attached, will drop the request. This makes large nodes want to serve as much popular data as possible, because it allows for faster responses as well as not having to pay other nodes for their data. At the same time, little used data will cost more to access due to requiring more hops to find the data and “cold storage” servers can inflate the prices thus making it profitable for them.

Incentivizing Creation of Data

Data Creation on Demand

A system for requesting certain data to be curated can be employed. The requestor would place a bid for a certain piece of data to be curated, and after n-sources add the data to the graph and verify its correctness the money would be split between them.
This system could be ripe for abuse by having bots automatically fulfilling every request with random data.

Creators Paid on Usage

This method involves the consumers of the data keeping track of their data sources and upon usage paying them. This is a trust based model and may end up not paying creators anything.

The one “wow” factor of this project is the forethought to put the discussion of “who pays for this data?” up front and center.
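To make that design concrete, here is a toy sketch of the debit-style request handling the design doc describes. Every name and number in it is hypothetical; DegDB specifies the incentive scheme, not this implementation:

    from dataclasses import dataclass

    @dataclass
    class Request:
        key: str             # which piece of graph data is being asked for
        attached_btc: float  # debit payment sent along with the request

    class ServerNode:
        """Serve data only when the attached payment covers the estimated cost."""

        def __init__(self, hot_cache, hop_cost_btc=0.00001):
            self.hot_cache = hot_cache        # popular data held locally
            self.hop_cost_btc = hop_cost_btc  # hypothetical flat fee per hop

        def estimate_cost(self, key):
            # Cheap if we hold the data ourselves; pricier if we would have
            # to pay other nodes to locate it (assumed three hops here).
            hops = 0 if key in self.hot_cache else 3
            return self.hop_cost_btc * (1 + hops)

        def handle(self, req):
            if req.attached_btc < self.estimate_cost(req.key):
                return None  # not enough bitcoin attached: drop the request
            return self.hot_cache.get(req.key, "<fetched from peers>")

    node = ServerNode(hot_cache={"popular:item": "some graph data"})
    print(node.handle(Request("popular:item", attached_btc=0.0001)))   # served
    print(node.handle(Request("obscure:item", attached_btc=0.00001)))  # dropped

Note how the economics fall out of a few lines: hot data is cheap to serve, cold data costs more, and unpaid requests simply disappear.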

We have all seen the failing model that starts with:

For only $35.00 (U.S.) you can view this article for 24 hours.

That makes you feel like you are almost robbing the publisher at that price. (NOT!)

Right. I’m tracking down a citation to make sure a quote or data is correct and I am going to pay $35.00 (U.S.) to have access for 24 hours. Considering that the publishers with those pricing models have already made back their costs of production and publication plus a profit from institutional subscribers (challenge them for the evidence if they deny), a very low micro-payment would be more suitable. Say $0.01 per paragraph or something on that order. Payable out of a deposit with the publisher.

I would amend the Creators Paid on Usage section to have created content unlocked only upon payment (set by the creator). Over time, creators would develop reputations for the value of their data and if you choose to buy from a private seller with no online history, that’s just your bad.

Imagine that for the Paris incident (hypothetical, none of the following is true), I had the school records for half of the people carrying out that attack. Not only do I have the originals but I also have them translated into English, assuming some or all of them are in some other language. I could cast that data (I’m not fond of the poverty of triples) into a graph format and make it known as part of a distributed graph system.

Some of the data, such as the identities of the people for whom I had records, would appear in the graphs of others as “new” data. Up to the readers of the graph to decide if the data and the conditions for seeing it are acceptable to them.

Data could even carry a public price tag. That is if you want to pay a large enough sum, then the data in question will be opened up for everyone to have access to it.

I don’t know of any micropayment systems that are eating at the foundations of traditional publishers now but there will be many attempts before one eviscerates them one and all.

The choices we face now, “free” (read: unpaid-for research, writing and publication, which excludes many) versus the “pay-per-view” model that supports early 20th century models of sloth, cronyism and gate-keeping, aren’t the only ones. We need to actively seek out better and more nuanced choices.

November 13, 2015

Microsoft open sources Distributed Machine Learning Toolkit…

Filed under: Distributed Computing,Machine Learning,Microsoft,Open Source — Patrick Durusau @ 2:12 pm

Microsoft open sources Distributed Machine Learning Toolkit for more efficient big data research by George Thomas Jr.

From the post:

Researchers at the Microsoft Asia research lab this week made the Microsoft Distributed Machine Learning Toolkit openly available to the developer community.

The toolkit, available now on GitHub, is designed for distributed machine learning — using multiple computers in parallel to solve a complex problem. It contains a parameter server-based programing framework, which makes machine learning tasks on big data highly scalable, efficient and flexible. It also contains two distributed machine learning algorithms, which can be used to train the fastest and largest topic model and the largest word-embedding model in the world.

The toolkit offers rich and easy-to-use APIs to reduce the barrier of distributed machine learning, so researchers and developers can focus on core machine learning tasks like data, model and training.

The toolkit is unique because its features transcend system innovations by also offering machine learning advances, the researchers said. With the toolkit, the researchers said developers can tackle big-data, big-model machine learning problems much faster and with smaller clusters of computers than previously required.

For example, using the toolkit one can train a topic model with one million topics and a 20-million word vocabulary, or a word-embedding model with 1000 dimensions and a 20-million word vocabulary, on a web document collection with 200 billion tokens utilizing a cluster of just 24 machines. That workload would previously have required thousands of machines.

This has been a banner week for machine learning!

On November 9th, Google open sourced TensorFlow.

On November 12th, Single Artificial Neuron Taught to Recognize Hundreds of Patterns (why neurons have thousands of synapses) is published.

On November 12th, Microsoft open sources its Distributed Machine Learning Toolkit.

Not every week is like that for machine learning but it is impressive when that many major stories drop in a week!

I do like the line from the Microsoft announcement:

For example, using the toolkit one can train a topic model with one million topics and a 20-million word vocabulary, or a word-embedding model with 1000 dimensions and a 20-million word vocabulary, on a web document collection with 200 billion tokens utilizing a cluster of just 24 machines. (emphasis added)

Prices are falling all the time and a 24-machine cluster should be within the reach of most startups if not most individuals now. Next year? Possibly within the reach of a large number of individuals.

What are your machine learning plans for 2016?

More DMTK information.

November 12, 2015

Quartz to open source two mapping tools

Filed under: Journalism,Mapping,Maps,Open Source,Reporting — Patrick Durusau @ 3:53 pm

Quartz to open source two mapping tools by Caroline Scott.

From the post:

News outlet Quartz is developing a searchable database of compiled map data from all over the world, and a tool to help journalists visualise this data.

The database, called Mapquery, received $35,000 (£22,900) from the Knight Foundation Prototype Fund on 3 November.

Keith Collins, project lead, said Mapquery will aim to make the research stage in the creation of maps easier and more accessible, by creating a system for finding, merging and refining geographic data.

Mapquery will not be able to produce visual maps itself, as it simply provides a database of information from which maps can be created – so Quartz will also open source Mapbuilder as the “front end” that will enable journalists to visualise the data.

Quartz aims to have a prototype of Mapquery by April, and will continue to develop Mapbuilder afterwards.

That’s news to look forward to in 2016!

I’m real curious where Quartz is going to draw the boundary around “map data.” The post mentions Mapquery including “historical boundary data,” which would be very useful for some stories but is still traditional “map data.”

What if Mapquery could integrate people who have posted images with geographic locations? So a reporter could quickly access a list of potential witnesses for events the Western media doesn’t cover?

Live feeds of the results of US bombing raids against ISIS, for example. (Whether the media doesn’t cover such events out of deference to the US military propaganda machine or for other reasons, I can’t say.)

Looking forward to more news on Mapquery and Mapbuilder!

I first saw this in a tweet by Journalism Tools.

October 7, 2015

Treasure Trove of R Scripts…

Filed under: Open Source,R — Patrick Durusau @ 8:30 pm

Treasure Trove of R Scripts for Auto Classification, Chart Generation, Solr, Mongo, MySQL and Ton More by Jitender Aswani.

From the post:

In this repository hosted at github, the datadolph.in team is sharing all of the R codebase that it developed to analyze large quantities of data.

datadolph.in team has benefited tremendously from fellow R bloggers and other open source communities and is proud to contribute all of its codebase into the community.

The codebase includes ETL and integration scripts on –

  • R-Solr Integration
  • R-Mongo Interaction
  • R-MySQL Interaction
  • Fetching, cleansing and transforming data
  • Classification (identify column types)
  • Default chart generation (based on simple heuristics and matching a dimension with a measure)

Github Source: https://github.com/datadolphyn/R
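The scripts themselves are in R. As a language-neutral illustration of the last two items, here is a rough Python sketch of a column-classification and default-chart heuristic. To be clear, this is my guess at the general shape, not datadolph.in’s actual logic:

```python
def classify_columns(rows):
    """Label each column as a 'measure' (mostly numeric) or a
    'dimension' (categorical) -- a rough heuristic only."""
    types = {}
    for col in rows[0]:
        values = [r[col] for r in rows if r[col] is not None]
        numeric = sum(isinstance(v, (int, float)) for v in values)
        types[col] = "measure" if numeric / len(values) > 0.9 else "dimension"
    return types

data = [
    {"region": "east", "sales": 120.5},
    {"region": "west", "sales": 98.0},
]
types = classify_columns(data)
print(types)  # {'region': 'dimension', 'sales': 'measure'}

# Default chart heuristic: one dimension + one measure -> bar chart.
dims = [c for c, t in types.items() if t == "dimension"]
meas = [c for c, t in types.items() if t == "measure"]
if len(dims) == 1 and len(meas) == 1:
    print(f"suggested default: bar chart of {meas[0]} by {dims[0]}")
```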

I count twenty-two (22) R scripts in this generous donation back to the R community!

Enjoy!

September 14, 2015

Getting started with open source machine learning

Filed under: Machine Learning,Open Source — Patrick Durusau @ 8:45 pm

Getting started with open source machine learning by Grant Ingersoll.

From the post:

Despite all the flashy headlines from Musk and Hawking on the impending doom to be visited on us mere mortals by killer robots from the skies, machine learning and artificial intelligence are here to stay. More importantly, machine learning (ML) is quickly becoming a critical skill for developers to enhance their applications and their careers, better understand data, and to help users be more effective.

What is machine learning? It is the use of both historical and current data to make predictions, organize content, and learn patterns about data without being explicitly programmed to do so. This is typically done using statistical techniques that look for significant events like co-occurrences and anomalies in the data and then factoring in their likelihood into a model that is queried at a later time to provide a prediction for some new piece of data.

Common machine learning tasks include classification (applying labels to items), clustering (grouping items automatically), and topic detection. It is also commonly used in natural language processing. Machine learning is increasingly being used in a wide variety of use cases, including content recommendation, fraud detection, image analysis and ecommerce. It is useful across many industries and most popular programming languages have at least one open source library implementing common ML techniques.

Reflecting the broader push in software towards open source, there are now many vibrant machine learning projects available to experiment with as well as a plethora of books, articles, tutorials, and videos to get you up to speed. Let’s look at a few projects leading the way in open source machine learning and a few primers on related ML terminology and techniques.

Grant rounds up a starting list of primers and projects if you need an introduction to machine learning.
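If you want to try the classification task Grant describes before working through the primers, a toy example takes about ten lines. I’m using scikit-learn here because it is a widely used open source ML library; it may or may not be on Grant’s list:

```python
# Toy text classification: label short news snippets by topic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = [
    "stocks fell sharply today",
    "the team won the championship",
    "markets rally on strong earnings",
    "coach praises star player",
]
labels = ["business", "sports", "business", "sports"]

# TF-IDF features feeding a naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(docs, labels)

print(model.predict(["quarterly earnings beat estimates"]))  # ['business']
```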

Enjoy!

September 8, 2015

Leaping the chasm from proprietary to open: …

Filed under: Open Source — Patrick Durusau @ 4:29 pm

Leaping the chasm from proprietary to open: a survivor’s guide by Bryan Cantrill.

Slides: http://www.slideshare.net/bcantrill/leaping-the-chasm-from-proprietary-to-open-a-survivors-guide.

Full illumos history mentioned in talk: https://www.youtube.com/watch?v=-zRN7XLCRhc

Corporate open source anti-patterns: https://www.youtube.com/watch?v=Pm8P4oCIY3g

A very high-energy presentation, starting with the early history of software. Great coverage of the history of Solaris.

My favorite quip:

Everything you think about Oracle is true; it’s actually truer than you think it could be.

You will greatly enjoy the disclaimer story.

Wrong as a matter of natural law: the “…assertion that APIs can be copyrighted!”

Open source projects by Joyent:

SmartDataCenter: https://github.com/joyent/sdc

Manta: https://github.com/joyent/manta

My takeaway? Despite all the amusing stories and tales, I would have to pick “use a weak copyleft license.”

August 5, 2015

Who is in Charge of Android Security?

Filed under: Cybersecurity,IoT - Internet of Things,Open Source,Security — Patrick Durusau @ 2:20 pm

Just the other day I posted Targeting 950 Million Android Phones – Open Source Security Checks?. Today my email had a link to: Nearly 90 percent of Android devices vulnerable to endless reboot bug by Allen Greenberg.

Allen points to: Android MediaServer Bug Traps Phones in Endless Reboots by Wish Wu, which reads in part:

We have discovered a new vulnerability that allows attackers to perform denial of service (DoS) attacks on Android’s mediaserver program. This causes a device’s system to reboot and drain all its battery life. In a more severe case, where a related malicious app is set to auto-start, the device can be trapped in an endless reboot and rendered unusable.

The vulnerability, CVE-2015-3823, affects Android versions 4.0.1 Jelly Bean to 5.1.1 Lollipop. Around 89% of the Android users (roughly 9 in 10 Android devices active as of June 2015) are affected. However, we have yet to discover active attacks in the wild that exploit this vulnerability.

This discovery comes hot on the heels of two other major vulnerabilities in Android’s media server component that surfaced last week. One can render devices silent while the other, Stagefright, can be used to install malware through a multimedia message.

Wow! Three critical security bugs in Android in a matter of weeks.

Which makes me ask the question: Who (the hell) is in Charge of Android Security?

Let’s drop the usual open source answer to complaints about the software: “…well, if you have an issue with the software you should contribute a patch…” and wise up to the fact that commercial entities are making money off the Android “open source” project.

People can and should contribute to open source projects but at the same time, commercial vendors should not foist avoidance of security bugs off onto the public.

Commercial vendors are already foisting security bugs off on the public because so far, though probably not for much longer, they have avoided liability for them. They simply don’t invest in the coding practices that would avoid the security bugs that are so damaging to enterprises and individuals alike.

The same was once true in the history of products liability. It is a very complex area of law, it is developing rapidly, and someday soon the standard EULA will fall, leaving no safety net under software vendors.

There are obvious damages from security bugs, and there are vendors who could have avoided those security bugs in the first place. It is only a matter of time before courts notice that the same bugs (usually unchecked input) are causing damages over and over again, and that checking input avoids the bug in the majority of cases.
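The “unchecked input” pattern is concrete enough to sketch. Here is a hypothetical length-prefixed message parser, first in the bug-prone shape and then with the check that avoids it. (In a memory-unsafe language, the first version is a heap overflow waiting to happen; Stagefright-class bugs follow exactly that shape.)

```python
def read_chunk_unchecked(buf: bytes) -> bytes:
    # Trusts an attacker-controlled length field: the classic bug shape.
    length = int.from_bytes(buf[:4], "big")
    return buf[4:4 + length]  # in C, this copy would be the heap overflow

def read_chunk_checked(buf: bytes) -> bytes:
    # Validates the length field against the bytes actually received.
    if len(buf) < 4:
        raise ValueError("header truncated")
    length = int.from_bytes(buf[:4], "big")
    if length > len(buf) - 4:
        raise ValueError(f"declared length {length} exceeds payload size")
    return buf[4:4 + length]
```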

Who can choose to check input or not? That’s right, the defendant with the deep pockets, the software vendor.

Who is in charge of security for your software?

PS: I mentioned the other day that the CVE database is available for download. That would be the starting point for developing a factual basis for known/avoidable bug analysis in support of software liability. I suspect that analysis has already been done and I am simply unaware of it. Suggestions?
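As a starting point, here is a sketch of tallying weakness categories (CWEs) across a downloaded CVE data feed. The file and field names follow the NVD 1.1 JSON layout as best I recall it, so treat them as assumptions and verify against the actual schema before relying on this:

```python
# Count CWE (weakness) categories in an NVD/CVE JSON feed -- a first
# cut at the "known/avoidable bug" analysis suggested above.
# Assumed file name and field names (NVD 1.1 JSON layout); verify
# against the real schema before use.
import json
from collections import Counter

with open("nvdcve-1.1-2015.json") as f:
    feed = json.load(f)

cwes = Counter()
for item in feed.get("CVE_Items", []):
    for ptype in item["cve"]["problemtype"]["problemtype_data"]:
        for desc in ptype["description"]:
            cwes[desc["value"]] += 1

# Input-handling weaknesses (e.g. CWE-20, CWE-119) tend to rank high.
for cwe, count in cwes.most_common(10):
    print(cwe, count)
```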

August 3, 2015

Targeting 950 Million Android Phones – Open Source Security Checks?

Filed under: Cybersecurity,Open Source — Patrick Durusau @ 7:34 am

How to Hack Millions of Android Phones Using Stagefright Bug, Without Sending MMS by Swati Khandelwal.

From the post:

Earlier this week, security researchers at Zimperium revealed a high-severity vulnerability in Android platforms that allowed a single multimedia text message to hack 950 Million Android smartphones and tablets.

As explained in our previous article, the critical flaw resides in a core Android component called “Stagefright,” a native Android media playback library used by Android to process, record and play multimedia files.

To Exploit Stagefright vulnerability, which is actively being exploited in the wild, all an attacker needed is your phone number to send a malicious MMS message and compromise your Android device with no action, no indication required from your side.

Security researchers from Trend Micro have discovered two new attack scenarios that could trigger Stagefright vulnerability without sending malicious multimedia messages:

  • Trigger Exploit from Android Application
  • Crafted HTML exploit to Target visitors of a Webpage on the Internet

These two new Stagefright attack vectors carry more serious security implications than the previous one, as an attacker could exploit the bug remotely to:

  • Hack millions of Android devices, without knowing their phone numbers and spending a penny.
  • Steal Massive Amount of data.
  • Built a botnet network of Hacked Android Devices, etc.

“The specially crafted MP4 file will cause mediaserver’s heap to be destroyed or exploited,” the researchers explained, describing how an application could be used to trigger the Stagefright attack.

Swati has video demonstrations of both of the new attack vectors and covers defensive measures for users.

Does the presence of such a bug in software from Google, which has access to almost unlimited programming talent (and, to hear it tell it, the best programming talent in the business), make you curious about security for the Internet of Things (IoT)?

Or has Google been practicing “good enough” software development and cutting corners on testing for bugs and security flaws?

Now that I think about it, Android is an open source project and as we all know, given enough eyeballs, all bugs are shallow (Linus’s Law).

Hmmm, perhaps there aren’t enough eyes or eyes with a view towards security issues reviewing the Android codebase?

Is it the case that Google is implicitly relying on the community to discover subtle security issues in Android software?

Or to ask a more general question: Who is responsible for security checks on open source software? If everyone is responsible, I take that to mean no one is responsible.

July 15, 2015

Blue Light Special: Windows Server 2003

Filed under: Cybersecurity,Open Source,Security — Patrick Durusau @ 10:18 am

“Blue light special” is nearly a synonym for Kmart. If you search for “blue light special” at Wikipedia, you will be redirected to the entry for Kmart.

A “blue light special” consisted of a blue police light being turned on and a Kmart employee announcing the special to all shoppers in the store.

As of Tuesday, July 14, 2015, there are now blue light specials on Windows Server 2003. Well, sans the blue police light and the Kmart employee. But hackers will learn of vulnerabilities in Windows Server 2003 and there will be no patches to close off those opportunities.

The last patches for Windows Server 2003 were issued on Tuesday and are described at: Microsoft releases 14 bulletins on Patch Tuesday, ends Windows Server 2003 support.

You can purchase special support contracts from Microsoft but, as the experience of the US Navy has shown, that can be an expensive proposition ($9.1 million per year).

That may sound like a lot of income, and it is to a small or medium company, but remember that $9.1 million is only about 0.01% of the $86.8 billion in revenue Microsoft reported in its 2014 Annual Report.

I don’t know who to ask at Microsoft, but they should consider making Windows XP, Windows Server 2003, etc. into open source projects.

Some 61% of businesses are reported to still be using Windows Server 2003. Support beyond the end of life for Windows Server 2003 will be $600 per server for the first year, with higher fees to follow.

Although open sourcing Windows Server 2003 might cut into some of the maintenance contract income, it would greatly increase the pressure on businesses to migrate off of Windows Server 2003 as hackers gain first-hand access to this now-ancient code base.

In some ways, open sourcing Windows XP and Windows Server 2003 could be a blue light special that benefits all shoppers.

Microsoft obtains the obvious benefits: initially, greater demand for formal support contracts; in the long run, decreasing costs of maintaining ancient code bases, plus new income from migrations.

People concerned with the security, or lack thereof, of ancient systems gain first-hand knowledge of those systems and of bugs to avoid in the future.

IT departments benefit from stronger grounds to argue that long-delayed migrations must be undertaken, lest they face the coming tide of zero-day vulnerabilities enabled by source code access.

Users benefit in the long run from the migration to modern computing architectures and their features: a jump comparable to going from a transistor radio to a smartphone.

