Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

August 9, 2015

User-generated content can traumatize journalists… [And the Problem Is?]

Filed under: Journalism,News,Reporting — Patrick Durusau @ 4:38 pm

User-generated content can traumatize journalists who work with it — a new project aims to help by Laura Hazard Owen.

From the post:

Journalists and human rights workers who work with troubling user-generated content as part of their jobs may experience vicarious trauma as a result of handling distressing content. A new research project aims to help by surveying and interviewing such workers and developing a set of best practices for news and humanitarian organizations.

Nonprofit think-tank Eyewitness Media Hub is running the project with backing from the Open Society Foundation. EMH was founded in 2014 by former Tow fellows Sam Dubberley, Pete Brown, and Claire Wardle, who had previously researched how broadcasters use user-generated content (UGC) in their news output, along with Jenni Sargent.

“A lot of research so far has [questioned whether] vicarious trauma is something that exists,” said Dubberley. “We’re starting from the premise that it does exist, and would like to understand what organizations are doing about it, and how people who are using it on a day-to-day basis feel about it.” As head of the Eurovision News Exchange, he said, “I had a team of 20 journalists sourcing content from Syria and the Arab Spring through YouTube and saw them being impacted by it, quite seriously.”

I am quite mystified as to why news content traumatizing journalists or members of the general public is considered a problem.

If we could vicariously experience the horrors that are financed by the United States government and others, literally be retching from fear of the next newspaper, radio broadcast or cable news broadcast, wouldn’t that be a good thing?

If anything, reporters need to take the gloves off and record death rattles, people screaming in agony, calling for death, while identifying the forces that were responsible.

One wonders how many heroes’ parades would happen if the streets were lined with photos of their victims, pulsing with audio of the last moments of their lives.

How proud would you feel to have butchered innocent women and children in service of your country?

Let’s bring the reality of war back to the evening dinner table. It helped end Viet-Nam. Perhaps it could help end some of the current cycle of madness.

Desperately Seeking a Regex

Filed under: Programming,Regex — Patrick Durusau @ 11:00 am

A JavaScript regex to match a regex was posted to Stack Overflow by Mike Samuel:

/\/((?![*+?])(?:[^\r\n\[/\\]|\\.|\[(?:[^\r\n\]\\]|\\.)*\])+)\/((?:g(?:im?|m)?|i(?:gm?|m)?|m(?:gi?|i)?)?)/

Along with a nifty explanation and caveats about its use.

Other candidates?

Thinking this could be very useful for mining regexes out of discussion groups, etc.
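
Here is a rough sketch of that mining step in Python, using a translation of Samuel’s pattern (the /…/ delimiters and the JavaScript flag suffix become ordinary groups). It is untested against every edge case, so treat it as a starting point rather than a guarantee:

import re

# Python translation of Mike Samuel's JavaScript regex-matching regex:
# group 1 captures the pattern body, group 2 the g/i/m flag suffix.
REGEX_LITERAL = re.compile(
    r"/((?![*+?])(?:[^\r\n\[/\\]|\\.|\[(?:[^\r\n\]\\]|\\.)*\])+)"
    r"/((?:g(?:im?|m)?|i(?:gm?|m)?|m(?:gi?|i)?)?)"
)

post = r"Try /[a-z]+\d/gi or maybe /^\s*#/ for comments."
for match in REGEX_LITERAL.finditer(post):
    body, flags = match.group(1), match.group(2)
    print(f"pattern={body!r} flags={flags!r}")

Run over an archived mailing list, the same loop plus a collections.Counter would yield a frequency list of the regexes a community actually trades.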

Machine Learning and Human Bias: An Uneasy Pair

Filed under: Bias,Machine Learning — Patrick Durusau @ 10:45 am

Machine Learning and Human Bias: An Uneasy Pair by Jason Baldridge.

From the post:

“We’re watching you.” This was the warning that the Chicago Police Department gave to more than 400 people on its “Heat List.” The list, an attempt to identify the people most likely to commit violent crime in the city, was created with a predictive algorithm that focused on factors including, per the Chicago Tribune, “his or her acquaintances and their arrest histories – and whether any of those associates have been shot in the past.”

Algorithms like this obviously raise some uncomfortable questions. Who is on this list and why? Does it take race, gender, education and other personal factors into account? When the prison population of America is overwhelmingly Black and Latino males, would an algorithm based on relationships disproportionately target young men of color?

There are many reasons why such algorithms are of interest, but the rewards are inseparable from the risks. Humans are biased, and the biases we encode into machines are then scaled and automated. This is not inherently bad (or good), but it raises the question: how do we operate in a world increasingly consumed with “personal analytics” that can predict race, religion, gender, age, sexual orientation, health status and much more.

Jason’s post is a refreshing step back from the usual “machine learning isn’t biased like people are” sort of stance.

Of course machine learning is biased, always biased. The algorithms are biased themselves, to say nothing of the programmers who inexactly convert those algorithms into code. It would not be much of an algorithm if it could not vary its results based on its inputs. That’s discrimination no matter how you look at it.

The difference is that discrimination is acceptable in some cases and not in others. One imagines that only women are eligible for birth control pill prescriptions. That’s a reasonable discrimination. Other bases for discrimination, not so much.

And machine learning is further biased by the data we choose to input to the already biased implementation of a biased algorithm.
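
To make the point about data concrete, here is a minimal sketch, with all numbers hypothetical, of how sampling bias alone, before any algorithmic cleverness, skews a “predictive” model of the Heat List variety:

import random

random.seed(42)

TRUE_RATE = 0.05                           # identical underlying offense rate
OBSERVATION_RATE = {"A": 0.25, "B": 0.50}  # group B is policed twice as heavily

population = [(group, random.random() < TRUE_RATE)
              for group in ("A", "B") for _ in range(100000)]

# The "training data" contains only observed (arrested) offenses.
arrests = {"A": 0, "B": 0}
for group, offended in population:
    if offended and random.random() < OBSERVATION_RATE[group]:
        arrests[group] += 1

# A naive model scores risk by per-group arrest frequency.
for group in ("A", "B"):
    print(group, arrests[group] / 100000)

Group B comes out at roughly twice group A’s “risk” even though the true offense rates were equal by construction.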

That isn’t a knock on machine learning but a caveat: when confronted with a machine learning result, look behind the result to the data, the implementation of the algorithm, and the algorithm itself before taking serious action based on the result.

Of course, the first question I would ask is: “Why is this person showing me this result and what do they expect me to do based on it?”

That they are trying to help me on my path to becoming self-actualized isn’t my first reaction.

Yours?

Birth of Music Visualization (Apr, 1924)

Filed under: Music,Visualization — Patrick Durusau @ 10:25 am

Birth of Music Visualization (Apr, 1924)

The date’s correct. Article in Popular Mechanics, April 1924.

From the article:

The clavilux has three manuals and a triple light chamber, corresponding respectively to the keyboard and wind chest of the pipe organ. Disk keys appear on the manual, moving to and from the operator and playing color and form almost as the pipe organ plays sound.

There are 100 positions for each key, making possible almost infinite combinations of color and form. The “music,” or notation, is printed in figures upon a five-lined staff, three staves joined, as treble and bass clefs are joined for piano, to provide a “clef” for each of the three manuals. A color chord is represented by three figures as, for example, “40-35-60”; and movement of the prescribed keys to the designated positions on the numbered scale of the keyboard produces the desired figure.

The artist sits at the keyboard with the notation book before him. He releases the light by means of switches. By playing upon the keys he projects it upon the screen, molds it into form, makes the form move and change in rhythm, introduces texture and depth, and finally injects color of perfect purity in any degree of intensity.

When you have the time, check out the archives of Popular Mechanics and Popular Electronics for that matter at Google Books.

I don’t know if a topic map of the “hands-on” projects from those zines would have a market or not. The zines covering that sort of thing have died, or at least that is my impression.

Modern equivalents to Popular Mechanics/Electronics that you can point out?

August 8, 2015

Your Fingers Are Safer Now – Biometric Security Failure

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:18 pm

Hackers Can Remotely Steal Fingerprints From Android Phones by Wang Wei.

From the post:

FireEye researchers Tao Wei and Yulong Zhang presented their research in a talk titled, Fingerprints on Mobile Devices: Abusing and Leaking, at the Black Hat conference in Las Vegas on Wednesday, where they outlined new ways to attack Android devices in an effort to extract user fingerprints.

The new attack is limited mostly to Android devices with Fingerprint Sensors that helps the user to authenticate their identity by simply touching their phone’s screen, instead of by entering a passcode.

Researchers confirmed the attack on the HTC One Max and Samsung’s Galaxy S5, which allowed them to stealthily obtain a fingerprint image from the device because vendors don’t lock down fingerprint sensors well enough.

The attack affects mobile phones by major manufacturers including handsets delivered by Samsung, HTC, and Huawei.

For some reason the link to the paper isn’t working so try:

Fingerprints on Mobile Devices: Abusing and Leaking.

Competent hackers don’t have to cut off your finger(s) to bypass fingerprint-based biometric security precautions.

At least on cellphones. Other devices in the works.

Does that make you (and your fingers) feel more or less secure?

Perspectives on Terrorism: Special Issue on the Islamic State

Filed under: Government,Politics — Patrick Durusau @ 10:57 am

Perspectives on Terrorism: Special Issue on the Islamic State

From the introduction to this issue:

We are pleased to announce the release of Volume IX, Issue 4 (August 2015) of Perspectives on Terrorism at www.terrorismanalysts.com. Our free online journal is a joint publication of the Terrorism Research Initiative (TRI), headquartered in Vienna (Austria), and the Center for Terrorism and Security Studies (CTSS), at the Lowell Campus of the University of Massachusetts (United States).

Now in its ninth year, Perspectives on Terrorism has over 5,200 regular subscribers and many more occasional readers and visitors worldwide. The Articles of its six annual issues are fully peer-reviewed by external referees while its Policy Briefs and other content are subject to internal editorial quality control.

This special double issue is devoted entirely to the so-called Islamic State (IS), presenting 14 research articles on various aspects of the organization, in addition to an extensive, specially compiled bibliography on IS. The articles are products of a conference on IS held in Oslo on 11-12 June 2015. The conference was organized by the Norwegian Defence Research Establishment (FFI) and funded by the Norwegian Ministry of Foreign Affairs, and it brought together leading specialists on IS, jihadism, and civil war along with senior policymakers and government analysts from several countries.

The motivation for the conference – and for this special issue – was that our understanding of IS is lagging behind the group’s battlefield advances. After a wave of studies on al-Qaida in Iraq in the mid-2000s, the academic community largely dropped the ball on the group’s later incarnations ISI and ISIS until it burst onto the global stage last summer with the capture of Mosul. The past year has seen a substantial intellectual catch-up effort, not unlike that mounted for al-Qaida in the early 2000s, but we still have a long way to go.

The articles cover a broad range of topics and questions pertaining to IS as an organization. All of the articles were completed in July 2015 and are therefore unusually up-to-date as far as academic publishing goes….

To give you a sense of the tone you will find here, Charles Lister writes in A Long Way from Success: Assessing the War on the Islamic State:


The clear and present threat posed by IS justifies, and indeed demands a counter-reaction by international states and the local governments who directly face IS on the battlefield. After nine months of coalition operations, a series of tactical-level victories have been won against IS in parts of Iraq and northeastern Syria, but these do not yet appear to amount to anything close to strategic progress in genuinely degrading and destroying IS as an organization. In fact, some facets of the strategies adopted may even prove counterproductive in the long-term.

I suppose the use of “terrorism” in the title of the journal is something of a giveaway on its agenda.

What would be exceptionally useful for the Information Technology community is a map of private and public funding for anti-terrorism programs, along with a short summary of which groups are designated as terrorists by each funding source.

Queries along the lines of: Do you think Tibetan monks are terrorists? If you answer yes, you are directed to the Chinese government. That sort of thing.
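
A toy sketch of what such a map might look like as a data structure; every name below is an invented placeholder, not real data:

# Hypothetical funding-source -> designated-groups map; all names invented.
designations = {
    "Funding Source X": {"Group Alpha", "Group Beta"},
    "Funding Source Y": {"Group Beta", "Group Gamma"},
}

def who_designates(group):
    """Return every funding source that designates `group` as terrorist."""
    return sorted(source for source, groups in designations.items()
                  if group in groups)

print(who_designates("Group Beta"))  # -> ['Funding Source X', 'Funding Source Y']

The Tibetan monks query above is exactly this reverse lookup: from a designated group back to whoever funds the designation.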

Pointers?

August 7, 2015

Bug Compensation Inches Towards Inadequate

Filed under: Cybersecurity — Patrick Durusau @ 4:49 pm

According to SC Magazine, Microsoft has expanded its compensation for bugs.

From Microsoft doubles bug bounty payoff max, expands program:

Microsoft said Wednesday it would further expand its Bounty for Defense program, upping the payout maximum from $50,000 to $100,000 and launching a bonus period for its Online Services Bug Bounty during which bounties will be doubled, meaning researchers can receive as much as $30,000 for discovering authentication vulnerabilities, according to a release.

There are quants or at least semi-quants starting with six-figure incomes in finance. So saying $30,000, even with a drum roll, isn’t all that impressive. Think of all the years of effort to master an arcane set of skills in order to find vulnerabilities. Not to mention that security researchers have to support themselves between bug finds.

Just because you don’t know how to find bugs doesn’t mean it is easy, or that just anyone is qualified to do it.

Kudos to Microsoft for inching towards inadequate, but after all, bug hunters may prevent damage to millions of your customers, unlike the quants, who are trying to shave something off of your customers.

Want better bug hunting and cybersecurity?

Bug hunting and secure code must become more financially viable than bugs and insecure code.

When that happens, the security research community will blossom, but not a day before.

August 6, 2015

Are You In The Enron Dataset?

Filed under: Privacy — Patrick Durusau @ 4:11 pm

I am still laboring, along with Sam Hunting, to put the final touches on our Balisage presentation: Spreadsheets – 90+ million End User Programmers With No Comment Tracking or Version Control.

Before you ask, yes, yes it does use topic maps to address the semantic darkness that is spreadsheets. 😉

The reason for this post was that I ran across a spreadsheet today that listed both public and private phone numbers for a large number of oil & gas types. Too old now to be much of a bother, but an FYI: before releasing big data sets, check them for private phone numbers as well as SSNs.
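
A quick, hedged sketch of that sort of pre-release check in Python; the patterns are deliberately loose (expect false positives) and the file name is hypothetical:

import csv
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
PHONE = re.compile(r"\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b")

def scan(path):
    """Flag rows of a CSV dump that look like they contain SSNs or phone numbers."""
    with open(path, newline="") as f:
        for rownum, row in enumerate(csv.reader(f), start=1):
            for cell in row:
                if SSN.search(cell) or PHONE.search(cell):
                    print(f"{path} row {rownum}: possible PII: {cell!r}")

scan("dataset.csv")  # hypothetical file name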

BTW, I was rather amazed at the large number of things that “spreadsheets” are used for in practice: documentation, unnamed content, letters, and other material. Auto-processing would create nearly as many problems as it would solve. Does it never occur to anyone to use a word processor?

Who is in Charge of Android Security? – Update

Filed under: Cybersecurity,Security — Patrick Durusau @ 9:49 am

A big shout out to Graham Cluley, whose coverage of the Android security issues and the lack of distribution for security patches has been in part responsible for Google stepping up on the Stagefright exploit.

In Big news. Google patching millions of Android devices against Stagefright exploit, Graham gives an overview of what Google is doing not just to issue the patch but to also see that it is distributed effectively.

If you are an Android user and do avoid the Stagefright exploit, you have Graham Cluley to thank as one of the voices calling for effective distribution of the patch.

You have to do your part as well but effective notice and distribution of patches is one of the keys to improving (not solving) cybersecurity problems.

If you know anyone in government who isn’t still breathless from the 30-day cybersecurity sprint, you should pass the idea for effective notice and distribution of patches along. Application of patches is critical too but don’t overload them. Wait for the next lull in cyberstupidity to suggest that.

August 5, 2015

Who is in Charge of Android Security?

Filed under: Cybersecurity,IoT - Internet of Things,Open Source,Security — Patrick Durusau @ 2:20 pm

Just the other day I posted Targeting 950 Million Android Phones – Open Source Security Checks? Today my email had a link to: Nearly 90 percent of Android devices vulnerable to endless reboot bug by Allen Greenberg.

Allen points to: Android MediaServer Bug Traps Phones in Endless Reboots by Wish Wu, which reads in part:

We have discovered a new vulnerability that allows attackers to perform denial of service (DoS) attacks on Android’s mediaserver program. This causes a device’s system to reboot and drain all its battery life. In a more severe case, where a related malicious app is set to auto-start, the device can be trapped in an endless reboot and rendered unusable.

The vulnerability, CVE-2015-3823, affects Android versions 4.0.1 Jelly Bean to 5.1.1 Lollipop. Around 89% of the Android users (roughly 9 in 10 Android devices active as of June 2015) are affected. However, we have yet to discover active attacks in the wild that exploit this vulnerability.

This discovery comes hot on the heels of two other major vulnerabilities in Android’s media server component that surfaced last week. One can render devices silent while the other, Stagefright, can be used to install malware through a multimedia message.

Wow! Three critical security bugs in Android in a matter of weeks.

Which makes me ask the question: Who (the hell) is in Charge of Android Security?

Let’s drop the usual open source answer to complaints about the software: “…well, if you have an issue with the software you should contribute a patch…” and wise up to the fact that commercial entities are making money off the Android “open source” project.

People can and should contribute to open source projects but at the same time, commercial vendors should not foist avoidance of security bugs off onto the public.

Commercial vendors are already foisting security bugs off on the public because so far, though not for very much longer, they have avoided liability for them. They simply don’t invest in the coding practices that would avoid the security bugs that are so damaging to enterprises and individuals alike.

The same was true in the history of products liability. It is a very complex area of law that is developing rapidly; someday soon the standard EULA will fall and there will be no safety net under software vendors.

There are obvious damages from security bugs and there are vendors who could have avoided the security bugs in the first place. It is only a matter of time before courts discover that the same bugs (usually unchecked input) are causing damages over and over again and that checking input avoids the bug in the majority of cases.

Who can choose to check input or not? That’s right, the defendant with the deep pockets, the software vendor.

Who is in charge of security for your software?

PS: I mentioned the other day that the CVE database is available for download. That would be the starting point for developing a factual basis for known/avoidable bug analysis for software liability. A first cut might look like the sketch below. I suspect this has already been done and I am simply unaware of it. Suggestions?
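
A hedged first cut in Python against MITRE’s downloadable CVE list. The file name (allitems.csv) and the keyword buckets are assumptions; adjust both to whatever download you actually fetch:

import csv
from collections import Counter

# Crude keyword buckets standing in for real CWE classification.
BUCKETS = {
    "unchecked input": ("buffer overflow", "sql injection", "format string",
                        "cross-site scripting", "integer overflow"),
}

counts = Counter()
with open("allitems.csv", newline="", encoding="latin-1") as f:  # assumed file
    for row in csv.reader(f):
        if not row or not row[0].startswith("CVE-"):
            continue  # skip the header block at the top of the file
        description = row[2].lower() if len(row) > 2 else ""
        for bucket, keywords in BUCKETS.items():
            if any(k in description for k in keywords):
                counts[bucket] += 1

print(counts)

Even a crude tally like this would show how often the same avoidable bug classes recur year after year.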

August 3, 2015

Google Sanctions on France

Filed under: Censorship,Government,Politics — Patrick Durusau @ 10:42 am

Google defies French global ‘right to be forgotten’ ruling by Lee Munson.

From the post:

Last month the French data protection authority – the Commission nationale de l’informatique et des libertés (CNIL) – told Google that successful right to be forgotten requests made by Europeans should be applied across all of the company’s search engines, not just those in Europe.

In response, Google yesterday gave its unequivocal answer to that request: “Non!”

Writing on the company’s Google Europe blog, Peter Fleischer, Global Privacy Counsel, explained how the search giant had complied with the original “right to delist” ruling – which gives EU citizens the right to ask internet search engines to remove embarrassing, sensitive or inaccurate results for search queries that include their name – made by the Court of Justice of the European Union in 2014.

Google does a great job of outlining the consequences of allowing global reach of right to be forgotten rulings:

While the right to be forgotten may now be the law in Europe, it is not the law globally. Moreover, there are innumerable examples around the world where content that is declared illegal under the laws of one country, would be deemed legal in others: Thailand criminalizes some speech that is critical of its King, Turkey criminalizes some speech that is critical of Ataturk, and Russia outlaws some speech that is deemed to be “gay propaganda.”

If the CNIL’s proposed approach were to be embraced as the standard for Internet regulation, we would find ourselves in a race to the bottom. In the end, the Internet would only be as free as the world’s least free place.

We believe that no one country should have the authority to control what content someone in a second country can access. We also believe this order is disproportionate and unnecessary, given that the overwhelming majority of French internet users—currently around 97%—access a European version of Google’s search engine like google.fr, rather than Google.com or any other version of Google.

As a matter of principle, therefore, we respectfully disagree with the CNIL’s assertion of global authority on this issue and we have asked the CNIL to withdraw its Formal Notice.

The only part of the post where I diverge from Google is with its “we respectfully disagree…” language.

The longer Google delays, the less interest there will be in any possible penalty, but I rather doubt that French regulators are going to back off. France is no doubt encouraged by similar efforts in Canada and Russia, as reported by Lee Munson.

Google needs to sanction France before a critical mass of nations takes up the censorship banner.

What sanctions? Shut down the google.fr servers, along with cloud and other computing services.

See how the French economy, and the people who depend on it, react to a crippling loss of service.

The French people are responsible for the fools attempting to be global censors of the Internet. They can damned well turn them out as well.

Disclosure Disrupts the Zero-Day Market

Filed under: Cybersecurity,Security — Patrick Durusau @ 8:24 am

Robert Lemos writes in Hacking Team Leak Could Lead to Policies Curtailing Security Research:

Within days, Netragard decided to exit the business of brokering exploit sales—a minor part of its overall business—until better regulations and laws could guarantee sold exploits went to legitimate authorities.

The decision underscores that the breach of Hacking Team’s network, and the resulting leak of sensitive business information, is continuing to have major impacts in the security industry.

The disclosure of seven zero-day vulnerabilities—four in Adobe Flash, two in Windows and one in Internet Explorer, according to vulnerability management firm Bugcrowd’s tally—has already enabled commodity attack software sold in underground malware markets to target otherwise protected systems.

“Those exploits were out there, but they were being used in a limited fashion,” Kymberlee Price, senior director of researcher operations at Bugcrowd, told eWEEK. “Now, they are being used extensively.”

Research has shown that a dramatic spike in usage, sometimes as much as a factor of 100,000, can occur following the public release of an exploit in popular software.

Imagine Rick’s reaction on Pawn Stars if you were trying to sell him a very rare gemstone and the local news reported that 100,000 of them had just been discovered outside of Las Vegas, Nevada.

Public disclosure of zero-day vulnerabilities effectively guts the zero-day market for those techniques.

Now I understand why some security experts and researchers have promoted a cult of secrecy around zero-day vulnerabilities and other exploits.

Public disclosure, which enables customers to avoid exploits and/or put pressure on vendors, guts the market for sale of those same exploits to “legitimate authorities.”

Netragard wants regulations to limit the sale of exploits, which keeps the exploit market small and the prices high.

I can understand its motivation from an economic point of view.

I am sure the staff at Netragard sincerely believe:

0-days are nothing more than useful tools that when placed in the right hands can benefit the greater good.

That 0-day regulations will maintain the market price for 0-days is just happenstance.

If anything, 0-days and other exploits need more immediate and widespread publicity. That will be unfortunate for 0-day exploit sellers but they will be casualties of openness.

Openness is what will eventually create a disparity between vendors who exercise due diligence on cybersecurity and those who don’t.

Without openness, users are left at the mercy of 0-day vendors and “legitimate authorities.”

PS: There has been some indirect empirical research done on the impact of disclosure on exploit markets. See: Before We Knew It – An Empirical Study of Zero-Day Attacks In The Real World by Leyla Bilge and Tudor Dumitras.

Targeting 950 Million Android Phones – Open Source Security Checks?

Filed under: Cybersecurity,Open Source — Patrick Durusau @ 7:34 am

How to Hack Millions of Android Phones Using Stagefright Bug, Without Sending MMS by Swati Khandelwal.

From the post:

Earlier this week, security researchers at Zimperium revealed a high-severity vulnerability in Android platforms that allowed a single multimedia text message to hack 950 Million Android smartphones and tablets.

As explained in our previous article, the critical flaw resides in a core Android component called “Stagefright,” a native Android media playback library used by Android to process, record and play multimedia files.

To Exploit Stagefright vulnerability, which is actively being exploited in the wild, all an attacker needed is your phone number to send a malicious MMS message and compromise your Android device with no action, no indication required from your side.

Security researchers from Trend Micro have discovered two new attack scenarios that could trigger Stagefright vulnerability without sending malicious multimedia messages:

  • Trigger Exploit from Android Application
  • Crafted HTML exploit to Target visitors of a Webpage on the Internet

These two new Stagefright attack vectors carry more serious security implications than the previous one, as an attacker could exploit the bug remotely to:

  • Hack millions of Android devices, without knowing their phone numbers and spending a penny.
  • Steal Massive Amount of data.
  • Built a botnet network of Hacked Android Devices, etc.

“The specially crafted MP4 file will cause mediaserver’s heap to be destroyed or exploited,” researchers explained, describing how an application could be used to trigger a Stagefright attack.

Swati has video demonstrations of both of the new attack vectors and covers defensive measures for users.

Does the presence of such a bug in software from Google, which has access to almost unlimited programming talent and, to hear it tell it, the best programming talent in the business, make you curious about security for the Internet of Things (IoT)?

Or has Google been practicing “good enough” software development and cutting corners on testing for bugs and security flaws?

Now that I think about it, Android is an open source project and as we all know, given enough eyeballs, all bugs are shallow (Linus’s Law).

Hmmm, perhaps there aren’t enough eyes or eyes with a view towards security issues reviewing the Android codebase?

Is it the case that Google is implicitly relying on the community to discover subtle security issues in Android software?

Or to ask a more general question: Who is responsible for security checks on open source software? If everyone is responsible, I take that to mean no one is responsible.

August 2, 2015

Mapping the world of Mark Twain (subject confusion)

Filed under: Literature,Mapping,Maps,Visualization — Patrick Durusau @ 8:58 pm

Mapping the world of Mark Twain by Andrew Hill.

From the post:

Mapping Mark Twain

This weekend I was looking through Project Gutenberg and found something even better than a single book, I found the complete works of Mark Twain. I remembered how geographic the stories of Twain are and so knew immediately I had found a treasure chest. For the last few days, I’ve been parsing the books line-by-line and trying to find the localities that make up the world of Mark Twain. In the end, the data has over 20,000 localities. Even counting the cases where surnames are mistaken for places, it is a really cool dataset. What I’ll show you here is only the tip of the iceberg. I put the results together as an interactive map that maybe will inspire you to take a journey with Twain on your own, extend your life a little.

Sounds great!
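
Hill doesn’t say which toolchain he used, but a minimal sketch of the line-by-line locality extraction might look like the following, assuming spaCy and its small English model are installed. Expect the same failure he notes, surnames tagged as places:

from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model has been downloaded

def localities(path):
    """Yield place-like entities found line by line in a plain-text book."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            for ent in nlp(line).ents:
                if ent.label_ in ("GPE", "LOC"):
                    yield ent.text

# e.g. a Project Gutenberg plain-text file saved locally (hypothetical name)
print(Counter(localities("innocents_abroad.txt")).most_common(20))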

Warning: Subject Confusion

Mapping the world of Mark Twain (the map)!

The blog entry: http://andrewxhill.com/blog/2014/01/26/Mapping-the-world-of-Mark-Twain/ has the same name as the map: http://andrewxhill.com/maps/writers/twain/index.html.

Both are excellent and the blog entry includes details on how you can construct similar maps.

Topic maps disambiguate names that would otherwise lead to confusion!

What names do you need to disambiguate?

Or do you need to avoid subject confusion with names used by others? (Unknown to you.)

August 1, 2015

Lasp

Filed under: CRDT,Erlang,Functional Programming,Lasp — Patrick Durusau @ 8:01 pm

Lasp: A Language for Distributed, Eventually Consistent Computations by Christopher S. Meiklejohn and Peter Van Roy.

From the webpage:

Why Lasp?

Lasp is a new programming model designed to simplify large scale, fault-tolerant, distributed programming. Lasp is being developed as part of the SyncFree European research project. It leverages ideas from distributed dataflow extended with convergent replicated data types, or CRDTs. This supports computations where not all participants are online together at a given moment. The initial design supports synchronization-free programming by combining CRDTs together with primitives for composing them inspired by functional programming. This lets us write long-lived fault-tolerant distributed applications, including ones with nonmonotonic behavior, in a functional paradigm. The initial prototype is implemented as an Erlang library built on top of the Riak Core distributed systems infrastructure.
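
To make “convergent replicated data types” concrete, here is a toy grow-only counter (G-Counter) sketched in Python. Lasp itself composes CRDTs in Erlang; this sketch only illustrates the convergence property: each replica increments its own slot, merge takes the element-wise maximum, and replicas agree no matter how often or in what order states are exchanged:

class GCounter:
    """Toy G-Counter CRDT: per-replica counts, merged by element-wise max."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def merge(self, other):
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment(); b.increment()
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3  # replicas converge without coordination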

Interested?

Other resources include:

Lasp-dev, the mailing list for Lasp developers.

Lasp at Github.

I was reminded to post about Lasp by this post from Christopher Meiklejohn:

This post is a continuation of my first post about leaving Basho Technologies after almost 4 years.

It has been quite a long time in the making, but I’m finally happy to announce that I am the recipient of an Erasmus Mundus fellowship in their Joint Doctorate in Distributed Computing program. I will be pursuing a full-time Ph.D., with my thesis devoted to developing the Lasp programming language for distributed computing with the goals of simplifying deterministic, distributed, edge computation.

Starting in February 2016, I will be moving to Belgium to begin my first year of studies at the Université catholique de Louvain supervised by Peter Van Roy followed by a second year in Lisbon at IST supervised by Luís Rodrigues.

If you like this article, please consider supporting my writing on gittip.

Looks like exciting developments are ahead for Lasp!

Congratulations to Christopher Meiklejohn!

POC – BIND9 TKEY CVE-2015-5477 DoS

Filed under: Cybersecurity — Patrick Durusau @ 3:36 pm

Rob Graham has posted a proof of concept (POC) for BIND9 TKEY CVE-2015-5477 DoS.

If you don’t memorize Common Vulnerabilities and Exposures (CVE) entries as they appear, CVE-2015-5477 gives the following description:

named in ISC BIND 9.x before 9.9.7-P2 and 9.10.x before 9.10.2-P3 allows remote attackers to cause a denial of service (REQUIRE assertion failure and daemon exit) via TKEY queries.

The POC may or may not be of interest to you but in these security-conscious times, the main CVE page will be. The entire CVE database is available for download if you want to try your hand at creative indexing.

There are a number of other valuable resources at the CVE page so take the time to explore while you are there.

Knowledge Map At The Washington Post (Rediscovery of HyperText)

Filed under: Knowledge Map,Knowledge Representation,News — Patrick Durusau @ 2:14 pm

How The Washington Post built — and will be building on — its “Knowledge Map” feature by Shan Wang.

From the post:

The Post is looking to create a database of “supplements” — categorized pieces of text and graphics that help give context around complicated news topics — and add it as a contextual layer across lots of different Post stories.

The Washington Post’s Knowledge Map aims to diminish that frustration by embedding context and background directly in a story. (We wrote about it briefly when it debuted earlier this month.) Highlighted links and buttons within the story, allowing readers to click on and then read brief overviews — called “supplements” — on the right hand side of the same page, without having to leave the page (currently the text and supplements are not tethered, so if you scroll away in the main story, there’s no easy way to jump back to the phrase or name you clicked on initially).

Knowledge Map sprouted a few months ago out of a design sprint (based on a five-day brainstorming method outlined by Google Ventures) that included the Post’s New York-based design and development team WPNYC and members of the data science team in the D.C. office, as well as engineers, designers, and other product people. After narrowing down a list of other promising projects, the team presented to the Post newsroom and to its engineering team an idea for providing readers with better summaries and context for the most complicated, long-evolving stories.

That idea of having context built into a story “really resonated” with colleagues, Sampsel said, so her team quickly created a proof-of-concept using an existing Post story, recruiting their first round of testers for the prototype via Craigslist. Because they had no prior data on what sort of key phrases or figures readers might want explained for any given story, the team relied on trial and error to settle on the right level of detail.

Not to take anything away from the Washington Post, but doesn’t that scenario sound a lot like HTML: <a> links with Javascript “hover” content? Perhaps the content is a bit long for hover; perhaps a pop-up window on mouseOver? Hold the context data locally for response time reasons.

Has the potential of hypertext been so muted by advertising, graphics, interactivity and > 1 MB pages that it takes a “design sprint” to bring some of that potential back to the fore?

I’m very glad that:

That idea of having context built into a story “really resonated” with colleagues,

but it isn’t a new idea.

Perhaps the best way to move the Web forward at this point would be to re-read (or read) some of the early web conference proceedings.

Rediscover what the web was like before “Google-driven” was an accurate description of the web.

Other suggestions?

Things That Are Clear In Hindsight

Filed under: Social Sciences,Subject Identity,Subject Recognition — Patrick Durusau @ 12:10 pm

Sean Gallagher recently tweeted:

Oh look, the Triumphalism Trilogy is now a boxed set.

In case you are unfamiliar with the series: The Tipping Point, Blink, and Outliers.

Although they are entertaining reads, particularly The Tipping Point (IMHO), Gladwell does not describe how to recognize a tipping point in advance of it being a tipping point, how to make good decisions without thinking (Blink), or how to recognize human potential before success (Outliers).

Tipping points, good decisions and human potential can be recognized only when they are manifested.

As you can tell from Gladwell’s book sales, selling the hope of knowing the unknowable remains a viable market.
