Another Word For It Patrick Durusau on Topic Maps and Semantic Diversity

March 13, 2017

Less Than Accurate Cybersecurity News Headline – From Phys.org No Less

Filed under: Cybersecurity,Journalism,News,Reporting — Patrick Durusau @ 7:50 pm

Skimming through my Twitter stream I encountered:

That sounds important and it’s from Phys.org.

Who describe themselves this way:

Phys.org™ (formerly Physorg.com) is a leading web-based science, research and technology news service which covers a full range of topics. These include physics, earth science, medicine, nanotechnology, electronics, space, biology, chemistry, computer sciences, engineering, mathematics and other sciences and technologies. Launched in 2004, Phys.org’s readership has grown steadily to include 1.75 million scientists, researchers, and engineers every month. Phys.org publishes approximately 100 quality articles every day, offering some of the most comprehensive coverage of sci-tech developments world-wide. Quancast 2009 includes Phys.org in its list of the Global Top 2,000 Websites. Phys.org community members enjoy access to many personalized features such as social networking, a personal home page set-up, RSS/XML feeds, article comments and ranking, the ability to save favorite articles, a daily newsletter, and other options.

So I bit and visited New technique completely protects internet pictures and videos from cyberattacks, which reads in part:

A Ben-Gurion University of the Negev (BGU) researcher has developed a new technique that could provide virtually 100 percent protection against cyberattacks launched through internet videos or images, which are a growing threat.

“Any downloaded or streamed video or picture is a potential vehicle for a cyberattack,” says Professor Ofer Hadar, chair of BGU’s Department of Communication Systems Engineering. “Hackers like videos and pictures because they bypass the regular data transfer systems of highly secure systems, and there is significant space in which to implant malicious code.”

“Preliminary experimental results show that a method based on a combination of Coucou Project techniques results in virtually 100 percent protection against cyberattacks,” says Prof. Hadar. “We envision that firewall and antivirus companies will be able to utilize Coucou protection applications and techniques in their products.”

The Coucou Project receives funding from the BGU Cyber Security Research Center and the BaseCamp Innovation Center at the Advanced Technologies Park adjacent to BGU, which is interested in developing the protective platform into a commercial enterprise.

Summary: Cyberattackers using internet videos or images are in little danger of being thwarted any time soon.

First, Professor Hadar’s technique would need to be verified by other researchers. (Possibly it has been, but no publications are cited.)

Second, the technique must not introduce additional cybersecurity weaknesses.

Third, vendors have to adopt and implement the techniques.

Fourth, users must upgrade to new software that incorporates the new techniques.

A more accurate headline reads:

New Technique In Theory Protects Pictures and Videos From Cyberattacks

Yes?

Notes to (NUS) Computer Science Freshmen…

Filed under: Books,Computer Science,Programming — Patrick Durusau @ 4:07 pm

Notes to (NUS) Computer Science Freshmen, From The Future

From the intro:

Early into the AY12/13 academic year, Prof Tay Yong Chiang organized a supper for Computer Science freshmen at Tembusu College. The bunch of seniors who were gathered there put together a document for NUS computing freshmen. This is that document.

Feel free to create a pull request to edit or add to it, and share it with other freshmen you know.

There is one sad note:


The Art of Computer Programming (a review of everything in Computer Science; pretty much nobody, save Knuth, has finished reading this)

When you think about the amount of time Knuth has spent researching, writing and editing The Art of Computer Programming (TAOCP), it doesn’t sound unreasonable to expect others, a significant number of others, to have read it.

Any online reading groups focused on TAOCP?

AI Brain Scans

Filed under: Artificial Intelligence,Graphs,Neural Networks,Visualization — Patrick Durusau @ 3:19 pm

‘AI brain scans’ reveal what happens inside machine learning


The ResNet architecture is used for building deep neural networks for computer vision and image recognition. The image shown here is the forward (inference) pass of the ResNet 50 layer network used to classify images after being trained using the Graphcore neural network graph library

Credit: Graphcore / Matt Fyles

The image is great eye candy, but if you want to see images annotated with information, check out: Inside an AI ‘brain’ – What does machine learning look like? (Graphcore)

From the product overview:

Poplar™ is a scalable graph programming framework targeting Intelligent Processing Unit (IPU) accelerated servers and IPU accelerated server clusters, designed to meet the growing needs of both advanced research teams and commercial deployment in the enterprise. It’s not a new language, it’s a C++ framework which abstracts the graph-based machine learning development process from the underlying graph processing IPU hardware.

Poplar includes a comprehensive, open source set of Poplar graph libraries for machine learning. In essence, this means existing user applications written in standard machine learning frameworks, like Tensorflow and MXNet, will work out of the box on an IPU. It will also be a natural basis for future machine intelligence programming paradigms which extend beyond tensor-centric deep learning. Poplar has a full set of debugging and analysis tools to help tune performance and a C++ and Python interface for application development if required.

The IPU-Appliance for the Cloud is due out in 2017. I have looked at Graphcore but came up dry on the Poplar graph libraries and/or an emulator for the IPU.

Perhaps those will both appear later in 2017.

Optimized hardware for graph calculations sounds promising, but rapidly processing nodes that may or may not represent the same subject seems like a defect waiting to make itself known.

Many approaches rapidly process uncertain big data, but being no more ignorant than your competition is hardly a selling point.

March 10, 2017

Creating A Social Media ‘Botnet’ To Skew A Debate

Filed under: Education,Government,Politics,Social Media,Twitter — Patrick Durusau @ 5:34 pm

New Research Shows How Common Core Critics Built Social Media ‘Botnets’ to Skew the Education Debate by Kevin Mahnken.

From the post:

Anyone following education news on Twitter between 2013 and 2016 would have been hard-pressed to ignore the gradual curdling of Americans’ attitudes toward the Common Core State Standards. Once seen as an innocuous effort to lift performance in classrooms, they slowly came to be denounced as “Dirty Commie agenda trash” and a “Liberal/Islam indoctrination curriculum.”

After years of social media attacks, the damage is impressive to behold: In 2013, 83 percent of respondents in Education Next’s annual poll of Americans’ education attitudes felt favorably about the Common Core, including 82 percent of Republicans. But by the summer of 2016, support had eroded, with those numbers measuring only 50 percent and 39 percent, respectively. The uproar reached such heights, and so quickly, that it seemed to reflect a spontaneous populist rebellion against the most visible education reform in a decade.

Not so, say researchers with the University of Pennsylvania’s Consortium for Policy Research in Education. Last week, they released the #commoncore project, a study that suggests that public animosity toward Common Core was manipulated — and exaggerated — by organized online communities using cutting-edge social media strategies.

As the project’s authors write, the effect of these strategies was “the illusion of a vociferous Twitter conversation waged by a spontaneous mass of disconnected peers, whereas in actuality the peers are the unified proxy voice of a single viewpoint.”

Translation: A small circle of Common Core critics were able to create and then conduct their own echo chambers, skewing the Twitter debate in the process.

The most successful of these coordinated campaigns originated with the Patriot Journalist Network, a for-profit group that can be tied to almost one-quarter of all Twitter activity around the issue; on certain days, its PJNET hashtag has appeared in 69 percent of Common Core–related tweets.

The team of authors tracked nearly a million tweets sent during four half-year spans between September 2013 and April 2016, studying both how the online conversation about the standards grew (more than 50 percent between the first phase, September 2013 through February 2014, and the third, May 2015 through October 2015) and how its interlocutors changed over time.
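The study’s headline numbers (the PJNET hashtag in 69 percent of Common Core–related tweets on certain days) are per-day proportions. A minimal sketch of that measurement, assuming tweets arrive as (day, text) pairs — my data layout, not the study’s:

```python
from collections import defaultdict

def hashtag_share_by_day(tweets, hashtag):
    """Fraction of each day's tweets containing the (case-insensitive) hashtag."""
    totals = defaultdict(int)  # tweets seen per day
    hits = defaultdict(int)    # tweets per day containing the hashtag
    for day, text in tweets:
        totals[day] += 1
        if hashtag.lower() in text.lower():
            hits[day] += 1
    return {day: hits[day] / totals[day] for day in totals}

tweets = [
    ("2015-05-01", "Stop the madness #stopcommoncore #PJNET"),
    ("2015-05-01", "Testing is big business #commoncore"),
    ("2015-05-02", "#PJNET strikes again"),
]
print(hashtag_share_by_day(tweets, "#PJNET"))
# {'2015-05-01': 0.5, '2015-05-02': 1.0}
```

The same loop, pointed at a million tweets over four half-year spans, is all the arithmetic the 69 percent figure requires; the hard part is the collection and attribution, not the math.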

Mahnken talks as though creating a ‘botnet’ to defeat adoption of the Common Core State Standards is a bad thing.

I never cared for #commoncore because testing makes money for large and small testing vendors. It has no other demonstrated impact on the educational process.

Let’s assume you want to build a championship high school baseball team. To do that, various officious intermeddlers, who have no experience with baseball, fund creation of the Common Core Baseball Standards.

Every three years, every child is tested against the Common Core Baseball Standards and their performance recorded. No funds are allocated for additional training for gifted performers, equipment, baseball fields, etc.

By the time these students reach high school, will you have the basis for a championship team? Perhaps, but if you do, it’s due to random chance and not the Common Core Baseball Standards.

If you want a championship high school baseball team, you fund training, equipment, and baseball fields, in addition to spending money on the best facilities for your hoped-for championship high school team. Consistently and over time, you spend money.

The key to better education results isn’t testing, but funding based on the education results you hope to achieve.

I do commend the #commoncore project website for being an impressive presentation of Twitter data, even though it is clearly a propaganda machine for pro-Common Core advocates.

The challenge here is to work backwards from what was observed by the project to both the principles and tactics that made #stopcommoncore so successful. That is, we know it has succeeded, at least to some degree, but how do we replicate that success on other issues?

Replication is how science demonstrates the reliability of a technique.

Looking forward to hearing your thoughts, suggestions, etc.

Enjoy!

Eight Simple Rules for Doing Accurate Journalism [+ One]

Filed under: Journalism,News — Patrick Durusau @ 4:17 pm

Eight Simple Rules for Doing Accurate Journalism by Craig Silverman.

From the post:

It’s a cliché to say clichés exist for a reason. As journalists, we’re supposed to avoid them like the, um, plague. But it’s useful to have a catchy phrase that can stick in someone’s mind, particularly if you’re trying to spread knowledge or change behaviour.

This week I began cataloguing some of my own sayings about accuracy — you can consider them aspiring clichés — and other phrases I find helpful or instructive in preparation for a workshop I’m giving with The Huffington Post’s Mandy Jenkins at next week’s Online News Association conference. Our session is called B.S. Detection for Online Journalists. The goal is to equip participants with tools, tips, and knowledge to get things right, and weed out misinformation and hoaxes before they spread them.

So, with apologies to Bill Maher, I offer some new, some old, and some wonderfully clichéd rules for doing accurate journalism. Keep these in your head and they’ll help you do good work.

The problem of verification, if journal retractions are credited, isn’t limited to those writing under deadline pressure. Verification is neglected by those who spend months word-smithing texts.

I like Silverman’s post but I would ask:

Why do you say that?

However commonplace or bizarre a statement may be, always challenge the speaker for the basis of their statement.

Take former CIA Director Michael Hayden‘s baseless notion that:

“…but this group of millennials and related groups simply have different understandings of the words loyalty, secrecy, and transparency than certainly my generation did.”

As Zaid Jilani goes on to demonstrate, Hayden’s opinion isn’t rooted in fact but prejudice.

The question at that point is whether Hayden’s prejudice is newsworthy enough to be reported. Having ascertained that Hayden is just grousing, why not leave the interview on the cutting room floor?

Journalists have no obligation to repeat the prejudices of current or former government officials as being worthy of notice.

XQuery Ready CIA Vault7 Files

Filed under: CIA,Government,XML,XQuery — Patrick Durusau @ 11:26 am

I have extracted the HTML files from WikiLeaks Vault7 Year Zero 2017 V 1.7z, processed them with Tidy (see note on correction below), and uploaded the “tidied” HTML files to: Vault7-CIA-Clean-HTML-Only.

Beyond the usual activities of Tidy, I did have to correct the file page_26345506.html: by creating a comment around one line of content:

<!-- <declarations><string name="½ö"></string></declarations><p>›<br> -->

Otherwise, the files are only corrected HTML markup with no other changes.

The HTML compresses well, 7811 files coming in at 3.4 MB.

Demonstrate the power of basic XQuery skills!
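If you want to prototype a query before reaching for XQuery, Tidy’s XHTML output is well-formed XML, so even the Python standard library can walk it — say, pulling page titles out of the collection. A sketch (the document string is illustrative; the `{*}` namespace wildcard needs Python 3.8+):

```python
import xml.etree.ElementTree as ET

def page_title(xhtml):
    """Return the <title> text of a tidied (well-formed) XHTML document."""
    root = ET.fromstring(xhtml)
    # {*} wildcards the xmlns="http://www.w3.org/1999/xhtml" namespace
    # that Tidy adds when emitting XHTML
    el = root.find(".//{*}title")
    return el.text if el is not None else None

doc = ('<html xmlns="http://www.w3.org/1999/xhtml">'
       "<head><title>Vault7 page</title></head><body/></html>")
print(page_title(doc))  # Vault7 page
```

The equivalent XQuery is a one-liner over the whole collection, which is rather the point of cleaning the files first.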

Enjoy!

March 9, 2017

Unicode 10.0 Beta Review

Filed under: Fonts,Unicode — Patrick Durusau @ 9:45 pm

Unicode 10.0 Beta Review

In today’s mail:

The Unicode Standard is the foundation for all modern software and communications around the world, including all modern operating systems, browsers, laptops, and smart phones—plus the Internet and Web (URLs, HTML, XML, CSS, JSON, etc.). The Unicode Standard, its associated standards, and data form the foundation for CLDR and ICU releases. Thus it is important to ensure a smooth transition to each new version of the standard.

Unicode 10.0 includes a number of changes. Some of the Unicode Standard Annexes have modifications for Unicode 10.0, often in coordination with changes to character properties. In particular, there are changes to UAX #14, Unicode Line Breaking Algorithm, UAX #29, Unicode Text Segmentation, and UAX #31, Unicode Identifier and Pattern Syntax. In addition, UAX #50, Unicode Vertical Text Layout, has been newly incorporated as a part of the standard. Four new scripts have been added in Unicode 10.0, including Nüshu. There are also 56 additional emoji characters, a major new extension of CJK ideographs, and 285 hentaigana, important historic variants for Hiragana syllables.

Please review the documentation, adjust your code, test the data files, and report errors and other issues to the Unicode Consortium by May 1, 2017. Feedback instructions are on the beta page.

See http://unicode.org/versions/beta-10.0.0.html for more information about testing the 10.0.0 beta.

See http://unicode.org/versions/Unicode10.0.0/ for the current draft summary of Unicode 10.0.0.

It’s not too late for you to contribute to the Unicode party! There’s plenty of reviewing to do and by no means has all the work been done!

For this particular version, comments are due by May 1, 2017.
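One low-effort way to start reviewing: check which version of the Unicode Character Database your runtime ships and spot-check character properties of the kind the annex changes touch. Python’s unicodedata module exposes both (a sketch; 10.0 data only appears once your Python updates its tables):

```python
import unicodedata

# Version of the Unicode Character Database this Python was built with
print(unicodedata.unidata_version)

# Spot-check properties for a character
ch = "\u00BD"  # VULGAR FRACTION ONE HALF
print(unicodedata.name(ch))      # VULGAR FRACTION ONE HALF
print(unicodedata.category(ch))  # No (Number, other)
print(unicodedata.numeric(ch))   # 0.5
```

Diffing such property dumps between a 9.0 and a 10.0 build is a quick way to see what a new release actually changed for the characters you care about.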

Enjoy!

Smile! You May Be On A Candid Camera!

Filed under: Cybersecurity,Privacy — Patrick Durusau @ 9:09 pm

Hundreds of Thousands of Vulnerable IP Cameras Easy Target for Botnet, Researcher Says by Chris Brook.

From the post:

A researcher claims that hundreds of thousands of shoddily made IP cameras suffer from vulnerabilities that could make them an easy target for attackers looking to spy, brute force them, or steal their credentials.

Researcher Pierre Kim disclosed the vulnerabilities Wednesday and gave a comprehensive breakdown of the affected models in an advisory on his GitHub page.

A gifted security researcher who has discovered a number of backdoors in routers, Kim estimates there are at least 18,000 vulnerable cameras in the United States alone. That figure may be as high as 200,000 worldwide.

For all of the pissing and moaning in Chris’ post, I don’t see the problem.

Governments, corporations, web hosts either have us under surveillance or their equipment is down for repairs.

Equipment that isn’t under their direct control, such as “shoddily made IP cameras,” provides an opportunity for citizens to return the surveillance favor.

That is, to perform surveillance on those who accept surveillance of the “masses” but find surveillance of their own activities oddly objectionable.

Think of it this way:

The US government has to keep track of approximately 324 million people, give or take. With all the sources of information on every person, that’s truly a big data problem.

Turn that problem around and consider that Congress has only 535 members.

That’s more of a laptop-sized data problem, albeit they are clever about covering their tracks. Or think they are, at any rate.

No, the less security that exists in general, the more danger there is for highly visible individuals.

Think about who is more vulnerable before you complain about a lack of security.

The security the government is trying to protect isn’t for you. I promise. (The hoarding of cyber exploits by the CIA is only one such example.)

How Bad Is Wikileaks Vault7 (CIA) HTML?

Filed under: HTML,Wikileaks,WWW,XQuery — Patrick Durusau @ 8:29 pm

How bad?

Unless you want to hand-correct 7809 HTML files to use with XQuery, grab the latest copy of Tidy.

It’s not the worst HTML I have ever seen, but put that in the context of having seen a lot of really poor HTML.

I’ve “tidied” up a test collection and will grab a fresh copy of the files before producing and releasing a clean set of the HTML files.

I’m producing a document collection for XQuery processing, working towards something suitable for applying NLP and other tools.
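A batch run over thousands of files is just a loop around the tidy command line. A minimal sketch, assuming HTML Tidy’s -q, -asxml, and -o options (standard in tidy-html5, but check your version); error handling beyond logging hard failures is omitted:

```python
import pathlib
import subprocess

def tidy_command(src, dest):
    """Build the argv for converting one HTML file to well-formed XHTML."""
    # -q: quiet, -asxml: emit well-formed XHTML, -o: output file
    return ["tidy", "-q", "-asxml", "-o", str(dest), str(src)]

def tidy_tree(src_dir, dest_dir):
    """Tidy every .html file in src_dir into dest_dir, logging hard failures."""
    dest_dir = pathlib.Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    for src in pathlib.Path(src_dir).glob("*.html"):
        # Tidy exits 1 for warnings (common on this collection), 2 for errors
        rc = subprocess.run(tidy_command(src, dest_dir / src.name)).returncode
        if rc == 2:
            print("failed:", src)
```

Files Tidy cannot rescue (like page_26345506.html in my earlier post) still need a manual look, which is why logging the exit-2 cases matters.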

March 8, 2017

That CIA exploit list in full: … [highlights]

Filed under: CIA,Cybersecurity,Government,Privacy,Security,Wikileaks — Patrick Durusau @ 5:58 pm

That CIA exploit list in full: The good, the bad, and the very ugly by Iain Thomson.

From the post:

We’re still going through the 8,761 CIA documents published on Tuesday by WikiLeaks for political mischief, although here are some of the highlights.

First, though, a few general points: one, there’s very little here that should shock you. The CIA is a spying organization, after all, and, yes, it spies on people.

Two, unlike the NSA, the CIA isn’t mad keen on blanket surveillance: it targets particular people, and the hacking tools revealed by WikiLeaks are designed to monitor specific persons of interest. For example, you may have seen headlines about the CIA hacking Samsung TVs. As we previously mentioned, that involves breaking into someone’s house and physically reprogramming the telly with a USB stick. If the CIA wants to bug you, it will bug you one way or another, smart telly or no smart telly. You’ll probably be tricked into opening a dodgy attachment or download.

That’s actually a silver lining to all this: end-to-end encrypted apps, such as Signal and WhatsApp, are so strong, the CIA has to compromise your handset, TV or computer to read your messages and snoop on your webcam and microphones, if you’re unlucky enough to be a target. Hacking devices this way is fraught with risk and cost, so only highly valuable targets will be attacked. The vast, vast majority of us are not walking around with CIA malware lurking in our pockets, laptop bags, and living rooms.

Thirdly, if you’ve been following US politics and WikiLeaks’ mischievous role in the rise of Donald Trump, you may have clocked that Tuesday’s dump was engineered to help the President pin the hacking of his political opponents’ email server on the CIA. The leaked documents suggest the agency can disguise its operations as the work of a foreign government. Thus, it wasn’t the Russians who broke into the Democrats’ computers and, by leaking the emails, helped swing Donald the election – it was the CIA all along, Trump can now claim. That’ll shut the intelligence community up. The President’s pet news outlet Breitbart is already running that line.

Iain does a good job of picking out some of the more interesting bits from the CIA (alleged) file dump. No, you will have to read Iain’s post for those.

I mention Iain’s post primarily as a way to entice you into reading all the files in hopes of discovering more juicy tidbits.

Read the files. Your security depends on the indifference of the CIA and similar agencies. Is that your model for privacy?

Gap Analysis Resource – Electrical Grid

Filed under: Government,Security — Patrick Durusau @ 5:37 pm

Electricity – Federal Efforts to Enhance Grid Resilience, Government Accountability Office (GAO) (January 2017)

What GAO Found

The Department of Energy (DOE), the Department of Homeland Security (DHS), and the Federal Energy Regulatory Commission (FERC) reported implementing 27 grid resiliency efforts since 2013 and identified a variety of results from these efforts. The efforts addressed a range of threats and hazards—including cyberattacks, physical attacks, and natural disasters—and supported different types of activities (see table). These efforts also addressed each of the three federal priorities for enhancing the security and resilience of the electricity grid: (1) developing and deploying tools and technologies to enhance awareness of potential disruptions, (2) planning and exercising coordinated responses to disruptive events, and (3) ensuring actionable intelligence on threats is communicated between government and industry in a time-sensitive manner. Agency officials reported a variety of results from these efforts, including the development of new technologies—such as a rapidly-deployable large, high-power transformer—and improved coordination and information sharing between the federal government and industry related to potential cyberattacks.

(table omitted)

Federal grid resiliency efforts were fragmented across DOE, DHS, and FERC and overlapped to some degree but were not duplicative. GAO found that the 27 efforts were fragmented in that they were implemented by three agencies and addressed the same broad area of national need: enhancing the resilience of the electricity grid. However, DOE, DHS, and FERC generally tailored their efforts to contribute to their specific missions. For example, DOE’s 11 efforts related to its strategic goal to support a more secure and resilient U.S. energy infrastructure. GAO also found that the federal efforts overlapped to some degree but were not duplicative because none had the same goals or engaged in the same activities. For example, three DOE and DHS efforts addressed resiliency issues related to large, high-power transformers, but the goals were distinct—one effort focused on developing a rapidly deployable transformer to use in the event of multiple large, high-power transformer failures; another focused on developing next-generation transformer components with more resilient features; and a third focused on developing a plan for a national transformer reserve. Moreover, officials from all three agencies reported taking actions to coordinate federal grid resiliency efforts, such as serving on formal coordinating bodies that bring together federal, state, and industry stakeholders to discuss resiliency issues on a regular basis, and contributing to the development of federal plans that address grid resiliency gaps and priorities. GAO found that these actions were consistent with key practices for enhancing and sustaining federal agency coordination.
…(emphasis in original)

A high level view of efforts to “protect” the electrical grid (grid) in the United States.

Most of the hazards, such as massive solar flares (like the 1859 Carrington Event) or a nuclear EMP, would easily overwhelm many if not all current measures to harden the grid.

Still, participants get funded to talk about hazards and dangers they can neither prevent nor easily remedy.

What dangers do you want to protect the grid against?

Headless Raspberry Pi Hacking Platform Running Kali Linux

Filed under: Cybersecurity,Security — Patrick Durusau @ 5:01 pm

Set Up a Headless Raspberry Pi Hacking Platform Running Kali Linux by Sadmin.

From the post:

The Raspberry Pi is a credit card-sized computer that can crack Wi-Fi, clone key cards, break into laptops, and even clone an existing Wi-Fi network to trick users into connecting to the Pi instead. It can jam Wi-Fi for blocks, track cell phones, listen in on police scanners, broadcast an FM radio signal, and apparently even fly a goddamn missile into a helicopter.

The key to this power is a massive community of developers and builders who contribute thousands of builds for the Kali Linux and Raspberry Pi platforms. For less than a tank of gas, a Raspberry Pi 3 buys you a low-cost, flexible cyberweapon.

Of course, it’s important to compartmentalize your hacking and avoid using systems that uniquely identify you, like customized hardware. Not everyone has access to a supercomputer or gaming tower, but fortunately one is not needed to have a solid Kali Linux platform.

With over 10 million units sold, the Raspberry Pi can be purchased in cash by anyone with $35 to spare. This makes it more difficult to determine who is behind an attack launched from a Raspberry Pi, as it could just as likely be a state-sponsored attack flying under the radar or a hyperactive teenager in high school coding class.

Blogging while I wait for the Wikileaks Vault7 Part 1 files to load into an XML database. The rhyme or reason (or the lack thereof) behind Wikileaks releases continues to escape me.

Within a day or so I will drop what I think is a more useful organization of that information.

While you wait, this is a particularly good post on using a Raspberry Pi “for reconnaissance and attacking Wi-Fi networks” in the author’s words.

Although a Raspberry Pi is easy to conceal, both on your person and on location, the purpose of such a device isn’t hard to discern.

If you are carrying a Raspberry Pi, avoid being searched until after you can dispose of it. Make sure that your fingerprints or biological trace evidence is not on it.

I say “your fingerprints or biological trace evidence” because it would be amusing if fingerprints or biological trace evidence implicated some resident of the facility where it is found.

The consequences of being suspected of possessing a Kali Linux-equipped Raspberry Pi, versus being proven to have possessed one, may differ by years.

Go carefully.

March 7, 2017

Confirmation: Internet of Things As Hacking Avenue

Filed under: CIA,Cybersecurity,Government,IoT - Internet of Things,Vault 7 — Patrick Durusau @ 7:07 pm

I mentioned the Internet of Things (IoT) in Reading the Unreadable SROM: Inside the PSOC4 [Hacking Leader In Internet of Things Suppliers] as a growing (“Compound Annual Growth Rate (CAGR) of 33.3%”) source of cyber insecurity.

Today, Bill Brenner writes:

WikiLeaks’ release of 8,761 pages of internal CIA documents makes this much abundantly clear: the agency has built a monster hacking operation – possibly the biggest in the world – on the backs of the many internet-connected household gadgets we take for granted.

That’s the main takeaway among security experts Naked Security reached out to after the leak went public earlier Tuesday.

I appreciate the confirmation!

Yes, the IoT can and is being used for government surveillance.

At the same time, the IoT is a tremendous opportunity to level the playing field against corporations and governments alike.

If the IoT isn’t being used against corporations and governments, whose fault is that?

That’s my guess too.

You can bulk download the first drop from: https://archive.org/details/wikileaks.vault7part1.tar.

Vault 7: CIA Hacking Tools In Bulk Download

Filed under: Cybersecurity,Security,Wikileaks — Patrick Durusau @ 5:43 pm

If you want to avoid mirroring Vault 7: CIA Hacking Tools Revealed for yourself, check out: https://archive.org/details/wikileaks.vault7part1.tar.

As for why Wikileaks doesn’t offer bulk access to its data sets, you would have to ask Wikileaks.

Enjoy!

Wikileaks Armed – You’re Not

Filed under: Cybersecurity,Government,Wikileaks — Patrick Durusau @ 9:33 am

Vault 7: CIA Hacking Tools Revealed (Wikileaks).

Very excited to read:

Today, Tuesday 7 March 2017, WikiLeaks begins its new series of leaks on the U.S. Central Intelligence Agency. Code-named “Vault 7” by WikiLeaks, it is the largest ever publication of confidential documents on the agency.

The first full part of the series, “Year Zero”, comprises 8,761 documents and files from an isolated, high-security network situated inside the CIA’s Center for Cyber Intelligence in Langley, Virginia. It follows an introductory disclosure last month of CIA targeting French political parties and candidates in the lead up to the 2012 presidential election.

Recently, the CIA lost control of the majority of its hacking arsenal including malware, viruses, trojans, weaponized “zero day” exploits, malware remote control systems and associated documentation. This extraordinary collection, which amounts to more than several hundred million lines of code, gives its possessor the entire hacking capacity of the CIA. The archive appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive.

Very disappointed to read:


Wikileaks has carefully reviewed the “Year Zero” disclosure and published substantive CIA documentation while avoiding the distribution of ‘armed’ cyberweapons until a consensus emerges on the technical and political nature of the CIA’s program and how such ‘weapons’ should be analyzed, disarmed and published.

Wikileaks has also decided to redact and anonymise some identifying information in “Year Zero” for in depth analysis. These redactions include tens of thousands of CIA targets and attack machines throughout Latin America, Europe and the United States. While we are aware of the imperfect results of any approach chosen, we remain committed to our publishing model and note that the quantity of published pages in “Vault 7” part one (“Year Zero”) already eclipses the total number of pages published over the first three years of the Edward Snowden NSA leaks.

For all of the fretting over the “…extreme proliferation risk in the development of cyber ‘weapons’…”, the bottom line is that Wikileaks and its agents are armed with CIA cyber weapons and you are not.

Assange/Wikileaks have cast their vote in favor of arming themselves and protecting the CIA and others.

Responsible leaking of cyber weapons means arming everyone equally.

March 6, 2017

Continuing Management Fail At Twitter

Filed under: Censorship,Free Speech,Twitter — Patrick Durusau @ 11:54 am

Twitter management continues to fail.

Consider the censoring of the account of Lauri Love (a rumored hacker).

Competent management at Twitter would be licensing the rights to create shareable mutes/filters for all posts from Lauri Love.

The FBI, Breitbart, US State Department, and others would vie for users of their filters, which block “dangerous and/or seditious content.”

Filters licensed in increments, depending on how many shares you want to enable.

Twitter with no censorship at all would drive the market for such filters.

Licensing filters by number of shares provides a steady revenue stream, and Twitter could shed its censorship-prone barnacles. More profit, reduced costs, what’s not to like?

PS: I ask nothing for this suggestion. Getting Twitter out of the censorship game on behalf of governments is benefit enough for me.

Reading the Unreadable SROM: Inside the PSOC4 [Hacking Leader In Internet of Things Suppliers]

Filed under: Chip Hacking,Cybersecurity — Patrick Durusau @ 10:35 am

Reading the Unreadable SROM: Inside the PSOC4 by Elliot Williams.

From the post:

Wow. [Dmitry Grinberg] just broke into the SROM on Cypress’ PSoC 4 chips. The supervisory read-only memory (SROM) in question is a region of proprietary code that runs when the chip starts up, and in privileged mode. It’s exactly the kind of black box that’s a little bit creepy and a horribly useful target for hackers if the black box can be broken open. What’s inside? In the manual it says “The user has no access to read or modify the SROM code.” Nobody outside of Cypress knows. Until now.

This matters because the PSoC 4000 chips are among the cheapest ARM Cortex-M0 parts out there. Consequently they’re inside countless consumer devices. Among [Dmitry]’s other tricks, he’s figured out how to write into the SROM, which opens the door for creating an undetectable rootkit on the chip that runs out of each reset. That’s the scary part.

The cool parts are scattered throughout [Dmitry]’s long and detailed writeup. He also found that the chips that have 8 K of flash actually have 16 K, and access to the rest of the memory is enabled by setting a single bit. This works because flash is written using routines that live in SROM, rather than the usual hardware-level write-to-register-and-wait procedure that we’re accustomed to with other micros. Of course, because it’s all done in software, you can brick the flash too by writing the wrong checksums. [Dmitry] did that twice. Good thing the chips are inexpensive.

We should all commend Dmitry Grinberg on his choice of the leading Internet of Things (IoT) supplier as his target.

Cyber-insecurity grows with every software security solution, and so does the market:

The Internet of Things market size is estimated to grow from USD 157.05 Billion in 2016 to USD 661.74 Billion by 2021, at a Compound Annual Growth Rate (CAGR) of 33.3% from 2016 to 2021. (Internet of Things (IoT) Market)

Insecurity growing at a “Compound Annual Growth Rate (CAGR) of 33.3%” is impressive to say the least. Not to mention all the legacy insecurities that have never been patched or where patches have not been installed.

Few will duplicate Dmitry’s investigation but no doubt tools will soon bring the fruits of his labor to a broader market.

Responsible Disclosure

The comments on Dmitry’s work have the obligatory complaints about public disclosure of these flaws.

Every public disclosure is a step towards transparency of both corporations and governments.

I see no cause for complaint.

You?

Enjoy the Projects gallery as well.

March 5, 2017

Why I Love XML (and Good Thing, It’s Everywhere) [Needs Subject Identity Too]

Filed under: Data Integration,Heterogeneous Data,XML — Patrick Durusau @ 9:39 pm

Why I Love XML (and Good Thing, It’s Everywhere) by Lee Pollington.

Lee makes a compelling argument for XML as the underlying mechanism for data integration when saying:

…Perhaps the data in your relational databases is structured. What about your knowledge management systems, customer information systems, document systems, CMS, mail, etc.? How do you integrate that data with structured data to get a holistic view of all your data? What do you do when you want to bring a group of relational schemas from different systems together to get that elusive 360 view – which is being demanded by the world’s regulators banks? Mergers and acquisitions drive this requirement too. How do you search across that data?

Sure there are solution stack answers. We’ve all seen whiteboards with ever growing number of boxes and those innocuous puny arrows between them that translate to teams of people, buckets of code, test and operations teams. They all add up to ever-increasing costs, complexity, missed deadlines & market share loss. Sound overly dramatic? Gartner calculated a worldwide spend of $5 Billion on data integration software in 2015. How much did you spend … would you know where to start calculating that cost?

While pondering what you spend on a yearly basis for data integration, contemplate two more questions from Lee:

…So take a moment to think about how you treat the data format that underpins your intellectual property? First-class citizen or after-thought?…

If you are treating your XML elements as first-class citizens, do tell me: did you create subject identity tests for those subjects?

So that a programmer new to your years of legacy XML will understand that <MFBM>, <MBFT> and <MBF> elements are all expressed in units of 1,000 board feet.

Yes?
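A minimal sketch of what such a subject identity test buys you, using the board-feet example above. The document shape here is hypothetical; the point is that one mapping records, once, that three legacy element names denote the same subject in the same units.

```python
import xml.etree.ElementTree as ET

# Subject identity, recorded once: three legacy tag names, one subject,
# all expressed in units of 1,000 board feet.
BOARD_FEET_ALIASES = {"MFBM", "MBFT", "MBF"}

def total_board_feet(xml_text):
    """Sum board-feet quantities regardless of which legacy tag was used."""
    root = ET.fromstring(xml_text)
    total = 0.0
    for elem in root.iter():
        if elem.tag in BOARD_FEET_ALIASES:
            total += float(elem.text) * 1000  # each unit = 1,000 board feet
    return total

# A hypothetical legacy document mixing all three spellings.
doc = "<shipment><MFBM>2</MFBM><MBFT>1.5</MBFT><MBF>3</MBF></shipment>"
print(total_board_feet(doc))  # → 6500.0
```

Without the alias table, a programmer new to the legacy XML writes three code paths, or worse, misses one.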

Reducing the cost of data integration tomorrow, next year and five years after that, requires investment in the here and now.

Perhaps that is why data integration costs continue to climb.

Why pay for today what can be put off until tomorrow? (Future conversion costs are a line item in some future office holder’s budget.)

March 4, 2017

Virtual Jihadists (Bots)

Filed under: Deep Learning,Machine Learning,TensorFlow — Patrick Durusau @ 5:17 pm

Chip Huyen, who teaches CS 20SI: “TensorFlow for Deep Learning Research” @Stanford, has posted code examples for the class, along with a chatbot, developed for one of the assignments.

The readme for the chatbot reads in part:

A neural chatbot using sequence to sequence model with attentional decoder. This is a fully functional chatbot.

This is based on Google Translate Tensorflow model https://github.com/tensorflow/models/blob/master/tutorials/rnn/translate/

Sequence to sequence model by Cho et al.(2014)

Created by Chip Huyen as the starter code for assignment 3, class CS 20SI: “TensorFlow for Deep Learning Research” cs20si.stanford.edu

The detailed assignment handout and information on training time can be found at http://web.stanford.edu/class/cs20si/assignments/a3.pdf

Dialogue is lacking but this chatbot could be trained to appear to government forces as a live “jihadist” following and conversing with other “jihadists.” Who may themselves be chatbots.

Unlike the expense of pilots for a fleet of drones, a single user could “pilot” a group of chatbots, creating an over-sized impression in cyberspace. The deeper the modeling of human jihadists, the harder it will be to distinguish virtual jihadists.

I say “jihadists” for headline effect. You could create interacting chatbots for right/left wing hate groups, gun owners, churches, etc., in short, anyone seeking to dilute surveillance.

(Unlike the ACLU or EFF, I don’t concede there are any legitimate reasons for government surveillance. The dangers of government surveillance far exceed any possible crime it could prevent. Government surveillance is the question. The answer is NO.)


CS 20SI: Tensorflow for Deep Learning Research

From the webpage:

Tensorflow is a powerful open-source software library for machine learning developed by researchers at Google Brain. It has many pre-built functions to ease the task of building different neural networks. Tensorflow allows distribution of computation across different computers, as well as multiple CPUs and GPUs within a single machine. TensorFlow provides a Python API, as well as a less documented C++ API. For this course, we will be using Python.

This course will cover the fundamentals and contemporary usage of the Tensorflow library for deep learning research. We aim to help students understand the graphical computational model of Tensorflow, explore the functions it has to offer, and learn how to build and structure models best suited for a deep learning project. Through the course, students will use Tensorflow to build models of different complexity, from simple linear/logistic regression to convolutional neural network and recurrent neural networks with LSTM to solve tasks such as word embeddings, translation, optical character recognition. Students will also learn best practices to structure a model and manage research experiments.

Enjoy!

Trump Tweets Strategically – You Respond (fill in the blank)

Filed under: Politics,Tweets,Twitter — Patrick Durusau @ 4:00 pm

George Lakoff tweeted:

Here’s an example of a “strategic” tweet by Trump.

Donald J. Trump tweets:

Terrible! Just found out that Obama had my “wires tapped” in Trump Tower just before the victory. Nothing found. This is McCarthyism!

For testing purposes, how would you characterize this sample of tweets, a small part of the 35K replies to Trump’s tweet?


pourmecoffee‏Verified account @pourmecoffee
@realDonaldTrump Correct. Making allegations without evidence is the literal definition of McCarthyism.

FFT-Obama for Prison‏ @FemalesForTrump
.@pourmecoffee
when will the liars learn. Trump ALWAYS does his homework! The truth will support his tweet in 3, 2, 1 …
#saturdaymorning

Ignatz‏ @ignatzz
@FemalesForTrump @pourmecoffee Yes, I remember that proof that Obama was born in Kenya. And the Bowling Green Massacre.

FFT-Obama for Prison‏ @FemalesForTrump
@ignatzz @pourmecoffee he WAS born in Kenya. Hawaii b/c is a fake. #fact
He didn’t make the bowling green statement. Now go away

Lisa Armstrong‏Verified account @LisaArmstrong
@FemalesForTrump You people are still stuck on the lie that Obama was born in Kenya? Why? Where is the proof? #alternativefacts

Jet Black‏ @jetd69
@LisaArmstrong @FemalesForTrump There’s little point in arguing with her. She’s as off her chops as he is. Females for Trump indeed!

Lisa Armstrong‏Verified account @LisaArmstrong
@jetd69 @FemalesForTrump I know you’re right. It’s just that the willingness of #Trump supporters to believe flat out lies astounds me.

AngieStrader‏ @AngieStrader
@LisaArmstrong @jetd69 @FemalesForTrump this goes both ways. Dems want Trump on treason. Based on what facts? What verifiable sources?

Lisa Armstrong‏Verified account @LisaArmstrong
@AngieStrader The difference is there’s a long list of shady things Trump has actually done. These are facts. Obama being Kenyan is a lie.

Do you see any strategic tweets in that list or in the other 37K responses (as of Saturday afternoon, 4 March 2017)?

If the point of Trump’s tweet was diversion, I would have to say it succeeded beautifully.

You?

The strategic response to Trump tweets is to ignore them in favor of propagating your own theme.

March 2, 2017

Covert FM Radio Stations For Activists – Thumb In Eye Of Stingray Devices

Filed under: Cybersecurity,Government,Security — Patrick Durusau @ 9:37 pm

Singing posters and talking shirts: UW engineers turn everyday objects into FM radio stations by Jennifer Langston.

From the post:


They overlaid the audio and data on top of ambient news signals from a local NPR radio station. “FM radio signals are everywhere. You can listen to music or news in your car and it’s a common way for us to get our information,” said co-author and UW computer science and engineering doctoral student Anran Wang. “So what we do is basically make each of these everyday objects into a mini FM radio station at almost zero power.

”Such ubiquitous low-power connectivity can also enable smart fabric applications such as clothing integrated with sensors to monitor a runner’s gait and vital signs that transmits the information directly to a user’s phone. In a second demonstration, the researchers from the UW Networks & Mobile Systems Lab used conductive thread to sew an antenna into a cotton T-shirt, which was able to use ambient radio signals to transmit data to a smartphone at rates up to 3.2 kilobits per second.

The system works by taking an everyday FM radio signal broadcast from an urban radio tower. The “smart” poster or T-shirt uses a low-power reflector to manipulate the signal in a way that encodes the desired audio or data on top of the FM broadcast to send a “message” to the smartphone receiver on an unoccupied frequency in the FM radio band.

For the details:


The UW team has — for the first time — demonstrated how to apply a technique called “backscattering” to outdoor FM radio signals. The new system transmits messages by reflecting and encoding audio and data in these signals that are ubiquitous in urban environments, without affecting the original radio transmissions. Results are published in a paper to be presented in Boston at the 14th USENIX Symposium on Networked Systems Design and Implementation in March.
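A toy numerical sketch of the backscatter idea described above: a reflector switches its antenna state in time with the data, mixing a low-frequency tone into the ambient carrier and pushing a faint copy of the message onto an adjacent, unoccupied frequency. All numbers here are illustrative, not the UW system's parameters.

```python
import math

CARRIER_HZ = 100.0   # stand-in for an ambient FM carrier
OFFSET_HZ = 10.0     # unoccupied band where the message lands
SAMPLE_RATE = 1000   # samples per second
DURATION = 1.0       # seconds

def backscatter(bits, bit_rate=10):
    """Reflect the carrier, toggling the reflector on/off per data bit."""
    n = int(SAMPLE_RATE * DURATION)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        carrier = math.cos(2 * math.pi * CARRIER_HZ * t)
        bit = bits[int(t * bit_rate) % len(bits)]
        # Switching the reflector mixes the carrier with the offset tone,
        # creating weak sidebands at CARRIER_HZ ± OFFSET_HZ while leaving
        # the original broadcast essentially untouched.
        reflector = bit * math.cos(2 * math.pi * OFFSET_HZ * t)
        out.append(carrier + 0.1 * reflector * carrier)
    return out

samples = backscatter([1, 0, 1, 1])
print(len(samples))  # → 1000
```

The design point: the reflector adds almost nothing in power, because it reshapes energy the radio tower already transmitted.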

So government agents can cover cellphone frequencies with Stingray (“cell site simulators”) devices.

Wonder if they can cover the entire FM band? 😉

I’m guessing not. You?

Imagine a phone or shirt that is tuned to the frequency of a covert FM transmitter at a particular location. The information is just hanging out there, but unless the “right” receiver walks by, it’s never known to anyone.

Ideal for messages directing public gatherings with near zero risk of interception by, shall we say, unfriendly parties?

Or other types of messages: imagine a singing dead drop, as it were. You move away, the song goes away.

Enjoy!

Covering Trump: … [LiveStream, 3 March 2017]

Filed under: Journalism,News,Reporting — Patrick Durusau @ 9:02 pm

Covering Trump: What Happens When Journalism, Politics, and Fake News Collide by Shelley Hepworth.

From the post:

AFTER SIX WEEKS OF HIS PRESIDENCY, the media covering Trump’s administration is beginning to get a feel for the challenges that lie ahead. The president has labeled the press “the enemy of the American people” and excluded some news outlets from briefings; the First Amendment feels like it’s under threat; and fake news and “alternative facts” abound. The unorthodox nature of this environment has raised questions: How important are press briefings? What are the ethics of using anonymous sources and leaked data? And how should we respond to a disinformation campaign targeted at the media?

To get a handle on this, the Columbia Journalism Review has partnered with Reuters and The Guardian to bring together some of the best minds in the business for a one-day conference on Friday, March 3, Covering Trump: What Happens When Journalism, Politics, and Fake News Collide. The event includes panel discussions on press coverage in a no-access era, the rise of fake news, investigating Trump’s connections to Russia, and the ethics of reporting on data leaks. There will also be a lunchtime keynote with New Yorker Editor in Chief David Remnick in conversation with Columbia Journalism School Dean Steve Coll.

The conference will be livestreamed on this page from 10:30 am Friday, and we invite viewers to join in the conversation on Twitter using the hashtag #coveringtrump.
… (emphasis in original)

Casablanca fans will recognize that:

I am shocked, shocked to learn [the government routinely lies to and about the press]

Still, media resistance to government, belated though it may be, is appreciated.

Catch this discussion live and carry the discussion forward in groups both in and out of the media.

March 1, 2017

Oil Pipeline Operator Directory (For Activists)

Filed under: #DAPL,Protests — Patrick Durusau @ 5:19 pm

Activists may have occasions to contact oil pipeline operators.

Failing to find a convenient directory of oil pipeline operators, I have created the Oil Pipeline Operator Directory (2015), with basic contact information. 2015 is the latest data.

  • OPERATOR_ID – Important because it’s the key to tracking an operator
  • COMPANY_NAME
  • STREET
  • CITY
  • STATE
  • ZIP
  • TITLE (PREPARER) Person preparing form 7000-1.1 (Rev. 06-2014).
  • NAME (PREPARER)
  • EMAIL (PREPARER)
  • PHONE (PREPARER)
  • FAX (PREPARER)
  • NAME (SIGNER) Person certifying the information on the form.
  • TITLE (SIGNER)
  • EMAIL (SIGNER)
  • PHONE (SIGNER)

I preserved the distinction between the person preparing Form 7000-1.1 and the person signing it for the operator.

I haven’t found a convenient mapping of operators to pipelines. The data exists; I have seen maps drawn using the Operator_ID to identify pipelines, but I haven’t run it to ground yet.

Creation of the Directory

I started with the 2015 report in Hazardous Liquid Annual Data – 2010 to present (ZIP) hosted at Distribution, Transmission & Gathering, LNG, and Liquid Annual Data by the Pipeline and Hazardous Materials Safety Administration (PHMSA). Using sheet HL AR Part N to O I deleted columns A, B, C, D, E, L and R, then sorted by operator number and deleted duplicates. Where one duplicate was more complete, I kept the more complete duplicate.
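The de-duplication step above, keeping the more complete of two rows for the same operator, can be automated. A minimal sketch using the stdlib `csv` module; the columns shown are a subset of the directory's field list, and "completeness" is approximated as the count of non-empty fields.

```python
import csv
import io

def dedupe_by_operator(csv_text):
    """Keep one row per OPERATOR_ID, preferring the most complete row."""
    best = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = row["OPERATOR_ID"]
        filled = sum(1 for v in row.values() if v.strip())
        if key not in best or filled > best[key][0]:
            best[key] = (filled, row)  # more complete duplicate wins
    return [row for _, row in
            sorted(best.values(), key=lambda pair: pair[1]["OPERATOR_ID"])]

# Toy input with one duplicate operator; the row with CITY filled in wins.
raw = """OPERATOR_ID,COMPANY_NAME,CITY
1,Acme Pipeline,
1,Acme Pipeline,Houston
2,Delta Lines,Tulsa
"""
rows = dedupe_by_operator(raw)
print([r["CITY"] for r in rows])  # → ['Houston', 'Tulsa']
```

Scripting the step also makes the directory reproducible when PHMSA posts the next annual data set.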

There is a wealth of data at Pipeline and Hazardous Materials Safety Administration (PHMSA). Is there any of it in particular which if made more easily accessible would be more important or useful?

Suggestions on what data and how to make it more useful?
